Search Results

Search found 88095 results on 3524 pages for 'code droid'.


  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low-level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self-contained HTTP output. An ASP.NET handler captures the full output, then shoves the result down the ASP.NET Response object pipeline, writing the content into Response.OutputStream and separately sending the HTTP headers via the Response.Headers collection. The headers turned out to be the problem, and specifically HTTP cookies, which for some reason ended up getting stripped out in some scenarios. My handler works like this: the HTTP response from the backend app returns a full set of HTTP headers plus the content. The ASP.NET handler reads the headers one at a time and then dumps them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the HTTP handler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario. It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically, I have a couple of headers, including a Set-Cookie header, and some output that gets written into the Response object:

        using System.Web;

        namespace wwThreads
        {
            public class Handler : IHttpHandler
            {
                /* NOTE:
                 *
                 * Run as a web.config set handler (see entry below)
                 *
                 * Best way is to look at the HTTP headers in Fiddler
                 * or Chrome/FireBug/IE tools and look for the
                 * WWTHREADSID cookie in the outgoing Response headers.
                 * (If the cookie is not there you see the problem!)
                 */
                public void ProcessRequest(HttpContext context)
                {
                    HttpRequest request = context.Request;
                    HttpResponse response = context.Response;

                    // If ClearHeaders is used, the Set-Cookie header gets removed!
                    // If commented out, the header is sent...
                    response.ClearHeaders();
                    response.ClearContent();

                    // Demonstrate that other headers make it
                    response.AppendHeader("RequestId", "asdasdasd");

                    // This cookie gets removed when ClearHeaders above is called.
                    // When ClearHeaders is omitted above, the cookie renders.
                    response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

                    // *** This always works, even when the explicit
                    //     Set-Cookie above fails and ClearHeaders is called
                    //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

                    response.Write(@"Output was created.<hr/>
                        Check output with Fiddler or an HTTP proxy to see whether the cookie was sent.");
                }

                public bool IsReusable
                {
                    get { return false; }
                }
            }
        }

    In order to see the problem behavior this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with:

        <add name=".ck_handler"
             path="handler.ck"
             verb="*"
             type="wwThreads.Handler"
             preCondition="integratedMode" />

    Note: Oddly enough this problem manifests only when configured through web.config, not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller. What's the problem exactly? The code above simulates the more complex code in my live handler that picks up the HTTP response from the backend application and then peels out the headers and sends them one at a time via Response.AppendHeader. Among the headers in my app there can be one or more Set-Cookie headers. I found that the Set-Cookie headers were not making it into the Response headers output.
    Here's the Chrome HTTP inspector trace: notice, no Set-Cookie header in the Response headers! Running the very same request after removing the call to Response.ClearHeaders(), the cookie header shows up just fine. As you might expect, it took a while to track this down. At first I thought my backend was not sending the headers, but after closer checks I found that the headers were indeed set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

    which in my live code is more dynamic (i.e. AppendHeader(token[0], token[1])) as it parses through the headers.

    Bizarro Land: Response.ClearHeaders() causes the cookie to get stripped

    Now, here is where it really gets bizarre. The problem occurs only if Response.ClearHeaders() was called before the headers are added, and it occurs only in HTTP handlers declared in web.config. Clearly this is an edge of an edge case, but of course - knowing my relationship with Mr. Murphy - I ended up running smack into this problem. So, in the code above, if you remove the call to ClearHeaders(), the cookie gets set! Add it back in and the cookie is not there. If I run the above code in an ASHX handler, it works. If I paste the same code (with a Response.End()) into an ASPX page or MVC controller, it all works. Only in the HttpHandler configured through web.config does it fail! Cue the Twilight Zone music.

    Workarounds

    As is often the case, the fix for this - once you know the problem - is not too difficult. The difficulty lies in tracking inconsistencies like this down. Luckily there are a few simple workarounds for the cookie issue.

    Don't use AppendHeader for cookies

    The easiest and most obvious solution is simply to not use Response.AppendHeader() to set cookies. Duh! Under normal circumstances in application-level code there's rarely a reason to write out a cookie like this:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

    but rather create the cookie using the Response.Cookies collection:

        response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

    Unfortunately, in my case - where I dynamically read headers from the original output and then programmatically write the header key/value pairs back into the Response.Headers collection - I don't actually look at each header specifically, so the cookie is just another header. My first thought was to simply trap for the Set-Cookie header, parse out the cookie and create an HttpCookie object instead. But given that cookies can have a lot of different options, this is not exactly trivial; plus, I don't really want to mess around with cookie values, which can be notoriously brittle.
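    For completeness, here is roughly what that trapping approach might look like - a minimal, hypothetical sketch that handles only the name/value pair and the path attribute, nowhere near the full range of cookie options mentioned above:

        using System;
        using System.Linq;
        using System.Web;

        // Hypothetical helper: converts a raw Set-Cookie header value into an
        // HttpCookie. Expires, domain, secure, HttpOnly etc. are NOT handled.
        private static HttpCookie ParseSetCookieHeader(string headerValue)
        {
            string[] parts = headerValue.Split(';');

            // The first segment is always name=value
            int eq = parts[0].IndexOf('=');
            var cookie = new HttpCookie(
                parts[0].Substring(0, eq).Trim(),
                parts[0].Substring(eq + 1).Trim());

            // Scan the remaining segments for a path attribute
            foreach (string part in parts.Skip(1))
            {
                string[] pair = part.Split(new[] { '=' }, 2);
                if (pair.Length == 2 &&
                    pair[0].Trim().Equals("path", StringComparison.OrdinalIgnoreCase))
                {
                    cookie.Path = pair[1].Trim();
                }
            }

            return cookie;
        }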
    Don't use Response.ClearHeaders()

    The real mystery in all this is why calling Response.ClearHeaders() causes a cookie value later written with Response.AppendHeader() to get stripped. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders. There's all sorts of resetting going on, but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low-level interface directly into IIS, so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft and the IIS team would have to comment on that.

    In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders()/ClearContent() calls were mainly for safety, and after reviewing my code there really should never be a reason for headers to be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to clear them out manually:

        private void RemoveHeaders(HttpResponse response)
        {
            List<string> headers = new List<string>();
            foreach (string header in response.Headers)
            {
                headers.Add(header);
            }

            foreach (string header in headers)
            {
                response.Headers.Remove(header);
            }

            response.Cookies.Clear();
        }

    Now I can replace the call to Response.ClearHeaders(), and I don't get its funky side effects.

    Summary

    I realize this is a total edge case, as it occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher-level ASP.NET frameworks, or even in ASHX handlers - only in web.config-defined handlers - which is really, really odd. After all, those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. In any case, there are workarounds should you run into this, although I bet when you do, it'll likely take a bit of time to find the problem - or even this post in a search - because it's not easy to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution, I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, an extreme edge case, but I'm writing this up here as documentation for myself and possibly some others that might have run into this.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in ASP.NET  IIS7

    Read the article

  • Cool SQL formatter tool

    - by AndyScott
    I have to deal with all types of code written by people from different organizations and different countries, using different languages, so obviously standards differ across these sources. One of the biggest headaches I have come across is how people differ in the formatting of their SQL statements, specifically stored procs. When you regularly get over 500 lines in a sproc, if the code is not formatted correctly, you can get lost trying to figure out where one nested BEGIN begins and another nested END ends. One of my co-workers showed me a site today that does a pretty damn good job of making sense of that type of code: http://www.dpriver.com/pp/sqlformat.htm. This is a free website that offers a box to enter your nasty code; click "Format SQL" and have it cleaned for you. I am sure there are situations where this may not work, but given the code I have been working with recently, it does a really good job. There is a pay version with more options, including a VS add-in, a desktop component (with quick keys to clean text in programs like Notepad), the ability to output in HTML, and other stuff. Heck, I watched a demo where the purchased version took formatted SQL code and turned it into a generic StringBuilder object embedded in formatted code. Yes, this seems like a shameless plug, but no, I have no relation to or knowledge of anyone involved in the development of this product; it just seems useful. Either way, I recommend checking out the free version.

    Read the article

  • C#.NET vs VB.NET: which language is better?

    Features: I cannot say any language is good or bad, as long as its compiler can produce MSIL that runs under the .NET CLR. If someone says C# has more features, you should understand that those new features belong to the C# compiler, not to .NET itself, because if C# had a feature specific to it, the CLR could not understand it. The new features of C# have to be converted to code understood by the CLR eventually; that means the new features exist in the C# compiler basically to help the developer write their code in a better way. So there is no difference in the feature list between C# and VB.NET if you think from the CLR's perspective. Ease of writing code: I feel writing code in C# is easy, because my background is C, C++ and Java, and the syntaxes are very similar. I assume most developers feel the same. Readability: But some people say VB.NET code is the most readable for team members who come from a non-technical background, because its keywords are generally English words rather than special characters. Number of projects in the market: I assume 80 percent of the market uses C# in their .NET development. For example, in my company there are many .NET projects and all are using C#. Productivity and experience: Though the feature list is the same, developers generally want to write code in their familiar language, because it increases productivity. Hope this helps you choose the language that suits you.
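    To illustrate why compiler features don't require new CLR features, consider C#'s using statement - a sketch of how the compiler lowers it into plain constructs the CLR already understands (simplified; the actual emitted code differs in details):

        // What you write:
        using (var reader = new System.IO.StreamReader("data.txt"))
        {
            System.Console.WriteLine(reader.ReadLine());
        }

        // Roughly what the compiler emits for the CLR:
        var reader2 = new System.IO.StreamReader("data.txt");
        try
        {
            System.Console.WriteLine(reader2.ReadLine());
        }
        finally
        {
            if (reader2 != null)
                ((System.IDisposable)reader2).Dispose();
        }

    The CLR only ever sees the try/finally form; "using" exists purely for the developer's convenience, which is the point made above.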

    Read the article

  • BDD (Behavior-Driven Development) tools for .Net

    - by tikrimi
    For several years I have used TDD (Test-Driven Development) to produce code, and I no longer plan to work without it. The use of TDD significantly increases code quality, but it does not guarantee that the code corresponds to the requirements specifications (you write the "right code" with BDD, as opposed to writing the "code right" with TDD). Dan North described the foundations of BDD (Behavior-Driven Development) in an article published in 2006. In that article he introduces the "Given When Then" formalism, which is used in all tools dedicated to BDD. This is a short list of open source BDD tools that you can use with .Net: SpecFlow: here you can find an article in MSDN Magazine and 2 webcasts (http://channel9.msdn.com/posts/ASPNET-MVC-With-Community-Tools-Part-2-Spec-Flow-and-WatiN and http://channel9.msdn.com/posts/ASPNET-MVC-With-Community-Tools-Part-3-More-Spec-Flow-and-WatiN) published on Channel9. NSpec: this is certainly the most used project; there are many examples on the web. StoryQ: this project is hosted on CodePlex; it is very simple to implement and very useful.
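    To make the formalism concrete, here is a hedged sketch of what a SpecFlow scenario and its C# step bindings can look like - the account feature, numbers and step names are invented for the example:

        // Feature file (Gherkin) - Withdraw.feature:
        //
        //   Scenario: Withdrawing cash reduces the balance
        //     Given an account with a balance of 100
        //     When I withdraw 30
        //     Then the balance should be 70

        using NUnit.Framework;
        using TechTalk.SpecFlow;

        [Binding]
        public class WithdrawSteps
        {
            private decimal _balance;

            [Given(@"an account with a balance of (.*)")]
            public void GivenAnAccountWithABalanceOf(decimal amount)
            {
                _balance = amount;
            }

            [When(@"I withdraw (.*)")]
            public void WhenIWithdraw(decimal amount)
            {
                _balance -= amount;
            }

            [Then(@"the balance should be (.*)")]
            public void ThenTheBalanceShouldBe(decimal expected)
            {
                Assert.AreEqual(expected, _balance);
            }
        }

    The scenario text doubles as living documentation that a domain expert can read, while the bindings keep it executable.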

    Read the article

  • When has too much encapsulation been reached?

    - by Samuel
    Recently, I read a lot of good articles about how to do encapsulation well. And when I say "good encapsulation", I don't mean hiding private fields behind public properties; I mean preventing users of your API from doing the wrong things. Here are two good articles on this subject: http://blog.ploeh.dk/2011/05/24/PokayokeDesignFromSmellToFragrance.aspx http://lostechies.com/derickbailey/2011/03/28/encapsulation-youre-doing-it-wrong/ At my job, the majority of our applications are not intended for other programmers but rather for customers. About 80% of the application code sits at the top of the structure (not used by other code). For this reason, there is probably no chance that this code will ever be used by another application. An example of encapsulation that prevents the user from doing the wrong thing with your API is returning an IEnumerable instead of an IList when you don't want to give the user the ability to add or remove items in the list (see the sketch below). My question is: when should encapsulation be considered too much object-oriented purism, keeping in mind that each hour of programming is charged to the customer? I want to write good code that is maintainable and easy to read and use, but when this is not a public API (used by other programmers), where do we draw the line between perfect code and not-so-perfect code? Thank you.
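    A minimal C# sketch of the IEnumerable-versus-IList point above (the Order class and its members are made up for the example):

        using System.Collections.Generic;

        public class Order
        {
            private readonly List<string> _items = new List<string>();

            // Exposing IList<string> would let callers add or remove items:
            //   public IList<string> Items { get { return _items; } }

            // Exposing IEnumerable<string> only lets callers read the items.
            public IEnumerable<string> Items
            {
                get { return _items; }
            }

            // Mutation happens through an intention-revealing method instead.
            public void AddItem(string item)
            {
                _items.Add(item);
            }
        }

    The point isn't that callers can't defeat this (a cast can), but that the API no longer invites the wrong usage.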

    Read the article

  • FIX: Visual Studio Post Build Event Returns –1 when it should not.

    - by ChrisD
    I had written a console application that I run as part of my post-build event for other projects. The console application logs a series of messages to the console as it executes. I use the Environment.ExitCode value to specify an error or success condition: when the application executes without issue, the exit code is 0; when there is a problem, it's -1. As part of my logging, I log the value of the exit code right before the application terminates. When I run this executable from the command line, it behaves as it should; error scenarios return -1 and success scenarios return 0. When I run the same command line as part of the post-build event, Visual Studio reports the exit code as -1, even when the application reports the exit code as 0. A snippet of the build output follows:

        Verbose: Exiting with ExitCode=0
        C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(3397,13): error MSB3073: The command ""MGC.exe" "-TargetPath=C:\TFS\Solutions\Research\Source\Framework\Services\Identity\STS\_STSBuilder\bin\Debug\_STSBuilder.dll"
        C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(3397,13): error MSB3073:
        C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(3397,13): error MSB3073: " exited with code -1.

    The application returns a 0 exit code, but Visual Studio is reporting an error. Why? The answer is in the way I format my log messages. Apparently Visual Studio watches the messages that get streamed to the output console. If those messages match a pattern Visual Studio uses to communicate errors, Visual Studio assumes an error has occurred in the executable and returns a -1. This post details the formats used by Visual Studio to determine error conditions. In my case, the presence of the colon was tripping up Visual Studio. I replaced all occurrences of the colon with an equal sign, and Visual Studio once again respected the exit code of the application:

        Verbose= Exiting with ExitCode=0
        ========== Build: 3 succeeded or up-to-date, 0 failed, 0 skipped ==========
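    Here is a minimal, hypothetical sketch of the resulting pattern - signalling failure through the exit code alone and keeping colons out of the log prefix so output lines can't match the error format Visual Studio scans for:

        using System;

        internal static class PostBuildTool
        {
            private static int Main(string[] args)
            {
                try
                {
                    // ... real post-build work would happen here ...

                    // Note '=' rather than ':' after the severity token, so this
                    // line cannot match the "category: message" error pattern
                    // that Visual Studio watches build output for.
                    Console.WriteLine("Verbose= Exiting with ExitCode=0");
                    return 0; // success: becomes the process exit code
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Verbose= Exiting with ExitCode=-1 ({0})", ex.Message);
                    return -1; // failure: MSBuild sees a non-zero exit code
                }
            }
        }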

    Read the article

  • Patterns & Practices: Composite Services CTP2 is Public

    - by HernanDL
    Finally, the last CTP and pre-release version of the Composite Services is out. There were quite a lot of changes since CTP1. We added many new samples and many enhancements to the repository (DB), which is now called Inventory, in sync with SOA Patterns. Here is a brief list of the main changes according to the included documentation.

    Changes and additions in this release: This CTP release contains reusable source code and samples to illustrate implementation of the following patterns and scenarios:
    - Repair and Resubmit: this pattern is implemented in ESB Toolkit 2.0 as part of the Exception Management Framework (EMF). This code drop provides a code sample showing how to implement this pattern for a Windows AppFabric workflow service, using the Exceptions web service and workflow activities to create a fault message, which will be created in the EMF database.
    - Analytic Tracing: this code drop contains reusable code and samples for implementing ETW tracing: an event collector service and a database that stores collected events. This capability may be used for scenarios that need flexibility in how collected events are decoded and processed, via extensibility points you can configure and implement - plugins and event decoders - while leveraging the ETW tracing capabilities provided by the event collector service.
    - Inventory Centralization: this code drop contains a service catalog database, web services and samples showing how to implement the Metadata Centralization, Schema Centralization and Policy Centralization patterns.
    - Service Virtualization: we included a sample implementing this pattern using the WCF routing service (which is part of the .NET Framework) and the service metadata centralization capabilities to define routing service metadata in the service catalog.
    - Termination Notification: we included a sample implementing this pattern using a sample WCF service and the policy centralization capabilities provided by this CTP release.

    You will also find many new videos that will be uploaded to the home page soon. Stay tuned for new posts regarding implementation details and advanced customizations for custom policy exporters/importers and monitoring.

    Read the article

  • "Integratable" but not "integrated" GPL

    - by mgibsonbr
    There has been much debate over whether or not merely linking to a piece of code makes it a derivative work. I know the FSF says "yes", so according to them I can't dynamically link a non-GPL-compatible program to a GPL library and distribute the whole. But I could do that for private use, as long as no code is released to the public. That made me wonder: what if I don't redistribute the GPL code at all? If my program can work alone (reinforcing my claim that it's not a derivative work), but can do more if the GPL library is also installed on the system, couldn't I just release my application under my own licensing terms - without including any GPL code - and post instructions for anyone interested to separately download the GPL code and do the integration "for their private use"? I know it's against the "spirit" of the GPL, so I'm not suggesting it's a good idea to do that. However, this question has been bugging me for some time, especially because of the implications of each answer. If I can not do that: can I write another library with a similar API? (Before answering "of course you can", remember that having the same API would allow both libraries to be swapped at will by my customers - so I wouldn't need to work too hard on my library or even make it actually work. How do you determine whether a similar program is just similar or is a circumvention attempt?) If I can do that: can I also be paid to perform the service of installing the GPL library for a customer? (I sell them my program, install it on their machines, then download and install the GPL library too.) Can I put the two programs on the same website? On two different CDs? (I know I said the idea was not to redistribute the GPL code; I'm just thinking of excuses people could use to claim they're not redistributing even though they are.)

    Read the article

  • Install Adobe AIR on Ubuntu/Linux

    Quite some time ago, Adobe released the Linux version of Adobe AIR, bringing web applications and widgets to your desktop. Installing new applications on a Linux system is not always as easy as switching the computer on, so the following instructions might be helpful to install Adobe AIR on any Linux system. First of all, get the latest installer of Adobe AIR from http://get.adobe.com/air/ - as of writing this article the file name is AdobeAIRInstaller.bin. Save the download in your preferred folder. Now, there are two ways to run the installer - visual style or console style.

    Visual installation: Launch your favorite or standard file manager, like Thunar or Nautilus, and browse to the folder where AdobeAIRInstaller.bin has been saved. Right-click on the file and choose 'Properties' in the context menu. Set 'Execute' permissions and confirm the modification with OK. Rename the file to AdobeAIRInstaller. Double-click it and follow the instructions.

    Using the console: Open a terminal like xterm and change into the directory where you stored the download. Run this command:

        chmod +x AdobeAIRInstaller.bin

    Now run this command:

        sudo ./AdobeAIRInstaller.bin

    The normal installer will open; follow it to install. From now on, whenever you download a .air file, just double-click it and it will be installed.

    Troubleshooting: In case the installation does not start properly, try installing via the console; this gives you more details about the reasons. Should you run into something like this:

        AdobeAIRInstaller.bin: 1: Syntax error: "(" unexpected

    double-check the execute permission of the installer file and try again.

    Read the article

  • Opportunities for Partners with Oracle in the Public Sector - Live Webcast, January 18th

    - by Paulo Folgado
    LIVE WEBCAST - OPPORTUNITIES FOR PARTNERS WITH ORACLE IN THE PUBLIC SECTOR, TUESDAY, JANUARY 18th, 2011. Learn about Oracle's industry strategy for the Public Sector and the resources available to partners. Each webcast will include information and answers to questions from Oracle's public sector leadership for that region.

    AGENDA:
    - Oracle's Public Sector Industry Strategy
    - Oracle Partner Network (OPN) Overview
    - Public Sector Knowledge Zone
    - Specialization
    - Oracle Validated Integration
    - Solution Catalog
    - How to engage with Oracle
    - Questions and Answers

    WEBCAST SCHEDULE AND LOGISTICS: Please attend the webcast for your region on Tuesday, January 18th, and join both the web and audio conferences. All regions use the web conference at https://enablement20.webex.com/ (password Oracle123) and the international toll-free dial-in with conference code 2739403 and security pass code 12345.

    - Asia / Pacific: 10:00 AM Singapore, 1:00 PM Sydney, 2:00 AM GMT - session number 592 054 744
    - Europe / Middle East / Africa: 1:00 PM GMT/London - session number 596 548 609
    - Americas: 1:00 PM Eastern, 10:00 AM Pacific, 6:00 PM GMT - session number 597 728 102

    VISIT THE PUBLIC SECTOR KNOWLEDGE ZONE: Click Here to access the Knowledge Zone.

    Read the article

  • Pythonic Java: yes or no?

    - by OscarRyz
    Python's use of indentation for code scope was initially very controversial and is now considered one of the best features of the language, because it helps (almost by forcing us) to have a consistent style. Well, I saw this post http://bit.ly/hmvTe9 where someone posted Java code with ; and {} aligned to the right margin to look more Pythonic. It was very shocking at first (as a matter of fact, if I ever saw Java code like that in one of my projects I would be scared!). However, there is something interesting here. Do we need all those braces and semicolons? How would the code look without them?

        class Person
            int age

            void greet( String a )
                if( a == "" )
                    out.println("Hello stranger")
                else
                    out.printf("Hello %s%n", a )

            int age()
                return this.age

        class Main
            void main()
                new Person().greet("")

    Looks good to me, but in such a small piece of code it is hard to appreciate, and since I don't use Python much, I can't tell by looking at existing libraries whether it would be cleaner or not. So I took the first file of a library named jAlarms that I found, and this is the result (WARNING: the following may be disturbing for some people): http://pxe.pastebin.com/eU1R4xsh Obviously it doesn't compile. This would be a compiling version using right-aligned {} and ;: http://pxe.pastebin.com/2uijtbYM Question: What would happen if we could code like this? Would it make things clearer? Would it make them harder? I see braces and semicolons as help for the parser, and we as humans have gotten used to them, but do we really need them? I guess it is hard to tell, especially since many mainstream languages do use braces: C, C++, Java, C#, JavaScript. Assuming the compiler wouldn't have problems without them, would you use them? Please comment.

    Read the article

  • Bad style programming: am I asking too much?

    - by Luca
    I've realized that I work in an office with a quite bad code base. The base library, implemented over years and years, is quite limited, and most of that code is, honestly, horrible. Projects developed in the office are very large. Fine. I could call myself a "perfectionist" (though often I'm not), and I thought about refactoring an application (really a portion of one) which needs a new (complex) feature. But today I really realized that it's not possible to refactor those application modules in a reasonable time (say, 24/26 hours, against the available time for the task, which is 160 hours). I'm talking about (I am a bit ashamed to say) name collisions, large and frequent cut & paste code, horrible and misleading naming, makefiles without dependencies (!), application logic spread randomly across many different sources, dead code, variable aliasing, no assertions, no documentation, very long source files, bad/incomplete include file definitions, (this is emblematic!) very frequent extern declarations of variables and functions, ... and I'm sure I could continue ... buffer overflows because of sprintf, indentation (!), spacing, non-existent const modifier usage. I would say that every source line was written quite randomly when needed, without keeping any design in mind (at least, the obvious one). (Am I in hell?) The problem arises because the application is developed by a colleague of mine. I felt very frustrated, so I decided to expose the "situation" to my colleague; in the end, that was a bad idea. He justified himself by saying that "the application was developed in haste, so it is natural that it is written vaguely; you are wasting time thinking about and implementing an elegant implementation". ... Am I asking too much of my colleague to write readable code that is managed and documented? Do I expect too much in not having to read thousands of lines of code to understand a particular piece of logic?

    Read the article

  • Is there an established or defined best practice for source control branching between development and production builds?

    - by Matthew Patrick Cashatt
    Thanks for looking. I struggled with how to phrase my question, so let me give an example in hopes of making clearer what I am after: I currently work on a dev team responsible for maintaining and adding features to a web application. We have a development server and we use source control (TFS). Each day everyone checks in their code, and when the code (running on the dev server) passes our QA/QC program, it goes to production. Recently, however, we had a bug in production which required an immediate production fix. The problem was that several of us developers had code checked in that was not ready for production, so we had to either quickly complete and QA the code, or roll back everything, undo pending changes, etc. In other words, it was a mess. This made me wonder: is there an established design pattern that prevents this type of scenario? It seems like there must be some "textbook" answer to this, but I am unsure what that would be. Perhaps a development branch of the code and a "release-ready" or production branch of the code?
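    One textbook-style answer is a tiered branch plan in which stabilization flows one way and hotfixes never pick up unfinished work. As a rough sketch (a common convention, not the only valid layout):

        Main       - always release-ready; production builds come from here
        ├─ Dev     - daily check-ins; merged up to Main only after QA/QC
        └─ Release - branched from Main per release; production hotfixes
                     are made here, then merged back to Main and Dev

    With a layout like this, an urgent production fix is made on the Release branch against exactly the code that is in production, so half-finished work sitting in Dev never has to be completed or rolled back in a hurry.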

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is used equally by developers and domain experts, to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, in conversations between any team members, in functional specs, and whatnot. So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code entirely in English, including comments, but of course excluding constants and resources. However, in a non-English domain, I'm forced to decide between the following options:

    1) Write code reflecting the Ubiquitous Language in the natural language of the domain.
    2) Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain.
    3) Define a table that maps the Ubiquitous Language to English.

    Here are some of my thoughts based on these options: 1) I have a strong aversion to mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent, and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages. 2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly. 3) In this case, the developers communicate with the domain experts in their native language, while the developers communicate with each other in English and, most importantly, write code using the English translation of the UL. I'm sure I don't want to go for the first option, and I think option 3 is much better than option 2. What do you think? Am I missing other options?

    Read the article

  • Should one generally develop a client library for REST services to help prevent API breakages?

    - by BestPractices
    We have a project where the UI code will be developed by the same team but in a different language (Python/Django) from the services layer (REST/Java). The code for each layer exists in a different repository, and each can follow a different release cycle. I'm trying to come up with a process that will prevent/reduce breaking changes in the services layer from the perspective of the UI layer. I've thought of writing integration tests at the UI-layer level that we'll run whenever we build the UI or the services layer (we're using Jenkins as our CI tool to build the code, which is in two Git repos); if there are failures, then something in the services layer broke and the commit is not accepted. Would it also be a good idea (is it a best practice?) for the developer of the services layer to create and maintain a client library for the REST service, living in the UI layer, that they update whenever there is a breaking change in their service API? Conceivably, we would then have the advantage of a statically-typed API that the UI code builds against. If the client library API changes, then the UI code won't compile (so we'll know sooner that there was a breaking change). I'd also still run the integration tests upon building the UI or services layer to further validate that the integration between the UI and the service(s) still works.

    Read the article

  • Learning from jQuery - Solid fundament for experienced jQuery developers

    Frankly speaking, I had to sleep on it for a night before typing this review. And even now it is not an easy, straightforward task to write it. I'm not sure whether I'm the right kind of audience this title is actually addressed to. It clearly states that this book is for web developers who are very familiar with the jQuery library but would like to extend their knowledge to vanilla JavaScript. Not being part of that particular group, it felt strange to go through the various chapters after all. This title is clearly addressed to experienced jQuery users and developers, especially those looking for improvements in performance and better ways of optimisation - sometimes just to simplify existing jQuery code in order to avoid the heavy load of the complete jQuery library, and sometimes for a better understanding of JavaScript and its syntax. Callum's style of writing is clear, and the numerous code samples used to emphasize the various techniques are good ones and easy to understand. Quite interestingly, it put a light smile on my face when I compared his sample code for sending an AJAX request to some code in one of my own blog articles written back in 2006 (in German). JavaScript is clearly a mature language and certain requirements are simply done this way. And Callum explains the nuts and bolts of JavaScript very well. Personally, I gained the most from chapter 5 - JavaScript Conventions. The paragraphs and code snippets on optimizations and common antipatterns gave me a better understanding of various aspects of JavaScript development, and I definitely have to revise a couple of code fragments I have written in the past. Overall, the book provides solid information on JavaScript for jQuery developers and is worth the money spent. Just be sure that you're part of the targeted audience.

    Read the article

  • Does this BSD-like license achieve what I want it to?

    - by Joseph Szymborski
    I was wondering if this license is:
    - self-defeating
    - just a clone of an existing, better-established license
    - practical
    - any more "corporate-friendly" than the GPL
    - too vague/open-ended
    and finally, if there is a better license that achieves a similar effect. I wanted a license that would (in simple terms):
    - be as flexible/simple as the "Simplified BSD" license (which is essentially the MIT license)
    - allow anyone to make modifications as long as I'm attributed
    - require that I get a notification that such a derived work exists
    - require that I have access to the source code and be given license to use the code
    - not oblige the author of the derivative work to release the source code to the general public
    - not oblige the author of the derivative work to license the derivative work under a specific license

    Here is the proposed license, which is just the Simplified BSD with a couple of additional clauses (the third through fifth conditions below):

        Copyright (c) (year), (author) (email)
        All rights reserved.

        Redistribution and use in source and binary forms, with or without
        modification, are permitted provided that the following conditions are met:

        - Redistributions of source code must retain the above copyright notice,
          this list of conditions and the following disclaimer.
        - Redistributions in binary form must reproduce the above copyright notice,
          this list of conditions and the following disclaimer in the documentation
          and/or other materials provided with the distribution.
        - The copyright holder(s) must be notified of any redistributions of
          source code.
        - The copyright holder(s) must be notified of any redistributions in
          binary form.
        - The copyright holder(s) must be granted access to the source code
          and/or the binary form of any redistribution upon the copyright
          holder's request.

        THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    Read the article

  • What's the best structure for a repository?

    - by jpmelos
    I've looked into many open source software repositories, and I've found some common elements and some things people do differently from one another. For example, every repository has a README file, an INSTALL file, a COPYING file and stuff like that. Other things differ: some projects, like git, have their source code at the root level, while others have the source code in a src/ folder, and others, like the Linux kernel, have the source code spread over different folders at the root level that divide the code by area; some have their tests in a t/ folder, while others use a tests/ folder, or name it otherwise; some have files about submitting patches and who the maintainers are, and those might be inside some Documentation/ folder or at the root level. Are there recommendations? A best practice? For example: personally, I don't like the code at the root level, git-fashion. It looks messy and confuses anyone trying to start as a contributor (especially because they have some code inside folders, and scripts at the root level as well; it's really messy). If I were to start a project of my own and wanted to get it right from the start, are there recommendations? Best practices? How can I make a clean and clear structure? Thank you!
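    For what it's worth, a layout that many established projects converge on looks roughly like this (an illustration of common conventions, not a standard):

        project/
        ├─ README        - what the project is, quick start
        ├─ INSTALL       - build and installation steps
        ├─ COPYING       - license text
        ├─ src/          - production source code only
        ├─ tests/        - test suite, mirroring the src/ layout
        ├─ docs/         - documentation, maintainer list, patch guidelines
        └─ tools/        - helper scripts that are not shipped

    Keeping scripts and code out of the root level addresses exactly the git-style messiness mentioned above, and a tests/ directory with a conventional name is easier for new contributors to find than a terse t/.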

    Read the article

  • Working with Timelines with LINQ to Twitter

    - by Joe Mayo
    When first working with the Twitter API, I thought that using SinceID would be an effective way to page through timelines. In practice it doesn't work well, for various reasons; Twitter published an excellent document on the subject that is a must-read for anyone working with timelines: Twitter Documentation: Working with Timelines. This post shows how to implement the recommended strategies in that document by using LINQ to Twitter. You should read the document in its entirety before moving on, because my explanation will start at the bottom and work back up to the top in relation to the Twitter document. What follows is an explanation of SinceID, MaxID, and how they come together to help you efficiently work with Twitter timelines.

    The Role of SinceID

    Specifying SinceID says to Twitter, "Don't return tweets earlier than this". What you want to do is store this value after every timeline query set so that it can be reused on the next set of queries. The next section will explain what I mean by query set, but a quick explanation is that it's a loop that gets all new tweets. The SinceID is a backstop to avoid retrieving tweets that you already have. Here's some initialization code that includes a variable named sinceID that will be used to populate the SinceID property in subsequent queries:

        // last tweet processed on previous query set
        ulong sinceID = 210024053698867204;
        ulong maxID;
        const int Count = 10;
        var statusList = new List<Status>();

    Here, I've hard-coded the sinceID variable, but this is where you would initialize sinceID from whatever storage you choose (i.e. a database). The first time you ever run this code, you won't have a value from a previous query set. Initially setting it to 0 might sound like a good idea, but what if you're querying a timeline with lots of tweets? Because of the number of tweets and rate limits, your query set might take a very long time to run. A caveat might be that Twitter won't return an entire timeline back to Tweet #0, but rather only go back a certain period of time, the limits of which are documented for individual Twitter timeline API resources. So, initializing SinceID at too low a number can result in a lot of initial tweets, yet there is a limit to how far you can go back. What you're trying to accomplish in your application should guide you in how to initially set SinceID. I have more to say about SinceID later in this post. The other variables initialized above include the declarations for maxID, Count, and statusList. The statusList variable is a holder for all the timeline tweets collected during this query set. You can set Count to any value you want as the largest number of tweets to retrieve, as defined by individual Twitter timeline API resources. To effectively page results, you'll use the maxID variable to set the MaxID property in queries, which I'll discuss next.

    Initializing MaxID

    On your first query of a query set, MaxID will be whatever the most recent tweet is that you get back. Further, you don't know what MaxID is until after the initial query. The technique used in this post is to do an initial query and then use the results to figure out what the next MaxID will be.
    Here's the code for the initial query:

        var userStatusResponse =
            (from tweet in twitterCtx.Status
             where tweet.Type == StatusType.User &&
                   tweet.ScreenName == "JoeMayo" &&
                   tweet.SinceID == sinceID &&
                   tweet.Count == Count
             select tweet)
            .ToList();

        statusList.AddRange(userStatusResponse);

        // first tweet processed on current query
        maxID = userStatusResponse.Min(
            status => ulong.Parse(status.StatusID)) - 1;

    The query above sets both SinceID and Count properties. As explained earlier, Count is the largest number of tweets to return, but the number can be less. A couple of reasons why the number of tweets returned could be less than Count include the fact that the user, specified by ScreenName, might not have tweeted Count times yet, or might not have tweeted at least Count times within the maximum number of tweets that can be returned by the Twitter timeline API resource. Another reason could be that there aren't Count tweets between now and the tweet ID specified by sinceID. Setting SinceID constrains the results to only those tweets that occurred after the specified tweet ID, assigned via the sinceID variable in the query above. The statusList is an accumulator of all tweets received during this query set. To simplify the code, I left out some logic to check whether there were no tweets returned - if the query above doesn't return any tweets, you'll receive an exception when trying to perform operations on an empty list. Yeah, I cheated again.

    Besides querying initial tweets, what's important about this code is the final line that sets maxID. It retrieves the lowest-numbered status ID in the results. Since the lowest-numbered status ID is for a tweet we already have, the code decrements the result by one to keep from asking for that tweet again. Remember, SinceID is not inclusive, but MaxID is. The maxID variable is now set to the highest possible tweet ID that can be returned in the next query. The next section explains how to use MaxID to help get the remaining tweets in the query set.

    Retrieving Remaining Tweets

    Earlier in this post, I defined a term that I called a query set. Essentially, this is a group of requests to Twitter that you perform to get all new tweets. A single query might not be enough to get all new tweets, so you'll have to start at the top of the list that Twitter returns and keep making requests until you have all new tweets. The previous section showed the first query of the query set. The code below is a loop that completes the query set:

        do
        {
            // now add sinceID and maxID
            userStatusResponse =
                (from tweet in twitterCtx.Status
                 where tweet.Type == StatusType.User &&
                       tweet.ScreenName == "JoeMayo" &&
                       tweet.Count == Count &&
                       tweet.SinceID == sinceID &&
                       tweet.MaxID == maxID
                 select tweet)
                .ToList();

            if (userStatusResponse.Count > 0)
            {
                // first tweet processed on current query
                maxID = userStatusResponse.Min(
                    status => ulong.Parse(status.StatusID)) - 1;

                statusList.AddRange(userStatusResponse);
            }
        }
        while (userStatusResponse.Count != 0 && statusList.Count < 30);

    Here we have another query, but this time it includes the MaxID property. The SinceID property prevents reading tweets that we've already read, and Count specifies the largest number of tweets to return. Earlier, I mentioned how important it is to check how many tweets were returned, because failing to do so will result in an exception when subsequent code runs on an empty list. The code above protects against this problem by only working with the results if Twitter actually returns tweets.
    Reasons why there wouldn't be results include: the first query got all the new tweets, so there aren't more to get; or there were no new tweets between the SinceID and MaxID settings of the most recent query. The code for loading the returned tweets into statusList and getting the maxID is the same as previously explained. The important point here is that MaxID is being reset, not SinceID. As explained in the Twitter documentation, paging occurs from the newest tweets to the oldest, so setting MaxID lets us move from the most recent tweets down to the oldest, as specified by SinceID. The two loop conditions cause the loop to continue as long as tweets are being read or until a maximum number of tweets has been read. Logically, you want to stop reading when you've read all the tweets, and that's indicated by the fact that the most recent query did not return results. I put in the check to stop after 30 tweets to keep the demo from running too long - in the console the response scrolls past the available buffer and I wanted you to be able to see the complete output. Yet there's another point to be made about constraining the number of items you return at one time: the Twitter API has rate limits, and making too many queries per minute will result in an error from Twitter, which LINQ to Twitter raises as an exception. To use the API properly, you'll have to ensure you don't exceed this threshold. Looking at statusList.Count as done above is rather primitive, but you can implement your own logic to properly manage your rate limit. Yeah, I cheated again.

    Summary

    Now you know how to use LINQ to Twitter to work with Twitter timelines. After reading this post, you have a better idea of the role of SinceID - the oldest tweet already received. You also know that MaxID is the largest tweet ID to retrieve in a query. Together, these settings allow you to page through results via one or more queries. You also understand what factors affect the number of tweets returned and considerations for potential error-handling logic. The full example of the code for this post is included in the downloadable source code for LINQ to Twitter.

    @JoeMayo

    Read the article

  • CheckMemoryAllocationGame Sample

    - by Michael B. McLaughlin
    Many times I've found myself wondering how much GC memory some operation allocates. This is primarily in the context of XNA games, due to the desire to avoid generating garbage and thus triggering a GC collection. Many times I've written simple programs to check allocations. I did it again recently. It occurred to me that many XNA developers find themselves asking this question from time to time, so I cleaned up my sample and published it on my website. Feel free to download it and put it to use. It's rather thoroughly commented. The location where you insert the code you wish to check is in the Update method found in Game1.cs. The default that I put in is a line of code that generates a new Guid using Guid.NewGuid (which, if you're curious, does not create any heap allocations). Read all of the comments in the Update method (at the very least) to make sure that your code is measured properly. It's important to make sure that you meaningfully reference anything you create after the second call to get the memory, or else (in Release configuration at least) you will likely get incorrect results. Anyway, it should make sense when you read the comments, and if not, feel free to post a comment here or ask me on Twitter. You can find my utilities and code samples page here: http://www.bobtacoindustries.com/developers/utils/Default.aspx To download CheckMemoryAllocationGame's source code directly: http://www.bobtacoindustries.com/developers/utils/CheckMemoryAllocationGame.zip (If you're looking to do this outside of the context of an XNA game, the measurement code in the Update method can easily be adapted into, e.g., a C# Windows console application. In the past I mostly did that, actually. But I didn't feel like adding references to all the XNA assemblies this time and... anyway, if you want, you can easily convert it to a console application. If there's any demand for it, I'll do it myself and update this post when I get a chance.)
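    The downloadable sample is the authoritative version, but the general before/after measurement pattern it is built around looks roughly like this (a hedged sketch, not the sample's exact code):

        using System;

        static class AllocationCheck
        {
            static void Main()
            {
                // Force a full collection first so leftover garbage
                // doesn't skew the numbers.
                long before = GC.GetTotalMemory(forceFullCollection: true);

                // --- code under measurement goes here ---
                Guid g = Guid.NewGuid();
                // -----------------------------------------

                long after = GC.GetTotalMemory(forceFullCollection: false);

                // Reference the result meaningfully so the optimizer can't
                // remove the code under test in Release builds.
                Console.WriteLine("Allocated bytes: {0} (result: {1})",
                                  after - before, g);
            }
        }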

    Read the article

  • My Visual Studio Demo Video Link disappeared – How do I get it back?

    - by Tarun Arora
    ***Special thanks to Adam Cogan for asking this question and to Andrew Bragdon for answering it on the ALM Champs list.***

    1. Problem - the link to demo videos disappears once you have watched the video. Learning Visual Studio has become easier than ever, with the Visual Studio how-to videos hosted inside of Visual Studio showing up in the context of the task you are trying to achieve. For instance, when you click Code Review in Team Explorer you can see the link "Streaming Video: Using Code Review to improve quality"; when you click this link, the video stream is delivered to you right within Visual Studio. The next time you run Visual Studio, you will notice that the home page has a check mark against the video "Using Code Review to improve quality". If you navigate to Code Review in the myWork hub in Team Explorer, you will notice that the link "Streaming Video: Using Code Review to improve quality" does not show up any more.

    2. Solution - how to get the demo video link back. Warning: editing the registry can lead to serious problems if not done correctly. Always back up your registry before editing. This solution is neither suggested nor supported by Microsoft. Type regedit in the Run prompt to open the Registry Editor, then navigate to the path Computer\HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\UltimateStartPage\VideoState and notice the newly created folder "TeamExplorer.CodeReview"; notice the key Watched is set to 1. Change the value of the key 'Watched' to 0, restart Visual Studio, navigate to Code Review in the myWork hub and voila, the link to stream the video is back! Watch and enjoy the demo videos to your heart's content!
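    If you prefer scripting the tweak over hand-editing, a small hypothetical C# sketch using the Registry API against the same key path might look like this (it assumes the Watched value is a DWORD, as the registry screenshot suggests):

        using Microsoft.Win32;

        static class ResetVideoWatchedFlag
        {
            static void Main()
            {
                const string keyPath =
                    @"Software\Microsoft\VisualStudio\11.0\UltimateStartPage\VideoState\TeamExplorer.CodeReview";

                // Open the per-user key writable; null means the video
                // was never watched, so there is nothing to reset.
                using (RegistryKey key = Registry.CurrentUser.OpenSubKey(keyPath, writable: true))
                {
                    if (key != null)
                        key.SetValue("Watched", 0, RegistryValueKind.DWord);
                }
            }
        }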

    Read the article

  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test/interview: Implement the strcpy() function in C:

        void strcpy(char *destination, char *source);

    The strcpy function copies the C string pointed to by source into the array pointed to by destination, including the terminating null character. Assume that the size of the array pointed to by destination is long enough to contain the same C string as source, and does not overlap in memory with source. Say you were the tester: how would you grade the following answers to this question?

    1)

        void strcpy(char *destination, char *source)
        {
            while (*source != '\0')
            {
                *destination = *source;
                source++;
                destination++;
            }
            *destination = *source;
        }

    2)

        void strcpy(char *destination, char *source)
        {
            while (*(destination++) = *(source++))
                ;
        }

    The first implementation is straightforward - it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand how this code works, and if you're not familiar with the operator precedence involved, that's a problem. I'm wondering if the second answer would show more complexity and more advanced thinking in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since making an algorithm this compact is more difficult to implement, it will show a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer not good because it's not readable. I would also like to mention that this is not specific to this example, but general for code readability vs. compactness when implementing an algorithm, specifically in tests/interviews.

    Read the article

  • How would you TDD the functionality of getting the corresponding process of a running Windows service?

    - by Matt Spinelli
    Purpose: Over the last year or more I've been learning unit testing via books I've read recently, like The Art of Unit Testing, Working Effectively with Legacy Code, and others. I've also been using unit tests, mocking frameworks, and the like periodically at work, and I definitely see the value. However, I'm still having a hard time wrapping my mind around TDD (as opposed to TAD) when the situation calls for code that is going to mostly use external API calls.

    Problem to solve: Get the process associated with a Windows service using the service name. Example:

        Function GetProcess(ByVal serviceName As String) As Process

    Rules:
    - Show each major iteration in production & test code using TDD
    - No need to see any other code or configuration required to get things to run; I'm just curious about the interfaces, concrete classes, and test methods
    - C# or VB.NET
    - Must use the .NET Framework regarding services/processes (i.e. System.Diagnostics.Process)
    - Test frameworks: NUnit or MSTest
    - Isolation frameworks: Moq, Rhino Mocks, or Microsoft Moles
    - Must write true unit tests (no integration tests)

    Additional notes: As far as I can tell there are two approaches design-wise. 1) Use an Inversion of Control approach along with the Adapter and/or Facade patterns to wrap the underlying .NET Framework objects dealing with processes and services. 2) Keep the .NET Framework code in the class containing the GetProcess method and use code detouring (interception) via Microsoft Moles to isolate the hard dependencies from the method under test.
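    For the first approach, a hedged sketch of the seam and a first test might look like this - the interface, class names and test are illustrative assumptions, not a canonical answer:

        using System.Diagnostics;
        using Moq;
        using NUnit.Framework;

        // Adapter interface: the only seam the production class depends on.
        public interface IServiceProcessIdProvider
        {
            int GetProcessId(string serviceName);
        }

        public class ServiceProcessLocator
        {
            private readonly IServiceProcessIdProvider _provider;

            public ServiceProcessLocator(IServiceProcessIdProvider provider)
            {
                _provider = provider;
            }

            public Process GetProcess(string serviceName)
            {
                // The lookup of the service's PID is what gets isolated
                // in tests; Process itself stays a direct dependency.
                return Process.GetProcessById(_provider.GetProcessId(serviceName));
            }
        }

        [TestFixture]
        public class ServiceProcessLocatorTests
        {
            [Test]
            public void GetProcess_AsksProviderForThePidOfTheNamedService()
            {
                var provider = new Mock<IServiceProcessIdProvider>();
                // Use the current process id so GetProcessById succeeds.
                provider.Setup(p => p.GetProcessId("MyService"))
                        .Returns(Process.GetCurrentProcess().Id);

                var locator = new ServiceProcessLocator(provider.Object);

                Assert.AreEqual(Process.GetCurrentProcess().Id,
                                locator.GetProcess("MyService").Id);
            }
        }

    A concrete provider could then query WMI (e.g. SELECT ProcessId FROM Win32_Service WHERE Name = '...') behind the same interface; that part would be covered by an integration test rather than a unit test.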

    Read the article

  • Another good free utility - Campwood Software Source Monitor

    - by TATWORTH
    The Campwood SourceMonitor at http://www.campwoodsw.com/sourcemonitor.html says in its introduction: "The freeware program SourceMonitor lets you see inside your software source code to find out how much code you have and to identify the relative complexity of your modules. For example, you can use SourceMonitor to identify the code that is most likely to contain defects and thus warrants formal review. SourceMonitor, written in C++, runs through your code at high speed, typically at least 10,000 lines of code per second." It is indeed very fast, and it is useful as it:
    - Collects metrics in a fast, single pass through source files.
    - Measures metrics for source code written in C++, C, C#, VB.NET, Java, Delphi, Visual Basic (VB6) or HTML.
    - Includes method and function level metrics for C++, C, C#, VB.NET, Java, and Delphi.
    - Offers a Modified Complexity metric option.
    - Saves metrics in checkpoints for comparison during software development projects.
    - Displays and prints metrics in tables and charts, including Kiviat diagrams.
    - Operates within a standard Windows GUI or inside your scripts using XML command files.
    - Exports metrics to XML or CSV (comma-separated-value) files for further processing with other tools.

    Read the article

  • C++ and system exceptions

    - by Abyx
    Why doesn't standard C++ respect system (foreign or hardware) exceptions? E.g. when a null pointer dereference occurs, the stack isn't unwound, destructors aren't called, and RAII doesn't work. The common advice is "use the system API", but on certain systems, specifically Win32, this doesn't work. To enable stack unwinding for this C++ code:

        // class Foo;
        // void bar(const Foo&);
        bar(Foo(1, 2));

    one should generate something like this C code:

        Foo tempFoo;
        Foo_ctor(&tempFoo);
        __try
        {
            bar(&tempFoo);
        }
        __finally
        {
            Foo_dtor(&tempFoo);
        }

    and it's impossible to implement this as a C++ library.

    Update: The standard doesn't forbid handling system exceptions, but it seems that popular compilers like g++ don't respect system exceptions on any platform, simply because the standard doesn't require it. The only thing I want is to use RAII to make code readable and programs reliable. I don't want to put hand-crafted try/finally blocks around every call to unknown code. For example, in this reusable code, AbstractA::foo is such unknown code:

        void func(AbstractA* a, AbstractB* b)
        {
            TempFile file;
            a->foo(b, file);
        }

    Maybe someone will pass to func an implementation of AbstractA that, every Friday, fails to check whether b is NULL, so an access violation happens, the application terminates, and the temporary file is not deleted. How many months will users suffer because of this issue until either the author of func or the author of AbstractA does something about it? Related: Is `catch(...) { throw; }` a bad practice?

    Read the article
