Search Results

Search found 87932 results on 3518 pages for 'code vader'.

Page 21/3518 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Creating a new Guid inside a code snippet using c#

    - by Rob
    I want to make an IntelliSense code snippet (inserted with Ctrl+K, Ctrl+X) that actually executes code when it expands... for example, I would like to use the following: <![CDATA[string.Format("{0:MM/dd/yyyy}", System.DateTime.Now);]]> but rather than inserting that literal string, I want the current date in the specified format. Another example of what I want: create a new Guid but truncate it to the first block of digits, so I would create a new Guid using System.Guid.NewGuid(), which gives me {798400D6-7CEC-41f9-B6AA-116B926802FE} for example, but I only want the value 798400D6 from the code snippet. I'm open to not using an IntelliSense code snippet; I just thought that would be easy.
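
    For reference, a minimal hedged sketch (mine, not from the question) of how the truncated value could be produced in plain C#, independent of the snippet mechanism:

      using System;

      class GuidPrefixDemo
      {
          static void Main()
          {
              // The "N" format yields 32 hex digits with no braces or dashes;
              // the first 8 characters are the leading block the question asks for.
              string prefix = Guid.NewGuid().ToString("N").Substring(0, 8).ToUpperInvariant();
              Console.WriteLine(prefix); // e.g. 798400D6
          }
      }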

    Read the article

  • QR code - where can I find (free) code to embed my own generator on a web page?

    - by Robbert Huisman
    Hi, I couldn't find exactly this in earlier questions, but I am probably repeating one anyway, so apologies upfront ;-) I am looking for simple code to embed a QR (2D barcode) generator on a website I am building. I assume there should be some free, open-source code for that, but I could only find paid software. Can anyone point me in the right direction? It would be much appreciated! Best regards, Robbert

    Read the article

  • any ideas for avoiding duplicate code in C# and javascript

    - by ooo
    I have an ASP.NET MVC website where I build up most of the page using C#, for example building HTML tables from a set of data in my view model. I also have a lot of JavaScript that then dynamically modifies these tables (adding a row, for example). The JavaScript code to add a new row looks extremely similar to the "rendering" code I have in C# that builds up the HTML table in the first place. Every time I change the C# code to add a new field, I have to remember to go back to the JavaScript code and do the same. Is there a better way here?
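
    One common approach (my own hedged sketch, not something proposed in the question) is to render rows in only one place: have the C# side return plain data and let a single piece of client-side code build both the initial rows and any rows added later. Assuming ASP.NET MVC, the server half could be as small as:

      using System.Collections.Generic;
      using System.Web.Mvc;

      public class ProductsController : Controller
      {
          // Returns raw row data; all HTML generation then lives in one client-side template.
          public JsonResult Rows()
          {
              var rows = new List<object>
              {
                  new { Name = "Widget", Price = 9.99 },
                  new { Name = "Gadget", Price = 24.50 }
              };
              // AllowGet is needed for GET requests in ASP.NET MVC 2 and later.
              return Json(rows, JsonRequestBehavior.AllowGet);
          }
      }

    The JavaScript that currently adds rows then becomes the only rendering path, so a new field only has to be added once.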

    Read the article

  • Latex + Source Code Import

    - by KP65
    Hi guys, I'm using LaTeX to write up a program listing of all my code and am following this: http://texblog.wordpress.com/2008/04/02/include-source-code-in-latex-with-listings/ It works, but my code runs off the side of the page. How can I fix this? Thanks

    Read the article

  • Listing C/C++ functions (Code analysis in Unix)

    - by Jond
    Whether we're maintaining unfamiliar code or checking out the implementation details of an Apache module, it helps if we can quickly traverse the code and build up an overview of what we're looking at. Grep serves most of my daily needs, but there are some cases where it just won't do. Here's a common example of how it can help: to find the definition of a PHP function I'm interested in, I can type this at the command line: grep -r "function myfunc" . This could be adapted very quickly to C or C++ if we know the return type, but things become more complicated if, say, I want to list every method that my class provides: grep "function " ./src/mine.class.php Since there's no single keyword that denotes a function or method in C++, and because its syntax is generally more complex, I think I'd need some kind of static code analysis tool, smart use of the C preprocessor, or blind faith that the coder followed strict coding guidelines (amount of whitespace, position of curly braces, etc.) to get these sorts of results. What would you recommend? P.S. Be nice, this is my first post ;-) :p

    Read the article

  • What’s the use of code reuse?

    - by Tony Davis
    All great developers write reusable code, don’t they? Well, maybe, but as with all statements regarding what “great” developers do or don’t do, it’s probably an over-simplification. A novice programmer, in particular, will encounter in the literature a general assumption of the importance of code reusability. They spend time worrying about DRY (don’t repeat yourself), moving logic into specific “helper” modules that they can then reuse, agonizing about the minutiae of the class structure, inheritance and interface design that will promote easy reuse. Unfortunately, writing code specifically for reuse often leads to complicated object hierarchies and inheritance models that are anything but reusable. If, instead, one strives to write simple code units that are highly maintainable and perform a single function, in a concise, isolated fashion, then the potential for reuse simply “drops out” as a natural by-product. Programmers, of course, care about these principles, about encapsulation and clean interfaces that don’t expose inner workings and allow easy pluggability. This is great when it helps with the maintenance and development of code, but how often, in practice, do we actually reuse our code? Most DBAs and database developers are familiar with the practical reasons for the limited opportunities to reuse database code and its potential downsides. However, surely elsewhere in our code base, reuse happens often. After all, we can all name examples, such as date/time handling modules, which if we write with enough care we can plug in to many places. I spoke to a developer just yesterday who looked me in the eye and told me that in 30+ years as a developer (a successful one, I’d add), he’d never once reused his own code. As I sat blinking in disbelief, he explained that, of course, he always thought he would reuse it. He’d often agonized over its design, certain that he was creating code of great significance that he and other generations would reuse, with grateful tears misting their eyes. In fact, it never happened. He had in his head most of the algorithms he needed and would simply write the code from scratch each time, refining the algorithms and tailoring the code to meet the specific requirements. It was, he said, simply quicker to do that than dig out the old code, check it, correct the mistakes, and adapt it. Is this a common experience, or just a strange anomaly? Viewed in a certain light, building code with a focus on reusability seems to hark back to a past age where people built cars and music systems with the idea that someone else could and would replace and reuse the parts. Technology advances so rapidly that the next time you need the “same” code, it’s likely a new technique, or a whole new language, has emerged in the meantime, better equipped to tackle the task. Maybe we should be less fearful of the idea that we could write code well suited to the system requirements, but with little regard for reuse potential, and then rewrite a better version from scratch the next time.

    Read the article

  • Integrating Code Metrics in TFS 2010 Build

    - by Jakob Ehn
    The build process template and custom activity described in this post are available here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/CodeMetricsSample.zip Running code metrics has been available since VS 2008, but only from inside the IDE. Yesterday Microsoft finally released the Visual Studio Code Metrics Power Tool 10.0, a command line tool that lets you run code metrics on your applications. This means that it is now possible to perform code metrics analysis on the build server as part of your nightly/QA builds (for example). In this post I will show how you can run the metrics command line tool, together with a custom activity that reads the output, appends the results to the build log, and fails the build if the metric values exceed certain (configurable) threshold values. The code metrics tool analyzes all the methods in the assemblies, measuring cyclomatic complexity, class coupling, depth of inheritance and lines of code. It then calculates a Maintainability Index from these values, which is a measure of how maintainable each method is, between 0 (worst) and 100 (best). For information on how this value is calculated, see http://blogs.msdn.com/b/codeanalysis/archive/2007/11/20/maintainability-index-range-and-meaning.aspx. After this it aggregates the information and presents it at the class, namespace and module level as well. Running Metrics.exe in a build definition Running the actual tool is easy: just use an InvokeProcess activity last in the Compile the Project sequence, reference the metrics.exe file, pass the correct arguments, and you will end up with a result XML file in the drop directory. Here is how it is done in the attached build process template: In the above sequence I first assign the path to the code metrics result file ([BinariesDirectory]\result.xml) to a variable called MetricsResultFile, which is then sent to the InvokeProcess activity in the Arguments property. Here are the arguments for the InvokeProcess activity: Note that we tell metrics.exe to analyze all assemblies located in the Binaries folder. You might want to do some more intelligent filtering here; you probably don’t want to analyze all 3rd-party assemblies, for example. Note also the path to metrics.exe; this is the default location when you install the Code Metrics power tool. You must of course install the power tool on all build servers. Using the standard output logging (in the Handle Standard Output/Handle Error Output sections), we get the following output when running the build: Integrating Code Metrics into the build Having the results available next to the build result is nice, but we want the results integrated in the build result itself, and also to affect the outcome of the build. The point of having QA builds that measure, for example, code metrics is to make it very clear how the code being built measures up to the standards of the project/company. Just having an XML file available in the drop location will not cause the developers to improve their code, but a (partially) failing build will! To do this, we need to write a custom activity that parses the metrics result file, logs it to the build log and fails the build if the values from the metrics are below/above some predefined threshold values. The custom activity performs the following steps: Parses the XML. I’m using LINQ to XSD for this; since the XML schema for the result file is available, it is very easy to generate code that lets you query the structure using standard LINQ operators.
    Runs through the metric result hierarchy and logs the metrics for each level, and also verifies the maintainability index and the cyclomatic complexity against the threshold values. The threshold values are defined in the build process template and are sent in as arguments to the custom activity. If the threshold values are exceeded, the activity either fails or partially fails the current build. For more information about the structure of the code metrics result file, read Cameron Skinner's post about it. It is very simple and easy to understand. I won’t go through the code of the custom activity here, since there is nothing special about it and it is available for download so you can look at it and play with it yourself. The threshold values for Maintainability Index and Cyclomatic Complexity are defined in the build process template, and can be modified per build definition: I have taken the default values for these settings from my colleague Terje Sandström's post on Code Metrics - suggestions for appropriate limits. You’ll notice that this is quite an improvement compared to using code metrics inside the IDE, where the Red/Yellow/Green limits are fixed (and the default values are somewhat strange; see Terje's post for a discussion on this). This is the first version of the code metrics integration with TFS 2010 Build; I will probably enhance the functionality and the logging (the “tree view” structure in the log becomes quite hard to read) soon. I will also consider adding it to the Community TFS Build Extensions site when it becomes a bit more mature. Another obvious improvement is to extend the data warehouse of TFS and push the metric results back to the warehouse to make them visible in the reports.
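
    As a rough illustration of the threshold check, here is a hedged C# sketch (mine, not the downloadable custom activity) that reads a metrics result file with LINQ to XML and reports members below a maintainability limit. The element and attribute names are assumptions about the shape of the result file and may need adjusting to the real schema:

      using System;
      using System.Linq;
      using System.Xml.Linq;

      class MetricsCheck
      {
          static void Main(string[] args)
          {
              // Assumed structure: Member elements containing Metrics/Metric entries
              // such as <Metric Name="MaintainabilityIndex" Value="..."/>.
              var doc = XDocument.Load(args.Length > 0 ? args[0] : "result.xml");
              double minMaintainability = 20; // hypothetical threshold

              var offenders =
                  from member in doc.Descendants("Member")
                  let mi = member.Elements("Metrics").Elements("Metric")
                                 .FirstOrDefault(m => (string)m.Attribute("Name") == "MaintainabilityIndex")
                  where mi != null && (double)mi.Attribute("Value") < minMaintainability
                  select new { Name = (string)member.Attribute("Name"), Value = (double)mi.Attribute("Value") };

              foreach (var o in offenders)
                  Console.WriteLine("Maintainability {0} is below {1}: {2}", o.Value, minMaintainability, o.Name);
          }
      }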

    Read the article

  • GPL'ing code of a third party?

    - by Mark
    I am facing the following dilemma at the moment. I am using code from a scientific paper in a commercial project: basically, I copied and pasted the code from the paper's PDF into my code editor and use it in my own code. The code in the paper does not carry any copy restrictions or license (like the GPL), so I thought I would be OK using it in a commercial project. However, I have seen several GPL-licensed open-source projects that use the exact same code from the paper, to the point of having the same variable names as in the paper. So what happened here is that a GPL license was put on a third party's non-GPL'ed code. Are these open-source projects in violation of the GPL, or would I be in violation of the GPL because I use code which has since been GPL'ed? My common sense tells me one is not allowed to GPL somebody else's non-GPL'ed code (like, in this case, the code from the paper), but I thought I would ask anyway.

    Read the article

  • Is there a way to check if redistributed code has been altered?

    - by onlineapplab.com
    I would like to redistribute my app (PHP) in such a way that the user gets the front-end (presentation) layer, which uses the API on my server through a web service. I want the user to be able to alter his part of the app, but at the same time exclude such an altered app from normal support and offer support on a pay-by-the-hour basis. Is there a way to check if the source code was altered? The only solution I can think of would be to take checksums of all the files, send them through my API, and compare them with the original app. Is there any more secure way to do it, so that it would be harder for the user to break such protection?
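
    For illustration only, here is a minimal hedged sketch of the checksum idea (the app in the question is PHP; this C# version just shows the technique, with paths of my own choosing):

      using System;
      using System.IO;
      using System.Security.Cryptography;

      class ChecksumDemo
      {
          // Computes a SHA-256 hash per file; the hashes would be sent through the vendor's API
          // and compared against the hashes of the original distribution.
          static string HashFile(string path)
          {
              using (var sha = SHA256.Create())
              using (var stream = File.OpenRead(path))
                  return BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
          }

          static void Main()
          {
              foreach (var file in Directory.EnumerateFiles("app", "*.php", SearchOption.AllDirectories))
                  Console.WriteLine("{0}  {1}", HashFile(file), file);
          }
      }

    Note that any check performed on the user's machine can itself be altered, so the comparison has to happen on the server side to mean anything.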

    Read the article

  • "// ..." comments at end of code block after } - good or bad?

    - by gablin
    I've often seen such comments used: function foo() { ... } // foo while (...) { ... } // while if (...) { ... } // if and sometimes even as far as if (condition) { ... } // if (condition) I've never understood this practice and have thus never applied it. If your code is so long that you need to know what this closing } belongs to, then perhaps you should consider splitting it up into separate functions. Also, most developer tools are able to jump to the matching bracket. And finally, the last example is, for me, a clear violation of the DRY principle; if you change the condition, you would have to remember to change the comment as well (or else it could get messy for the maintainer, or even for you). So why do people use this? Should we use it, or is it bad practice?

    Read the article

  • Who owns the code, who owns the algorithm, who owns the idea?

    - by Vorac
    This question got me thinking about which products of the programming effort belong to the employer, and which don't. The two extremes are (0) the code - it apparently belongs to the employer - and (1) the personal and technical skills you learn. But what is in between? Who owns the pseudocode/algorithm? Who owns the general idea of the algorithm? Who owns the know-how that such an algorithm may serve some useful purpose (e.g. on this site, questions are valuable, as are answers)? Also: who owns an idea on the web?

    Read the article

  • Cannot redeclare class error when generating PHPUnit code coverage report

    - by Cobby
    Starting a project with Zend Framework 1.10 and Doctrine 2 (Beta1). I am using namespaces in my own library code. When generating code coverage reports I get a fatal error about redeclaring a class. To provide more info, I've commented out the xdebug_disable() call in my phpunit executable so you can see the function trace (disabled local variables output because there was too much output). Here's my Terminal output: $ phpunit PHPUnit 3.4.12 by Sebastian Bergmann. ........ Time: 4 seconds, Memory: 16.50Mb OK (8 tests, 14 assertions) Generating code coverage report, this may take a moment. PHP Fatal error: Cannot redeclare class Cob\Application\Resource\HelperBroker in /Users/Cobby/Sites/project/trunk/code/library/Cob/Application/Resource/HelperBroker.php on line 93 PHP Stack trace: PHP 1. {main}() /usr/local/zend/bin/phpunit:0 PHP 2. PHPUnit_TextUI_Command::main() /usr/local/zend/bin/phpunit:54 PHP 3. PHPUnit_TextUI_Command->run() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:146 PHP 4. PHPUnit_TextUI_TestRunner->doRun() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:213 PHP 5. PHPUnit_Util_Report::render() /usr/local/zend/share/pear/PHPUnit/TextUI/TestRunner.php:478 PHP 6. PHPUnit_Framework_TestResult->getCodeCoverageInformation() /usr/local/zend/share/pear/PHPUnit/Util/Report.php:97 PHP 7. PHPUnit_Util_Filter::getFilteredCodeCoverage() /usr/local/zend/share/pear/PHPUnit/Framework/TestResult.php:623 Fatal error: Cannot redeclare class Cob\Application\Resource\HelperBroker in /Users/Cobby/Sites/project/trunk/code/library/Cob/Application/Resource/HelperBroker.php on line 93 Call Stack: 0.0004 322888 1. {main}() /usr/local/zend/bin/phpunit:0 0.0816 4114628 2. PHPUnit_TextUI_Command::main() /usr/local/zend/bin/phpunit:54 0.0817 4114964 3. PHPUnit_TextUI_Command->run() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:146 0.1151 5435528 4. PHPUnit_TextUI_TestRunner->doRun() /usr/local/zend/share/pear/PHPUnit/TextUI/Command.php:213 4.2931 16690760 5. PHPUnit_Util_Report::render() /usr/local/zend/share/pear/PHPUnit/TextUI/TestRunner.php:478 4.2931 16691120 6. PHPUnit_Framework_TestResult->getCodeCoverageInformation() /usr/local/zend/share/pear/PHPUnit/Util/Report.php:97 4.2931 16691148 7. PHPUnit_Util_Filter::getFilteredCodeCoverage() /usr/local/zend/share/pear/PHPUnit/Framework/TestResult.php:623 (I have no idea why it shows the error twice...?) And here is my phpunit.xml: <phpunit bootstrap="./code/tests/application/bootstrap.php" colors="true"> <!-- bootstrap.php changes directory to trunk/code/tests, all paths below are relative to this directory. --> <testsuite name="My Promotions"> <directory>./</directory> </testsuite> <filter> <whitelist> <directory suffix=".php">../application</directory> <directory suffix=".php">../library/Cob</directory> <exclude> <!-- By adding the below line I can remove the error --> <file>../library/Cob/Application/Resource/HelperBroker.php</file> <directory suffix=".phtml">../application</directory> <directory suffix=".php">../application/doctrine</directory> <file>../application/Bootstrap.php</file> <directory suffix=".php">../library/Cob/Tools</directory> </exclude> </whitelist> </filter> <logging> <log type="junit" target="../../build/reports/tests/report.xml" /> <log type="coverage-html" target="../../build/reports/coverage" charset="UTF-8" yui="true" highlight="true" lowUpperBound="50" highLowerBound="80" /> </logging> </phpunit> I have added a <file> tag inside the <exclude> element, which seems to hide this problem.
    I do have another application resource, but it doesn't seem to have a problem (the other one is a Doctrine 2 resource). I'm not sure why it is specific to this class; my entire library is autoloaded, so there aren't any include/require calls anywhere. I guess it should be noted that HelperBroker is the first file in the filesystem stemming out from library/Cob. I am on Snow Leopard with the latest/recent versions of all software (Zend Server, Zend Framework, Doctrine 2 Beta1, Phing, PHPUnit, PEAR).

    Read the article

  • How to play embedded code in a lightbox-type popup

    - by Fero
    Hi all How to play a embedded code in lightbox type pop up? Here is the whole code <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="imagetoolbar" content="no" /> <title>FancyBox 1.3.1 | Demonstration</title> <script type="text/javascript" src="http://code.jquery.com/jquery-1.4.2.min.js"></script> <script type="text/javascript" src="./fancybox/jquery.mousewheel-3.0.2.pack.js"></script> <script type="text/javascript" src="./fancybox/jquery.fancybox-1.3.1.js"></script> <link rel="stylesheet" type="text/css" href="./fancybox/jquery.fancybox-1.3.1.css" media="screen" /> <link rel="stylesheet" href="style.css" /> <script type="text/javascript"> $(document).ready(function() { /* * Examples - images */ $("a#example1").fancybox({ 'titleShow' : false }); $("a#example2").fancybox({ 'titleShow' : false, 'transitionIn' : 'elastic', 'transitionOut' : 'elastic' }); $("a#example3").fancybox({ 'titleShow' : false, 'transitionIn' : 'none', 'transitionOut' : 'none' }); $("a#example4").fancybox(); $("a#example5").fancybox({ 'titlePosition' : 'inside' }); $("a#example6").fancybox({ 'titlePosition' : 'over' }); $("a[rel=example_group]").fancybox({ 'transitionIn' : 'none', 'transitionOut' : 'none', 'titlePosition' : 'over', 'titleFormat' : function(title, currentArray, currentIndex, currentOpts) { return '<span id="fancybox-title-over">Image ' + (currentIndex + 1) + ' / ' + currentArray.length + (title.length ? ' &nbsp; ' + title : '') + '</span>'; } }); /* * Examples - various */ $("#various1").fancybox({ 'titlePosition' : 'inside', 'transitionIn' : 'none', 'transitionOut' : 'none' }); $("#various2").fancybox(); $("#various3").fancybox({ 'width' : '75%', 'height' : '75%', 'autoScale' : false, 'transitionIn' : 'none', 'transitionOut' : 'none', 'type' : 'iframe' }); $("#various4").fancybox({ 'padding' : 0, 'autoScale' : false, 'transitionIn' : 'none', 'transitionOut' : 'none' }); }); </script> </head> <body> <div id="content"> <p> <a id="example1" href="./example/1_b.jpg"><img alt="example1" src="./example/1_s.jpg" /></a> <a id="example2" href="./example/2_b.jpg"><img alt="example2" src="./example/2_s.jpg" /></a> <a id="example3" href="./example/3_b.jpg"><img alt="example3" src="./example/3_s.jpg" /></a> </p> </div> <div><p>&nbsp;</p></div> </body> </html> This above code working for image perfectly. But how shall i play the embedded code instead of image. Here is the sample embedded code. <object width="480" height="385"><param name="movie" value="http://www.youtube.com/v/WUW5g-sL8pU&hl=en_US&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/WUW5g-sL8pU&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="480" height="385"></embed></object> thanks in advance...

    Read the article

  • Collocation in Code

    - by Dan McGrath
    Quite some time ago I remember reading an article from 'Joel on Software' that mentioned that collocation of information in code was important. By collocation, I mean that relevant information about the code is present where the code is. I'm currently writing an article that has a small bit in it about collocation, so I went searching for sources and found this quote in the article 'Making Wrong Code Look Wrong': In order to make code really, really robust, when you code-review it, you need to have coding conventions that allow collocation. In other words, the more information about what code is doing is located right in front of your eyes, the better a job you’ll do at finding the mistakes. When you have code that says For me, collocation isn't just about the code itself, but also about the tool used to view the code. If it can help with the 'collocation factor' (term coined by me?), I believe it can help with the programmer's productivity. Take, for example, the modern IDEs that show you a variable's type when you hover over it. Are there any other articles written about collocation in code, and/or are there other terms that this is known by?

    Read the article

  • Static classes and/or singletons -- How many does it take to become a code smell?

    - by Earlz
    In my projects I use quite a lot of static classes. These are usually classes that naturally seem to fit into a single-instance kind of role, and recently I've also started using some singletons. How many of these does it take to become a code smell? For instance, my recent project, which has a lot of static classes, is an authentication library for ASP.NET. I use a static class for a helper that fixes ASP.NET error codes, so it can be used like CustomErrorsFixer.Fix(Context); and my authentication class itself is a static class: //in global.asax's begin_application Authentication.SomeState="blah"; Authentication.SomeOption=true; //etc //in global.asax's begin_request Authentication.Authenticate(); When are static or singleton classes bad to use? Am I doing it wrong, or am I just working on a project that by definition has very little per-instance state associated with it? The only per-instance state I have is stored in HttpContext.Current.Items like so: /// <summary> /// The current user logged in for the HTTP request. If there is not a user logged in, this will be null. /// </summary> public static UserData CurrentUser{ get{ return HttpContext.Current.Items["fscauth_currentuser"] as UserData; //use HttpContext.Current as a little place to persist static data for this request } private set{ HttpContext.Current.Items["fscauth_currentuser"]=value; } }
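
    For contrast, a small hedged sketch (my own, with hypothetical names mirroring the question's Authentication example) of the usual alternative: putting the same behaviour behind an interface and an instance class, which keeps a single logical instance but leaves room for substitution in tests.

      // Hypothetical instance-based variant of the static Authentication class.
      public interface IAuthenticator
      {
          void Authenticate();
      }

      public class FormsAuthenticator : IAuthenticator
      {
          private readonly bool someOption;

          public FormsAuthenticator(bool someOption)
          {
              this.someOption = someOption; // configuration moves into the constructor
          }

          public void Authenticate()
          {
              // per-request work goes here; a fake IAuthenticator can stand in during tests
          }
      }

    Whether that indirection pays for itself is exactly the trade-off the question is asking about.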

    Read the article

  • Code Golf: Countdown Number Game

    - by Noldorin
    Challenge Here is the task, inspired by the well-known British TV game show Countdown. The challenge should be pretty clear even without any knowledge of the game, but feel free to ask for clarifications. And if you fancy seeing a clip of this game in action, check out this YouTube clip. It features the wonderful late Richard Whiteley in 1997. You are given 6 numbers, chosen at random from the set {1, 2, 3, 4, 5, 6, 8, 9, 10, 25, 50, 75, 100}, and a random target number between 100 and 999. The aim is to make use of the six given numbers and the four common arithmetic operations (addition, subtraction, multiplication, division; all over the rational numbers) to generate the target - or as close as possible on either side. Each number may only be used once at most, while each arithmetic operator may be used any number of times (including zero). Note that it does not matter how many numbers are used. Write a function that takes the target number and the set of 6 numbers (which can be represented as a list/collection/array/sequence) and returns the solution in any standard numerical notation (e.g. infix, prefix, postfix). The function must always return the closest-possible result to the target, and must run in at most 1 minute on a standard PC. Note that in the case where more than one solution exists, any single solution is sufficient. Examples: {50, 100, 4, 2, 2, 4}, target 203 e.g. 100 * 2 + 2 + (4 / 4) e.g. (100 + 50) * 4 * 2 / (4 + 2) {25, 4, 9, 2, 3, 10}, target 465 e.g. (25 + 10 - 4) * (9 * 2 - 3) {9, 8, 10, 5, 9, 7}, target 241 e.g. (((10 + 9) * 9 * 7) + 8) / 5 Rules Other than mentioned in the problem statement, there are no further restrictions. You may write the function in any standard language (standard I/O is not necessary). The aim, as always, is to solve the task with the smallest number of characters of code. That said, I may not simply accept the answer with the shortest code. I'll also be looking at the elegance of the code and the time complexity of the algorithm! My Solution I'm attempting an F# solution when I find the free time - I will post it here when I have something! Format Please post all answers in the following format for the purpose of easy comparison: Language Number of characters: ??? Fully obfuscated function: (code here) Clear (ideally commented) function: (code here) Any notes on the algorithm/clever shortcuts it takes.
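
    Not a competition entry, but a hedged, unoptimized C# sketch of the brute-force idea (names are mine): repeatedly pick two intermediate values, combine them with each operator, and recurse, tracking the expression that produced the best value. It ignores the golfing aspect and may need pruning or deduplication to reliably meet the one-minute limit on worst cases.

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class CountdownSolver
      {
          static double bestValue;
          static string bestExpr;

          static string Solve(int target, int[] numbers)
          {
              bestValue = double.PositiveInfinity;
              bestExpr = "";
              Search(numbers.Select(n => ((double)n, n.ToString())).ToList(), target);
              return bestExpr;
          }

          static void Search(List<(double Val, string Expr)> nums, int target)
          {
              // Any intermediate value is a candidate result (not all numbers must be used).
              foreach (var (val, expr) in nums)
                  if (Math.Abs(val - target) < Math.Abs(bestValue - target))
                  { bestValue = val; bestExpr = expr; }

              if (nums.Count < 2) return;

              for (int i = 0; i < nums.Count; i++)
                  for (int j = 0; j < nums.Count; j++)
                  {
                      if (i == j) continue;
                      var (a, ea) = nums[i];
                      var (b, eb) = nums[j];
                      var rest = nums.Where((_, k) => k != i && k != j).ToList();
                      var combos = new List<(double, string)>
                      {
                          (a + b, "(" + ea + " + " + eb + ")"),
                          (a - b, "(" + ea + " - " + eb + ")"),
                          (a * b, "(" + ea + " * " + eb + ")")
                      };
                      if (b != 0) combos.Add((a / b, "(" + ea + " / " + eb + ")"));
                      foreach (var c in combos)
                      {
                          rest.Add(c);
                          Search(rest, target);
                          rest.RemoveAt(rest.Count - 1);
                      }
                  }
          }

          static void Main()
          {
              Console.WriteLine(Solve(203, new[] { 50, 100, 4, 2, 2, 4 }));
          }
      }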

    Read the article

  • Advice Needed: Developers blocked by waiting on code to merge from another branch using GitFlow

    - by fogwolf
    Our team just made the switch from FogBugz & Kiln/Mercurial to Jira & Stash/Git. We are using the Git Flow model for branching, adding subtask branches off of feature branches (relating to Jira subtasks of Jira features). We are using Stash to assign a reviewer when we create a pull request to merge back into the parent branch (usually develop, but for subtasks back into the feature branch). The problem we're finding is that even with the best planning and breakdown of feature cases, when multiple developers are working together on the same feature, say on the front-end and back-end, if they are working on interdependent code that is in separate branches, one developer ends up blocking the other. We've tried pulling between each other's branches as we develop. We've also tried creating local integration branches each developer can pull from multiple branches to test the integration as they develop. Finally, and this seems to work possibly the best for us so far, though with a bit more overhead, we have tried creating an integration branch off of the feature branch right off the bat. When a subtask branch (off of the feature branch) is ready for a pull request and code review, we also manually merge those change sets into this feature integration branch. Then all interested developers are able to pull from that integration branch into other dependent subtask branches. This prevents anyone from waiting for any branch they are dependent upon to pass code review. I know this isn't necessarily a Git issue - it has to do with working on interdependent code in multiple branches, mixed with our own work process and culture. If we didn't have the strict code-review policy for develop (the true integration branch), then developer 1 could merge to develop for developer 2 to pull from. Another complication is that we are also required to do some preliminary testing as part of the code review process before handing the feature off to QA. This means that even if front-end developer 1 is pulling directly from back-end developer 2's branch as they go, if back-end developer 2 finishes and his/her pull request is sitting in code review for a week, then front-end developer 1 technically can't create his pull request/code review, because his/her code reviewer can't test, because back-end developer 2's code hasn't been merged into develop yet. The bottom line is that we're finding ourselves taking a much more serial rather than parallel approach in these instances, depending on which route we go, and we would like to find a process that avoids this. The last thing I'll mention is that we realize that by sharing code across branches that hasn't been code reviewed and finalized yet, we are in essence using the beta code of others. To a certain extent I don't think we can avoid that, and we are willing to accept it to a degree. Anyway, any ideas, input, etc... greatly appreciated. Thanks!

    Read the article

  • Fixing the #mvvmlight code snippets in Visual Studio 11

    - by Laurent Bugnion
    If you installed the latest MVVM Light version for Windows 8, you may encounter an issue where code snippets are not displayed correctly in the IntelliSense popup. I am working on a fix, but for now here is how you can solve the issue manually. The code snippets MVVM Light, when installed correctly, will install a set of code snippets that are very useful to let you type less code. As I often say, code is where the bugs are, so you want to type as little of it as possible ;) With code snippets, you can easily auto-insert segments of code and easily replace the keywords where needed. For instance, every coder who uses MVVM as his favorite UI pattern for XAML-based development is used to the INotifyPropertyChanged implementation, and knows how boring it can be to type these “observable properties”. Obviously a good fix would be something like an “Observable” attribute, but that is not supported in the language or the framework for the moment. Another fix involves “IL weaving”, which is a post-build operation modifying the generated IL code and inserting the “RaisePropertyChanged” instruction. I admire the ingenuity of those who developed that, but it feels a bit too much like magic to me. I prefer more “down to earth” solutions, and thus I use the code snippets. Fixing the issue Normally, you should see the code snippets in IntelliSense when you position your cursor in a C# file and type mvvm. All MVVM Light snippets start with these 4 letters. Normal MVVM Light code snippets However, in Windows 8 CP, there is an issue that prevents them from appearing correctly, so you won’t see them in the IntelliSense window. To restore them, follow these steps: In Visual Studio 11, open the menu Tools, Code Snippets Manager. In the combobox, select Visual C#. Press Add… Navigate to C:\Program Files (x86)\Laurent Bugnion (GalaSoft)\Mvvm Light Toolkit\SnippetsWin8 and select the CSharp folder. Press Select Folder. Press OK to close the Code Snippets Manager. Now if you type mvvm in a C# file, you should see the snippets in your IntelliSense window. Cheers, Laurent Bugnion (GalaSoft)
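
    As a reminder of what these snippets save you from typing, here is a hedged sketch of the kind of "observable property" boilerplate involved (plain INotifyPropertyChanged; MVVM Light's ViewModelBase provides a similar RaisePropertyChanged helper, and the property name here is made up):

      using System.ComponentModel;

      public class PersonViewModel : INotifyPropertyChanged
      {
          private string firstName;

          public event PropertyChangedEventHandler PropertyChanged;

          public string FirstName
          {
              get { return firstName; }
              set
              {
                  if (firstName == value) return;
                  firstName = value;
                  RaisePropertyChanged("FirstName");
              }
          }

          private void RaisePropertyChanged(string propertyName)
          {
              var handler = PropertyChanged;
              if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
          }
      }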

    Read the article

  • What source code organization approach helps improve modularity and API/Implementation separation?

    - by Berin Loritsch
    Few languages are as restrictive as Java with regard to file naming standards and project structure. In that language, the file name must match the public class declared in the file, and the file must live in a directory structure matching the class package. I have mixed feelings about that approach. While I never have to guess where a file lives, there are still a lot of empty directories and artificial constraints. There are several languages that define everything about a class in one file, at least by convention: C#, Python (I think), Ruby, Erlang, etc. The commonality in most of these languages is that they are object oriented, although that statement can probably be rebutted (there is one non-OO language in the list already). Finally, there are quite a few languages, mostly in the C family, that have separate header and implementation files. For C I think this makes sense, because it is one of the few ways to separate the API interface from implementations. With C it seems that feature is used to promote modularity. Yet with C++, the way header and implementation files are split seems rather forced. You don't get the same clean API separation that you do with C, and you are forced to include some private details in the header that you would rather keep only in the implementation. There are quite a few languages that have a concept that overlaps with interfaces, like Java, C#, Go, etc. Some languages use what feels like a hack to provide the same concept, like C++ using pure virtual abstract classes. Still others don't really have an interface concept and rely on "duck" typing--for example Ruby. Ruby has modules, but those are more along the lines of mixing behaviors into a class than of defining how to interact with a class. In OO terms, interfaces are a powerful way to provide separation between an API client and an API implementation. So, to hurry up and ask the question, from a personal experience point of view: Does separation of header and implementation help you write more modular code, or does it get in the way? (It helps to specify the language you are referring to.) Does the strict file name to class name scheme of Java help maintainability, or is it unnecessary structure for structure's sake? What would you propose to promote good API/implementation separation and project maintenance, and how would you prefer to do it?
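
    To make the "API client vs. API implementation" point concrete, here is a small hedged C# sketch (hypothetical names of my own): the interface is the only thing callers compile against, while the implementation can live in another file, assembly or team without the callers noticing.

      using System.Collections.Generic;

      // Public contract: this is all an API client needs to see.
      public interface IMessageStore
      {
          void Save(string message);
          int Count { get; }
      }

      // Implementation detail: free to change without touching callers.
      internal class InMemoryMessageStore : IMessageStore
      {
          private readonly List<string> messages = new List<string>();

          public void Save(string message) { messages.Add(message); }

          public int Count { get { return messages.Count; } }
      }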

    Read the article

  • Code Golf: Collatz Conjecture

    - by Earlz
    Inspired by http://xkcd.com/710/ here is a code golf for it. The Challenge Given a positive integer greater than 0, print out the hailstone sequence for that number. The Hailstone Sequence See Wikipedia for more detail. If the number is even, divide it by two. If the number is odd, triple it and add one. Repeat this with the number produced until it reaches 1. (If it continues after 1, it will go into an infinite loop of 1 -> 4 -> 2 -> 1...) Sometimes code is the best way to explain, so here is some from Wikipedia: function collatz(n) show n if n > 1 if n is odd call collatz(3n + 1) else call collatz(n / 2) This code works, but I am adding on an extra challenge. The program must not be vulnerable to stack overflows. So it must either use iteration or tail recursion. Also, bonus points if it can calculate big numbers and the language does not already have that implemented (or if you reimplement big number support using fixed-length integers). Test case Number: 21 Results: 21 -> 64 -> 32 -> 16 -> 8 -> 4 -> 2 -> 1 Number: 3 Results: 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1 Also, the code golf must include full user input and output.
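
    Not a golfed entry, but a hedged C# sketch of the iterative (stack-safe) approach described above, using BigInteger for the big-number bonus:

      using System;
      using System.Numerics;

      class Hailstone
      {
          static void Main()
          {
              // Iteration instead of recursion, so no stack overflow for long sequences.
              BigInteger n = BigInteger.Parse(Console.ReadLine());
              Console.Write(n);
              while (n > 1)
              {
                  n = n.IsEven ? n / 2 : 3 * n + 1;
                  Console.Write(" -> " + n);
              }
              Console.WriteLine();
          }
      }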

    Read the article

  • C# and F# lambda expressions code generation

    - by ControlFlow
    Let's look at the code generated by F# for a simple function: let map_add valueToAdd xs = xs |> Seq.map (fun x -> x + valueToAdd) The generated code for the lambda expression (an instance of an F# functional value) will look like this: [Serializable] internal class map_add@3 : FSharpFunc<int, int> { public int valueToAdd; internal map_add@3(int valueToAdd) { this.valueToAdd = valueToAdd; } public override int Invoke(int x) { return (x + this.valueToAdd); } } And look at nearly the same C# code: using System.Collections.Generic; using System.Linq; static class Program { static IEnumerable<int> SelectAdd(IEnumerable<int> source, int valueToAdd) { return source.Select(x => x + valueToAdd); } } And the generated code for the C# lambda expression: [CompilerGenerated] private sealed class <>c__DisplayClass1 { public int valueToAdd; public int <SelectAdd>b__0(int x) { return (x + this.valueToAdd); } } So I have some questions: Why is the F#-generated class not marked as sealed? Why does the F#-generated class contain public fields, since F# doesn't allow mutable closures? Why does the F#-generated class have a constructor? It could perfectly well be initialized through the public fields... Why is the C#-generated class not marked as [Serializable]? Also, classes generated for F# sequence expressions become [Serializable], while classes for C# iterators do not.

    Read the article

  • Posting source code in Blogger - fails with C# containers

    - by Lirik
    I tried the solutions that are posted in this related SO question, and for the most part the code snippets are working, but there are some cases that still get garbled by Blogger when it publishes the blog. In particular, declaring generic containers seems to be the most troublesome. Please see the code examples on my blog, and in particular the section where I define the dictionary (http://mlai-lirik.blogspot.com/). I want to display this: static Dictionary<int, List<Delegate>> _delegate = new Dictionary<int,List<Delegate>>(); But Blogger publishes this: static Dictionary<int, list=""><delegate>> _delegate = new Dictionary<int, list=""><delegate>>(); And it caps the end of my code section with this: </delegate></delegate></int,></delegate></int,> Apparently Blogger thinks that the <int and <delegate> portions of the dictionary are some sort of HTML tags, and it automatically attempts to close them at the end of the code snippet. Does anybody know how to get around this problem?
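
    A common workaround (my own hedged suggestion, not from the post) is to HTML-encode the snippet before pasting it into Blogger, so that '<' and '>' are published as text rather than interpreted as tags:

      using System;
      using System.Net;

      class EncodeForBlog
      {
          static void Main()
          {
              string code = "static Dictionary<int, List<Delegate>> _delegate = new Dictionary<int, List<Delegate>>();";
              // '<' becomes &lt; and '>' becomes &gt;, so the published HTML shows the generics literally.
              Console.WriteLine(WebUtility.HtmlEncode(code));
          }
      }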

    Read the article

  • Yet another "What is this code doing"-type of Perl code

    - by Mike
    I have inherited some code from a guy whose favorite pastime was to shorten every line to its absolute minimum (and sometimes only to make it look cool). His code is hard to understand, but I have managed to understand (and rewrite) most of it. Now I have stumbled on a piece of code which, no matter how hard I try, I cannot understand. my @heads = grep {s/\.txt$//} OSA::Fast::IO::Ls->ls($SysKey,'fo','osr/tiparlo',qr{^\d+\.txt$}) || (); my @selected_heads = (); for my $i (0..1) { $selected_heads[$i] = int rand scalar @heads; for my $j (0..@heads-1) { last if (!grep $j eq $_, @selected_heads[0..$i-1]); $selected_heads[$i] = ($selected_heads[$i] + 1) % @heads; #WTF? } my $head_nr = sprintf "%04d", $i; OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].txt","$recdir/heads/$head_nr.txt"); OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].cache","$recdir/heads/$head_nr.cache"); } From what I can understand, this is supposed to be some kind of randomizer, but I have never seen a more complex way to achieve randomness. Or are my assumptions wrong? At least, that's what this code is supposed to do: select 2 random files and copy them. === NOTES === The OSA Framework is a framework of our own. The modules are named after their UNIX counterparts and do some basic testing so that the application does not need to bother with that.

    Read the article

  • Code Organization Conundrum: Web Project With Multiple Supporting DLLs?

    - by Code Sherpa
    Hi. I am trying to get a handle on the best practice for code organization within my project. I have looked around on the internet for good examples and, so far, I have seen examples of a web project with one or multiple supporting class libraries that it references, or a web project with sub-folders that follow its namespace conventions. Assuming there is no right answer, this is what I currently have for code organization: MyProjectWeb This is my web site. I am referencing my class libraries here. MyProject.DLL As the base namespace, I am using this DLL for files that need to be generally consumable. For example, my class "Enums" that has all the enumerations in my project lives there. As does class MyProjectException for all exception handling. MyProject.IO.DLL This is a grouping of maybe 20 files that handle file upload and download (so far). MyProject.Utilities.DLL All my common classes and methods bunched up together in one generally consumable DLL. Each class follows an "XHelper" convention, such as SqlHelper, AuthHelper, SerializationHelper, and so on... MyProject.Web.DLL I am using this DLL as the main client interface. Right now, the majority of class files here are: 1) properties (such as School, Location, Account, Posts) 2) authorization stuff (such as custom membership, custom role, & custom profile providers) My question is simply - does this seem logical? Also, how do I avoid having to cross-reference DLLs from one project library to the next? For example, MyProject.Web.DLL uses code from MyProject.Utilities.DLL, and MyProject.Utilities.DLL uses code from MyProject.DLL. Is this solved by clicking on properties and selecting "Dependencies"? I tried that but still don't seem to be accessing the namespaces of the assembly I have selected. Do I have to reference every assembly I need for each class library? Responses appreciated, and thanks for your patience.

    Read the article
