Search Results

Search found 87933 results on 3518 pages for 'the code pimp'.


  • Google analytics iframe code measuring visitor as two visitors

    - by Maarten
    I'm trying to measure visitors both in an iframe and in the site containing the iframe. What I would like is for clicks in the iframe to be attributed to the same visitor as the containing site, but somehow they are counted as two separate visitors. I followed the examples from http://www.blastam.com/blog/index.php/2011/02/google-analytics-cross-domain-tracking/, trimmed down to an even simpler version based on the comments saying setDomainName is no longer needed, but with setDomainName I get the same result: a click on the page and a click in the iframe are counted as 2 clicks by 2 separate visitors. This is the code in my iframe:

        if (_gaq && gaAccount.length > 0) {
            _gaq.push(['_setAccount', gaAccount]);
            _gaq.push(['_setAllowLinker', true]);
            //_gaq.push(['_setDomainName', 'none']);
            _gaq.push(['_trackPageview', 'mytestcountername']);
        }

    And this is the code in the containing page:

        <script type="text/javascript">
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-9605474-4']);
            _gaq.push(['_setAllowLinker', true]);
            //_gaq.push(['_setDomainName', '.domain.nl']);
            _gaq.push(['_trackPageview']);
            (function() {
                var ga = document.createElement('script');
                ga.type = 'text/javascript';
                ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                var s = document.getElementsByTagName('script')[0];
                s.parentNode.insertBefore(ga, s);
            })();
        </script>

    Read the article

  • Unit and Integration testing: How can it become a reflex

    - by LordOfThePigs
    All the programmers in my team are familiar with unit testing and integration testing. We have all worked with it. We have all written tests with it. Some of us have even felt an improved sense of trust in our own code. However, for some reason, writing unit/integration tests has not become a reflex for any of the members of the team. None of us actually feels bad when not writing unit tests at the same time as the actual code. As a result, our codebase is mostly uncovered by unit tests, and projects enter production untested. The problem with that, of course, is that once your projects are in production and are already working well, it is virtually impossible to obtain time and/or budget to add unit/integration testing. The members of my team and I are already familiar with the value of unit testing (1, 2) but it doesn't seem to help bring unit testing into our natural workflow. In my experience, making unit tests and/or a target coverage mandatory just results in poor-quality tests and slows down team members, simply because there is no self-generated motivation to produce these tests. Also, as soon as pressure eases, unit tests are not written any more. My question is the following: are there any methods that you have experimented with that help build a dynamic/momentum inside the team, leading to people naturally wanting to create and maintain those tests?

    Read the article

  • Caching strategies for entities and collections

    - by Rob West
    We currently have an application framework in which we automatically cache both entities and collections of entities at the business layer (using .NET cache). So the method GetWidget(int id) checks the cache using a key GetWidget_Id_{0} before hitting the database, and the method GetWidgetsByStatusId(int statusId) checks the cache using GetWidgets_Collections_ByStatusId_{0}. If the objects are not in the cache they are retrieved from the database and added to the cache. This approach is obviously quick for read scenarios, and as a blanket approach is quick for us to implement, but requires large numbers of cache keys to be purged when CRUD operations are carried out on entities. Obviously as additional methods are added this impacts performance and the benefits of caching diminish. I'm interested in alternative approaches to handling caching of collections. I know that NHibernate caches a list of the identifiers in the collection rather than the actual entities. Is this an approach other people have tried - what are the pros and cons? In particular I am looking for options that optimise performance and can be implemented automatically through boilerplate generated code (we have our own code generation tool). I know some people will say that caching needs to be done by hand each time to meet the needs of the specific situation but I am looking for something that will get us most of the way automatically.
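    The NHibernate-style approach mentioned above - caching the list of identifiers per query and caching each entity separately under its own key - can be sketched roughly as below. The question is about .NET, but the idea is language-agnostic; this illustration happens to be in Java and every name in it is invented, not part of the framework being described.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Sketch of the "cache identifiers, not entities" idea: the collection cache stores only IDs,
        // and each entity is cached once under its own key, so an entity update never forces whole
        // collections to be rebuilt. All names here are invented for illustration.
        class WidgetRepository {
            private final Map<String, List<Integer>> idListCache = new HashMap<>(); // "ByStatusId_3" -> [1, 7, 9]
            private final Map<Integer, Widget> entityCache = new HashMap<>();       // widget id -> widget

            List<Widget> getWidgetsByStatusId(int statusId) {
                List<Integer> ids = idListCache.computeIfAbsent("ByStatusId_" + statusId, k -> queryIdsFromDb(statusId));
                List<Widget> result = new ArrayList<>();
                for (int id : ids) {
                    result.add(entityCache.computeIfAbsent(id, this::loadWidgetFromDb));
                }
                return result;
            }

            void onWidgetUpdated(Widget w) {
                entityCache.put(w.id, w); // only the entity entry changes; id lists stay valid unless membership changed
            }

            private List<Integer> queryIdsFromDb(int statusId) { return List.of(); } // hypothetical DB call
            private Widget loadWidgetFromDb(int id) { return new Widget(id); }       // hypothetical DB call
        }

        class Widget {
            final int id;
            Widget(int id) { this.id = id; }
        }

    The trade-off: reads do one cache lookup per identifier instead of one per collection, but entity updates only touch one entry, and id lists need purging only when query membership actually changes.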

    Read the article

  • a little code to allow word substitution depending on user

    - by Fred Quimby
    Can anyone help? I'm creating a demo web app in HTML so that people can physically see and comment on the app prior to committing to a proper build. So whilst the proper app will be database driven, my demo is just standard HTML with some JavaScript effects. What I do want to demonstrate is that different user groups will see different words. For example, imagine I have an HTML sentence that says 'This will cost £100 to begin'. I need some way of identifying that if the user has deemed themselves to be from the US, the sentence says 'This will cost $100 to begin'. This requirement is peppered throughout the pages but I'm happy to add each one manually. So I envisage some code along the lines of 'first, remove the [boot US] trunk', where the UK version is 'first remove the boot' but the code is saying that the visitor needs the US version. It then looks up boot (in an Access database perhaps) and sees that the table says that for 'boot' in the US it should display 'trunk'. I'm not a programmer but I can normally cobble together scripts, so I'm hoping someone may have a relatively easy solution in JavaScript, CSS or ASP. To recap: I have a number of words or short sentences that need to appear differently and I'm happy to manually insert each one if necessary (but it would be even better if the words were automatically changed). And I need a device which allows me to tell the pages to choose the US version, or for example, the New Zealand version. Thanks in advance. Fred

    Read the article

  • Client-side code and UPK

    - by [email protected]
    There is a long running discussion in UPK Development regarding the use of any client side code as part of the end-user playback. By this I mean anything which requires an install including ActiveX controls, browser helper objects, stand-alone applications, things that run in the task bar, etc. We all have grown to love zero-footprint applications over the past ten years, but there are some things which are not technically feasible using HTML alone. One example of this is the functionality provided by our SmartHelp in-application support component. This allows the user to launch context sensitive help without making modifications to the target application. (If you are unfamiliar with SmartHelp, more information can be found in the "In-Application Support Guide" in the UPK manual directory) We always try to implement everything we can using only HTML but there are many features which have been requested over the years that would require some client-side code in order to work. When these come up for discussion, there is always a spirited debate about the acceptability of a client side solution. I thought it would be interesting to ask for feedback from a wider audience. What do you think about client-side components? Would your organization consider them? Do you already deploy SmartHelp? Is there a large hurdle to clear for these to be worth the deployment costs? Let us know. Mark Overton, VP Development for UPK

    Read the article

  • How to "translate" interdependent object states in code?

    - by Earl Grey
    I have the following problem. My UI contains several buttons, labels, and other visual elements. I am able to describe every possible workflow scenario that should be allowed on that UI. That means I can describe it like this: when button A is pressed, the following should happen. In the case of that A button, there are three independent factors that influence the possible result when pushing it: the state of the session (blank, single, multi, multi special), the actual work being done by the system at the moment of pressing the A button (nothing was happening, work was being done, work was paused), and a separate UI element that has two states (on, off). This gives me a 3-dimensional cube with 24 possible outcomes. I could write code for this using if statements, switch statements, etc., but the problem is, I have another 7 buttons on that UI, I can enter this UI from different states, some buttons change the state, some change parameters... To sum up, the combinations are mind-boggling and I am not able to come up with a methodology that scales and is systematically reliable. I am able to describe EVERY workflow with words, and I am sure my description is complete and without logical errors. But I am not able to translate that into code. I was trying to draw flowcharts but it soon became visually too complicated due to too many if branches. Can you advise how to proceed?
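    One way people tame this kind of combinatorial UI logic is to make each factor an explicit type and look the outcome up in a table keyed by the full combination, instead of nesting conditionals. A minimal sketch in Java; the state names are hypothetical stand-ins for the three factors described above, not anything from the original application:

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical stand-ins for the three factors described in the question.
        enum Session { BLANK, SINGLE, MULTI, MULTI_SPECIAL }
        enum Work { IDLE, RUNNING, PAUSED }
        enum Toggle { ON, OFF }

        // One immutable key per combination of the three factors (24 in total).
        record UiState(Session session, Work work, Toggle toggle) {}

        class ButtonA {
            private final Map<UiState, Runnable> outcomes = new HashMap<>();

            ButtonA() {
                // Every combination gets exactly one explicitly registered outcome, so a missing
                // case shows up as a gap in this table rather than a buried else-branch.
                outcomes.put(new UiState(Session.BLANK, Work.IDLE, Toggle.OFF), this::startNewSession);
                outcomes.put(new UiState(Session.SINGLE, Work.RUNNING, Toggle.ON), this::pauseAndWarn);
                // ... remaining combinations ...
            }

            void onPressed(UiState current) {
                Runnable action = outcomes.get(current);
                if (action == null) {
                    throw new IllegalStateException("No outcome defined for " + current);
                }
                action.run();
            }

            private void startNewSession() { /* ... */ }
            private void pauseAndWarn() { /* ... */ }
        }

    The same table can be generated from the written workflow description, which makes it easier to check that all 24 cases for this button (and the cases for the other 7 buttons) are actually covered.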

    Read the article

  • Node.JS testing with Jasmine, databases, and pre-existing code

    - by Jim Rubenstein
    I've recently built the start of a core system which is likely going to turn into a monster product. I'm building the system with node.js, and decided after I got a small base built that it'd be a great idea to start using some sort of automated test suite to test the application. I decided to use Jasmine, as it seems pretty solid and has a lot of features for stubbing, spying on, and mocking methods and classes. The application has a lot of external data stores and API access (kestrel, MySQL, MongoDB, Facebook, and more). My issue is, I've got a good amount of code written that I want to start testing - as it represents the underpinnings of the application. What are the best practices for testing methods/classes that access external APIs that I may or may not have control over? As an example, I have a data structure that fetches a bunch of data from a MySQL database. I want to test the method that retrieves the data, and I'm not sure how to go about it. I could test the fetch method, which is supposed to return an array of objects, but to isolate the method from the database I need to define my own fixture data. So what I end up doing is stubbing the MySQL execution and returning a static dataset. So I end up writing a function that returns the dataset that makes my test pass. That doesn't seem to actually test the code, other than verifying a method is being called. I know this is kind of abstract and vague; the idea of testing seems very abstract itself, though, so hopefully someone has some experience and can guide me in the right direction. Any advice, or reading I can do, is more than welcome. Thanks in advance.

    Read the article

  • The best way to structure/design game code

    - by Edward
    My question is quite broad and relates to 2D game code design/architecture/structure. Usually the main game consists of the main loop where you update & render your world states. However, it's recommended for many purposes to separate rendering from the game logic and so on. I am kind of confused about the whole situation. Many game engines/libs/SDKs don't follow that separation scheme. They propagate an approach where you define some scenes/stages which contain some objects, and the scene/stage controls the user input and so on. For example, in cocos2d(-x) and libgdx (scene2d) games are usually written so that the update logic happens at the same time/place as the rendering. Also, the propagated way is to have a structure where an object knows how to draw itself - which is not a separation of updating & rendering. The same with Flash-based games: they are usually done in a way where an object (class) contains a swf or a texture and some data and holds some update logic itself, or is updated from the main Scene. And again this object already knows how to draw itself via "addChild". Also, some people recommend using the MVC pattern, which would mean working against the structure of those engines/libs/SDKs. Maybe I am overthinking everything, but I am totally confused. I would be grateful if somebody could point me in the right direction with game code structures. What is your way of doing things in libgdx/cocos2d/Flash?
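    Whatever engine ends up being used, the separation itself boils down to this: the world exposes an update(dt) that only touches state, and a renderer that only reads it. A rough, engine-free sketch in Java (all names are made up for illustration, not taken from libgdx or cocos2d):

        // World holds game state and knows nothing about drawing.
        class World {
            double playerX;

            void update(double dt) {
                playerX += 50 * dt; // advance the simulation only
            }
        }

        // Renderer reads the world and draws it; it never mutates game state.
        class Renderer {
            void render(World world) {
                System.out.printf("player at %.1f%n", world.playerX);
            }
        }

        public class GameLoop {
            public static void main(String[] args) throws InterruptedException {
                World world = new World();
                Renderer renderer = new Renderer();
                long last = System.nanoTime();
                for (int frame = 0; frame < 5; frame++) { // fixed frame count just for the sketch
                    long now = System.nanoTime();
                    double dt = (now - last) / 1_000_000_000.0;
                    last = now;
                    world.update(dt);       // logic step
                    renderer.render(world); // drawing step, separate from the logic
                    Thread.sleep(16);       // roughly 60 fps
                }
            }
        }

    An object that "knows how to draw itself" can still fit this split as long as its draw method only reads state; the important part is that nothing in the update path depends on rendering having happened.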

    Read the article

  • Euler Problem 1 : Code Optimization / Alternatives [on hold]

    - by Sudhakar
    I am a newbie to the world of data structures and algorithms, learning from the ground up. This is my attempt to learn. If the question is very plain/simple, please bear with me. Problem: Find the sum of all the multiples of 3 or 5 below 1000. Code I wrote:

        package problem1;

        public class Problem1 {
            public static void main(String[] args) {
                //****************** Approach 1 ****************
                long start = System.currentTimeMillis();
                int total = 0;
                int toSubtract = 0;
                // Complexity N/3
                int limit = 10000;
                for (int i = 3; i < limit; i = i + 3) {
                    total = total + i;
                }
                // Complexity N/5
                for (int i = 5; i < limit; i = i + 5) {
                    total = total + i;
                }
                // Complexity N/15
                for (int i = 15; i < limit; i = i + 15) {
                    toSubtract = toSubtract + i;
                }
                // 9N/15 = 0.6 N
                System.out.println(total - toSubtract);
                System.out.println("Completed in " + (System.currentTimeMillis() - start));

                //****************** Approach 2 ****************
                for (int i = 3; i < limit; i = i + 3) {
                    total = total + i;
                }
                for (int i = 5; i < limit; i = i + 5) {
                    if (0 != (i % 3)) total = total + i;
                }
            }
        }

    Questions: 1 - Which is the best approach from the above code, and why? 2 - Are there any better alternatives?
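    On the second question: each of those loops just sums an arithmetic series, so the whole thing can also be computed in constant time with no loops at all, using inclusion-exclusion over the multiples of 3, 5 and 15. A rough sketch, separate from the original post:

        public class Problem1ClosedForm {
            // Sum of all multiples of k strictly below limit: k + 2k + ... + mk = k * m * (m + 1) / 2,
            // where m is the number of multiples of k below the limit.
            static long sumOfMultiplesBelow(long k, long limit) {
                long m = (limit - 1) / k;
                return k * m * (m + 1) / 2;
            }

            public static void main(String[] args) {
                long limit = 1000;
                // Inclusion-exclusion: multiples of 3, plus multiples of 5, minus multiples of 15 (counted twice).
                long answer = sumOfMultiplesBelow(3, limit)
                            + sumOfMultiplesBelow(5, limit)
                            - sumOfMultiplesBelow(15, limit);
                System.out.println(answer); // 233168 for limit = 1000
            }
        }

    For limit = 1000 this prints 233168, matching what the loop approach gives when its limit is set to 1000, but the amount of work no longer grows with the limit.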

    Read the article

  • How to drastically improve code coverage?

    - by Peter Kofler
    I'm tasked with getting a legacy application under unit test. First some background about the application: it's a 600k LOC Java RCP code base with these major problems: massive code duplication; no encapsulation, so most private data is accessible from outside, and some of the business data has also been made into singletons, so it's not just changeable from outside but from everywhere; no business model - business data is stored in Object[] and double[][], so no OO. There is a good regression test suite and an efficient QA team is testing and finding bugs. I know the techniques for getting it under test from the classic books, e.g. Michael Feathers, but that's too slow. As there is a working regression test system I'm not afraid to aggressively refactor the system to allow unit tests to be written. How should I start to attack the problem to get some coverage quickly, so I'm able to show progress to management (and in fact to start benefiting from the safety net of JUnit tests)? I do not want to employ tools that generate regression test suites, e.g. AgitarOne, because these tests do not test whether something is correct.
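    One cheap way to get the first coverage onto code like this, before any refactoring, is a characterization test: run the existing code once, record what it produces, and assert that it keeps producing exactly that. A minimal JUnit 4 sketch; LegacyPriceCalculator and its data shape are hypothetical stand-ins, not classes from the application described above:

        import static org.junit.Assert.assertArrayEquals;

        import org.junit.Test;

        // Hypothetical stand-in for a class that already exists somewhere in the legacy code base.
        class LegacyPriceCalculator {
            static double[] calculate(double[][] rows) {
                double[] out = new double[rows.length];
                for (int i = 0; i < rows.length; i++) {
                    for (double v : rows[i]) {
                        out[i] += v;
                    }
                }
                return out;
            }
        }

        public class PriceCalculatorCharacterizationTest {
            // The expected values were captured by running the current implementation once and
            // pasting its output here. The test does not claim the behaviour is correct, only
            // that refactoring must not change it.
            @Test
            public void keepsCurrentBehaviourForTypicalInput() {
                double[][] input = { {1.0, 2.0}, {3.0, 4.0} };
                double[] recorded = {3.0, 7.0};
                assertArrayEquals(recorded, LegacyPriceCalculator.calculate(input), 1e-9);
            }
        }

    Tests like this are quick to mass-produce against the ugliest classes first, and they make the aggressive refactoring mentioned above much safer; once the structure improves, they can be replaced by real unit tests.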

    Read the article

  • Handy Tool for Code Cleanup: Automated Class Element Reordering

    - by Geertjan
    You're working on an application and this thought occurs to you: "Wouldn't it be cool if I could define rules specifying that all static members, initializers, and fields should always be at the top of the class? And then, whenever I wanted to, I'd start off a process that would actually do the reordering for me, moving class elements around, based on the rules I had defined, automatically, across one or more classes or packages or even complete code bases, all at the same time?" Well, here you go: That's where you can set rules for the ordering of your class members. A new hint (i.e., new in NetBeans IDE 7.3), which you need to enable yourself because by default it is disabled, lets the IDE show a hint in the Java Editor whenever there's code that isn't ordered according to the rules you defined: The first element in a file that the Java Editor identifies as not matching your rules gets a lightbulb hint shown in the left sidebar: Then, when you click the lightbulb, automatically the file is reordered according to your defined rules. However, it's not much fun going through each file individually to fix class elements as shown above. For that reason, you can go to "Refactor | Inspect and Transform". There, in the "Inspect and Transform" dialog, you can choose the hint shown above and then specify that you'd like it to be applied to a scope of your choice, which could be a file, a package, a project, combinations of these, or all of the open projects, as shown below: Then, when Inspect is clicked, the Refactoring window shows all the members that are ordered in ways that don't conform to your rules: Click "Do Refactoring" above and, in one fell swoop, all the class elements within the selected scope are ordered according to your rules.

    Read the article

  • INETA NorAm Component Code Challenge

    - by Chris Williams
    Want to win a trip to TechEd 2011? INETA NorAm is hosting a contest with our partners to see who can build a .NET application making effective use of reusable components to solve a problem. The Rules: Any .NET application (WinForms, ASP.NET, WPF, Silverlight, Windows Phone 7, etc.) built in the last year (since 1/1/2010) using at least 1 component from at least 1 approved vendor. Then make a 3 - 5 minute Camtasia video showing your entry and describing what component(s) you used and why your application is cool. Our judges will review the submissions and the best two will win a scholarship to Tech·Ed 2011, May 16-19 in Atlanta GA, including airfare, hotel, and conference pass. The Judging: Entries will be judged on four criteria: (1) effective use of a component to solve a problem/display data; (2) innovative use of components; (3) impact of using components (i.e. reduction in lines of code written, increased productivity, etc.); (4) most creative use of a component. Timeline: Hurry! The submission deadline is March 15, 2011 at Midnight Eastern Standard Time. More information can be found on the INETA Component Code Challenge page: http://ineta.org/CodeChallenge/Default.aspx

    Read the article

  • Is 'Protection' an acceptable Java class name

    - by jonny
    This comes from a closed thread at Stack Overflow, where there are already some useful answers, though a commenter suggested I post here. I hope this is OK! I'm trying my best to write good, readable code, but often have doubts about my work! I'm creating some code to check the status of some protected software, and have created a class which has methods to check whether the software in use is licensed (there is a separate Licensing class). I've named the class 'Protection', which is currently accessed via the creation of an appProtect object. The methods in the class allow checking a number of things about the application, in order to confirm that it is in fact licensed for use. Is 'Protection' an acceptable name for such a class? I read somewhere that if you have to think too long about names of methods, classes, objects etc., then perhaps you may not be coding in an object-oriented way. I've spent a lot of time thinking about this before making this post, which has led me to doubt the suitability of the name! In creating (and proofreading) this post, I'm starting to seriously doubt my work so far. I'm also thinking I should probably rename the object to applicationProtection rather than appProtect (though am open to any comments on this too?). I'm posting nonetheless, in the hope that I'll learn something from others' views/opinions, even if they're simply confirming I've "done it wrong"!

    Read the article

  • Suggested HTTP REST status code for 'request limit reached'

    - by Andras Zoltan
    I'm putting together a spec for a REST service, part of which will incorporate the ability to throttle users service-wide and on groups of, or on individual, resources. Equally, time-outs for these would be configurable per resource/group/service. I'm just looking through the HTTP 1.1 spec and trying to decide how I will communicate to a client that a request will not be fulfilled because they've reached their limit. Initially I figured that client code 403 - Forbidden was the one, but this, from the spec, bothered me: "Authorization will not help and the request SHOULD NOT be repeated". It actually appears that 503 - Service Unavailable is a better one to use, since it allows for the communication of a retry time through the use of the Retry-After header. It's possible that in the future I might look to support 'purchasing' more requests via eCommerce (in which case it would be nice if client code 402 - Payment Required had been finalized!) - but I figure that this could equally be squeezed into a 503 response too. Which do you think I should use? Or is there another I've not considered?
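    For context, RFC 6585 has since defined 429 Too Many Requests for exactly this case, and it can carry a Retry-After header just like 503. A rough servlet-style sketch of what the throttled response might look like; the quota check and all names here are hypothetical, not part of the spec being discussed:

        import java.io.IOException;

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Sketch of a throttled endpoint: when the caller is over quota, reply with a rate-limit
        // status and tell the client when it may retry. The quota check is a hypothetical placeholder.
        public class ThrottledResource extends HttpServlet {

            private boolean overQuota(HttpServletRequest req) {
                return true; // stand-in for a real check against the caller's request allowance
            }

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                if (overQuota(req)) {
                    resp.setStatus(429);                  // Too Many Requests (RFC 6585)
                    resp.setHeader("Retry-After", "120"); // seconds until the limit resets
                    resp.getWriter().write("Request limit reached; retry after 120 seconds.");
                    return;
                }
                resp.getWriter().write("normal response");
            }
        }

    Whichever status is chosen, sending Retry-After (and ideally a short body explaining the limit) gives well-behaved clients enough to back off without guessing.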

    Read the article

  • What exactly does the condition in the MIT license imply?

    - by Yannbane
    To quote the license itself: Copyright (C) [year] [copyright holders] Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. I am not exactly sure what the bold part implies. Let's say that I'm creating some library, and I license it under the MIT license. Someone decides to fork that library and to create a closed-source, commercial version. According to the license, he should be free to do that. However, what does he additionally need to do under those terms? Credit me as the creator? I guess the "above copyright notice" refers to the "Copyright (C) [..." part, but wouldn't that list me as the author of his code (although I technically typed out the code)? And wouldn't including the "permission notice" in what is now his library practically license it under the same conditions that I licensed my own library in? Or am I interpreting this incorrectly? Does that refer to my obligations to include the copyright and the permission notice?

    Read the article

  • Sharing business logic between server-side and client-side of web application?

    - by thoughtpunch
    Quick question concerning shared code/logic in the back and front ends of a web application. I have a web application (Rails + heavy JS) that parses metadata from HTML pages fetched via a user-supplied URL (think Pinterest or Instapaper). Currently this processing takes place exclusively on the client side. The code that fetches the URL and parses the DOM is in a fairly large set of JS scripts in our Rails app. Occasionally we want to do this processing on the server side of the app. For example, what if a user supplied a URL but they have JS disabled or a non-standards-compliant browser, etc.? Ideally I'd like to be able to process these URLs in Ruby on the back end (in asynchronous background jobs perhaps) using the same logic that our JS parsers use WITHOUT porting the JS to Ruby. I've looked at systems that allow you to execute JS scripts in the backend like execjs, as well as Ruby-to-JavaScript compilers like OpalRB, that would hopefully allow "write once, execute many", but I'm not sure that either is the right decision. What's the best way to avoid business logic duplication for apps that need to do both client-side and server-side processing of similar data?

    Read the article

  • Is there a Source Insight alternative?

    - by hansioux
    I am not a developer, but for my work I trace a lot of code. It is actually rather difficult reading other people's code, especially in bigger projects. Source Insight is a great application that stores all the symbols in a database, so you can see a new function being called, click on it, and see how the function is written. You can see all the referrers of an object or jump to a caller. You don't need to break the train of thought and think up shell commands just to find these things every time you run into a new variable/structure/function from some other file. I have it running on WINE, but there are little glitches that sometimes get in the way. I know people will mention cscope; I've tried it, but it really isn't the same. So, with so many huge open source projects out there for Ubuntu, are there native tools to help read them efficiently? EDIT: Thanks for the suggestions, but do Code::Blocks or CodeLite provide the ability to see the function the mouse clicked on without jumping to it, so I can see the caller and callee at the same time?

    Read the article

  • Is individual code ownership important?

    - by Jim Puls
    I'm in the midst of an argument with some coworkers over whether team ownership of the entire codebase is better than individual ownership of components of it. I'm a huge proponent of assigning every member of the team a roughly equal share of the codebase. It lets people take pride in their creation, gives the bug screeners an obvious first place to assign incoming tickets, and helps to alleviate "broken window syndrome". It also concentrates knowledge of specific functionality with one (or two) team members making bug fixes much easier. Most of all, it puts the final say on major decisions with one person who has a lot of input instead of with a committee. I'm not advocating for requiring permission if somebody else wants to change your code; maybe have the code review always be to the owner, sure. Nor am I suggesting building knowledge silos: there should be nothing exclusive about this ownership. But when suggesting this to my coworkers, I got a ton of pushback, certainly much more than I expected. So I ask the community: what are your opinions on working with a team on a large codebase? Is there something I'm missing about vigilantly maintaining collective ownership?

    Read the article

  • DDDNorth2 Bradford, 13th October 2012 - Async Patterns presentation and source code

    - by Liam Westley
    Many thanks to Andy Westgarth and his team for organising a fantastic conference at the rather elegant Bradford University School of Management. Also, a big congratulations to all the delegates who gave up their free time to come and hear us speak and who were, in general, enthusiastic and asked some cracking questions to keep us speakers on our toes. For those who attended my Async session, my source code and presentation are now available on GitHub, https://github.com/westleyl/DDDNorth2-AsyncPatterns If you are new to Git then the easiest client to install is GitHub for Windows, a graphical UI for accessing GitHub. Personally, I also have TortoiseGit installed – the file explorer add-in that works in a familiar manner to TortoiseSVN. As I mentioned during the presentation, I have not included the sample data (the music files) in the source code placed on GitHub, but I have included instructions on how to download them from http://silents.bandcamp.com and place them in the correct folders. What I forgot to mention is that Windows Media Player by default does not play Ogg Vorbis and FLAC music files; however, you can download the codec installer for these, for free, from http://xiph.org/dshow. I am planning to break down this little project into a series of blog posts, with each pattern being a single blog post over several weeks. In these I will flesh out the background behind each pattern, the basic goal being achieved, and how to monitor the progress of the sample data being processed - basically, what I said during the presentation that is missing from the slides.

    Read the article

  • Retrofit WebForms with ASP.NET MVC - NoVa Code Camp 2010.2 Demo

    - by Soe Tun
    Thank you to everyone who attended my Retrofit WebForms with ASP.NET MVC session at NoVa Code Camp 2010.2. It was a fun event for me and I hope you had a great time and learned something from it. I wish I had more time to go over some more important topics in more detail. I *promise* I will be writing a blog post series about it, since I'll have some vacation time during the December holidays to cover some topics that I didn't get to cover in detail. Please note that the ".bak" file included in the zip file is a SQL Server database backup file. You have to restore it on your database server to run it with the source code demo. Please feel free to ask me about the demo project through Twitter or from this blog post. I'll be glad to help you out. If you want me to give this presentation at your .NET user group, please let me know and I'll be honored to speak there also. Again, thank you all and have a great holiday season. Here is the download link to my demo project zip file with the PowerPoint presentation in it. Please let me know if the link doesn't work.

    Read the article

  • Effective handling of variables in non-object oriented programming

    - by srnka
    What is the best way to use and share variables between functions in non-object-oriented programming languages? Let's say that I use 10 parameters from the DB: an ID and 9 other values linked to it. I need to work with all 10 parameters in many functions. I can do it in the following ways:
    1. Call functions using only the ID, and in every function get the other parameters from the DB. Advantage: local variables are clearly visible, and there is only one input parameter per function. Disadvantage: it's slow, and the same parameter-fetching rows appear in every function, which makes functions longer and less clear.
    2. Call functions with all 10 parameters. Advantage: working with local variables, clear function code. Disadvantage: many input parameters, which is not nice.
    3. Get the parameters as global variables once and use them everywhere. Advantage: clearer code, shorter functions, faster processing. Disadvantage: global variables mean losing control of them and the possibility of unwanted overwriting (especially when some functions should change their values).
    Maybe there is another way to implement this and make the program cleaner and more effective. Can you say which way is best for solving this issue?

    Read the article

  • What is a good way to share internal helpers?

    - by toplel32
    All my projects share the same base library that I have built up over quite some time. It contains utilities and static helper classes to assist them where .NET doesn't exactly offer what I want. Originally all the helpers were written mainly to serve an internal purpose and that has to stay that way, but sometimes they prove very useful to other assemblies. Now making them public in a reliable way is more complicated than most would think; for example, all methods that assume nullable types must now contain argument checking, while not charging internal utilities with the price of doing so. The price might be negligible, but it is far from right. While refactoring, I have revisited this case multiple times and I've come up with the following solutions so far:
    1. Have an internal and a public class for each helper. The internal class contains the actual code while the public class serves as an access point which does argument checking. Cons: the internal class requires a prefix to avoid ambiguity (the best presentation should be reserved for public types), and it isn't possible to discriminate methods that don't need argument checking.
    2. Have one class that contains both internal and public members (as conventionally implemented in the .NET framework). At first, this might sound like the best possible solution, but it has the same first unpleasant con as solution 1. Cons: internal methods require a prefix to avoid ambiguity.
    3. Have an internal class which is implemented by the public class, which overrides any members that require argument checking. Cons: it is non-static, so at least one instantiation is required. This doesn't really fit the helper class idea, since a helper generally consists of independent fragments of code and should not require instantiation. Non-static methods are also slower by a negligible degree, which doesn't really justify this option either.
    There is one general and unavoidable consequence: a lot of maintenance is necessary, because every internal member will require a public counterpart. A note on solution 1: the first consequence can be avoided by putting the two classes in different namespaces; for example, you can have the real helper in the root namespace and the public helper in a namespace called "Helpers".
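    A rough sketch of the first option, written in Java here purely for illustration (in Java "internal" maps loosely to package-private, and all the names below are invented, not from the library being described):

        import java.util.Objects;

        // Internal helper: assumes its callers have already validated their arguments,
        // so internal call sites pay no checking cost. Package-private, so not visible outside.
        final class TextUtilInternal {
            static String collapseWhitespace(String input) {
                return input.trim().replaceAll("\\s+", " ");
            }
        }

        // Public access point: validates arguments exactly once, then delegates.
        public final class TextUtil {
            private TextUtil() {}

            public static String collapseWhitespace(String input) {
                Objects.requireNonNull(input, "input");
                return TextUtilInternal.collapseWhitespace(input);
            }
        }

    The public facade carries the argument checking exactly once; internal call sites use TextUtilInternal directly and skip the cost, which is the split the first option describes.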

    Read the article

  • Source code not matching uploaded HTML file

    - by benhowdle89
    I'm not sure if this is the right place to ask, but I'm having a hugely frustrating problem with Coda and my website (I'm not sure which one is causing the issue). I'm using Coda to make changes to my website; Coda uses built-in FTP to save changes to your web page, so when you hit Save, it uploads the new file. I've been using Coda for months and never had a problem until now. I am making changes in the HTML of my index.php and hitting Save; it's successfully uploading the file but no changes are reflected in the source code in ANY browser. I even logged into cPanel on my website, i.e. www.example.com:2082, and looked at the file - the changes have been made successfully. But the actual webpage in the browser's source code shows no changes?? I have tried adding which made no difference. Interestingly, I make changes to style.css and the changes are instant. I have emptied the cache on all of my browsers but I'm still having an issue. Does this sound like a Coda problem or has anyone heard of such a thing?

    Read the article

  • What did Rich Hickey mean when he said, "All that specificity [of interfaces/classes/types] kills your reuse!"

    - by GlenPeterson
    In Rich Hickey's thought-provoking goto conference keynote "The Value of Values" at 29 minutes he's talking about the overhead of a language like Java and makes a statement like, "All those interfaces kill your reuse." What does he mean? Is that true? In my search for answers, I have run across: The Principle of Least Knowledge AKA The Law of Demeter which encourages airtight API interfaces. Wikipedia also lists some disadvantages. Kevlin Henney's Imperial Clothing Crisis which argues that use, not reuse is the appropriate goal. Jack Diederich's "Stop Writing Classes" talk which argues against over-engineering in general. Clearly, anything written badly enough will be useless. But how would the interface of a well-written API prevent that code from being used? There are examples throughout history of something made for one purpose being used more for something else. But in the software world, if you use something for a purpose it wasn't intended for, it usually breaks. I'm looking for one good example of a good interface preventing a legitimate but unintended use of some code. Does that exist? I can't picture it.

    Read the article
