Search Results

Search found 4617 results on 185 pages for 'coding crow'.

Page 14 of 185

  • What is a widely accepted term for a string variable that would probably contain a file path and file name?

    - by Peter Turner
    For functions that need to index files in a directory and rename them FileName0001, FileName0002, etc., I often need to write a function that splits the file name from the file path and renames the file. When I put the file name and file path back together, I don't have a good name for the variable that holds both of them, so I usually just wind up concatenating them every time I want to use them (usually as parameters for functions labeled either filename or filepath). As a result, I never really know what I'm doing until I notice a lot of files being written in the same directory as my binaries. So, what do I call the combination of a file name and a file path? I don't want to call it File, because that usually means the binary information behind the file. I don't want to call it URI, because that usually implies some sort of protocol, which I don't have. I just want a good way to denote "c:\somedir\somedir\somedir\somefile.txt" so as to untangle the mess I've just realized I'm in. Please don't just list your personal preference; I think an excellent answer should "site" its sources (as in, provide a link to a repository with a good example of the code being used as I described).
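
    A common convention is to call the combined value the "full path" (or "absolute path") and to build it in one place rather than re-concatenating it everywhere. A minimal Python sketch, with purely illustrative names:

        import os

        directory = r"c:\somedir\somedir\somedir"  # the file path
        file_name = "somefile.txt"                 # the file name

        # "full_path" names the combination of path and file name
        full_path = os.path.join(directory, file_name)

        # os.path.split() recovers the two parts again
        directory, file_name = os.path.split(full_path)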

    Read the article

  • Should I use title case in URLs?

    - by Amadiere
    We are currently deciding on a consistent naming convention across a site with multiple web applications. Historically, I've been an advocate of the "lowercase all the letters!" approach when creating URLs: http://example.com/mysystem/account/view/1551 However, within the last year or two, specifically since I began using ASP.NET MVC and had more dealings with REST-based URLs, I've become a fan of capitalizing the first letter of each section/word within the URL, as it makes it easier to read (IMHO): http://example.com/MySystem/Account/View/1551 We're not in a situation where people need to read or understand the URLs, so that's not a driver per se. The main thing we are after is a consistent approach that is rational and makes sense. Are there any standards that declare one way or the other good practice, or issues that we may run into on (at least realistically modern) setups that would favour one over the other? What is the general consensus in this debate currently?

    Read the article

  • What is a generic term for name/identifier? (as opposed to label)

    - by d3vid
    I need to refer to a number of things that have both an identifier value (used in code and configuration) and a human-readable label. These things include: database columns, dropdown items, subapplications, and objects stored in a dictionary. I want two unambiguous terms: one to refer to the identifier/value/key, and one to refer to the label. As you can see, I'm pretty settled on the latter :) For the former, identifier seems best (not everything is strictly a key, and value and name could refer to the label; although identifier usually refers only to a variable name), but I would prefer to follow an established practice if there is one. Is there an established term for this? (Please provide a source.) If not, are there any examples of such a choice from a significant source (the Java APIs, MSDN, a big FLOSS project)? (I wasn't sure if this should be posted here or to English Language & Usage. I thought this was the more appropriate expert audience. Happy to migrate if not.)
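
    To make the distinction concrete, a hypothetical Python sketch of the pairing in question (the names "key" and "label" are just one possible choice):

        from collections import namedtuple

        # "key": the identifier used in code and configuration
        # "label": the human-readable text shown to users
        Option = namedtuple("Option", ["key", "label"])

        statuses = [
            Option(key="in_progress", label="In Progress"),
            Option(key="done", label="Done"),
        ]

        # code looks things up by key...
        by_key = {option.key: option for option in statuses}

        # ...while the UI displays the label
        for option in statuses:
            print(option.key, "->", option.label)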

    Read the article

  • CSS naming guidelines for elements with multiple classes

    - by ryanzec
    It seems like there are two ways to handle naming classes for elements that are designed to have multiple classes. One way would be: <span class="btn btn-success"></span> This is what Twitter Bootstrap uses. Another possibility would be: <span class="btn success"></span> It seems like the Zurb Foundation uses this method. Now, the benefit of the first that I can see is that there is less chance of outside CSS interfering with the styling, as the class name btn-success is not as common as the class name success. The benefit of the second, as I see it, is that there is less typing and potentially better style reuse. Are there any other benefits/disadvantages of either option, and is one of them more popular than the other?
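
    To make the interference concern concrete, a hypothetical CSS sketch of the two conventions (the rules and colors are invented for illustration):

        /* Convention 1 (Bootstrap-style): the modifier is namespaced */
        .btn-success { background: green; }

        /* Convention 2 (Foundation-style): the modifier is only
           meaningful in combination with the base class */
        .btn.success { background: green; }

        /* The risk with convention 2: a generic rule elsewhere in the
           site can collide with the unqualified "success" class */
        .success { color: darkgreen; }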

    Read the article

  • Is there something better than a StringBuilder for big blocks of SQL in the code

    - by Eduardo Molteni
    I'm just tired of making a big SQL statement, testing it, and then pasting the SQL into the code and adding all the sqlstmt.append(" at the beginning and the ") at the end. It's 2011; isn't there a better way to handle a big chunk of strings inside code? Please: don't suggest stored procedures or ORMs. Edit: Found the answer using XML literals and CData. Thanks to all the people who actually tried to answer the question without questioning me for not using ORMs or SPs, and for using VB. Edit 2: The question leaves me thinking that languages could make a better effort at supporting inline SQL with syntax colouring, etc. It would be cheaper than developing Linq2SQL. Just something like:

        dim sql = <sql>
            SELECT * ...
        </sql>
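
    For reference, the XML-literal-plus-CDATA trick mentioned in the edit looks roughly like this in VB.NET (a sketch; the table and column names are invented):

        ' The XML literal keeps the SQL verbatim; CDATA avoids having to
        ' escape characters such as < and &; .Value extracts the string.
        Dim sql As String = <sql><![CDATA[
            SELECT o.Id, o.Total, c.Name
            FROM Orders o
            INNER JOIN Customers c ON c.Id = o.CustomerId
            WHERE o.Total > 100
        ]]></sql>.Value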

    Read the article

  • Are SQL Injection vulnerabilities in a PHP application acceptable if mod_security is enabled?

    - by Austin Smith
    I've been asked to audit a PHP application. No framework, no router, no model. Pure PHP. Few shared functions. HTML, CSS, and JS all mixed together. I've discovered numerous places where SQL injection would be easily possible. There are other problems with the application (XSS vulnerabilities, rampant inline CSS, code copy-pasted everywhere) but this is the biggest. Sometimes they escape inputs, not using a prepared query or even mysql_real_escape_string(), mind you, but using addslashes(). Often, though, their queries look exactly like this (pasted from their code but with columns and variable names changed): $user = mysql_query("select * from profile where profile_id='".$_REQUEST["profile_id"]."'"); The developers in question claimed that they were unable to hack their application. I tried, and found mod_security to be enabled, resulting in HTTP 406 for some obvious SQL injection attacks. I believe there to be sophisticated workarounds for mod_security, but I don't have time to chase them down. They claim that this is a "conceptual" matter and not a "practical" one since the application can't easily be hacked. Their internal auditor agreed that there were problems, but emphasized the conceptual nature of the issues. They also use this conceptual/practical argument to defend against inline CSS and JS, absence of code organization, XSS vulnerabilities, and massive amounts of repetition. My client (rightly so, perhaps) just wants this to go away so they can launch their product. The site works. You can log in, do what you need to do, and things are visibly functional, if slow. SQL Injection would indeed be hard to do, given mod_security. Further, their talk of "conceptual vs. practical" is rhetorically brilliant, considering that my client doesn't understand web application security. I worry that they've succeeded in making me sound like an angry puritan. In many ways, this is a problem of politics, not technology, but I am at a loss. As a developer, I want to tell them to toss the whole project and start over with a new team, but I face a strong defense from the team that built it and a client who really needs to ship their product. Is my position here too harsh? Even if they fix the SQL Injection and XSS problems can I ever endorse the release of an unmaintainable tangle of spaghetti code?
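
    For context, the parameterized version of a query like the one above would look something like this (a sketch using PDO; $pdo is assumed to be an already-configured connection):

        // A prepared statement keeps the user input out of the SQL text
        $stmt = $pdo->prepare('SELECT * FROM profile WHERE profile_id = :id');
        $stmt->execute([':id' => $_REQUEST['profile_id']]);
        $user = $stmt->fetch(PDO::FETCH_ASSOC);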

    Read the article

  • Standards for how developers work on their own workstations

    - by Jon Hopkins
    We've just come across one of those situations that occasionally comes up when a developer goes off sick for a few days mid-project. There were a few questions about whether he'd committed the latest version of his code or whether there was something more recent on his local machine we should be looking at, and we had a delivery to a customer pending, so we couldn't wait for him to return. One of the other developers logged on as him to check and found a mess of workspaces, many seemingly of the same projects, with timestamps that made it unclear which one was "current" (he was prototyping some bits on versions of the project other than his "core" one). Obviously this is a pain in the neck; however, the alternative (which would seem to be strict standards for how each developer works on their own machine, to ensure that any other developer can pick things up with a minimum of effort) is likely to break many developers' personal workflows and lead to inefficiency at an individual level. I'm not talking about standards for checked-in code, or even general development standards; I'm talking about how a developer works locally, a domain generally considered (in my experience) to be almost entirely under the developer's own control. So how do you handle situations like this? Is this one of those things that just happens and has to be dealt with, the price you pay for developers being allowed to work in the way that best suits them? Or do you ask developers to adhere to standards in this area: use of specific directories, naming standards, notes on a wiki, or whatever? And if so, what do your standards cover, how strict are they, how do you police them, and so on? Or is there another solution I'm missing? [Assume for the sake of argument that the developer cannot be contacted to talk through what he was doing. Even if he could, knowing and describing which workspace is which from memory isn't going to be simple and flawless, and sometimes people genuinely can't be contacted; I'd like a solution which covers all eventualities.]

    Read the article

  • Are there deprecated practices for multithread and multiprocessor programming that I should no longer use?

    - by DeveloperDon
    In the early days of FORTRAN and BASIC, essentially all programs were written with GOTO statements. The result was spaghetti code, and the solution was structured programming. Similarly, pointers can have difficult-to-control characteristics in our programs. C++ started with plenty of pointers, but the use of references is recommended. Libraries like the STL can reduce some of our dependency on them. There are also idioms for creating smart pointers that have better characteristics, and some versions of C++ permit references and managed code. Programming practices like inheritance and polymorphism use a lot of pointers behind the scenes (just as for, while, and do structured programming generates code filled with branch instructions). Languages like Java eliminate pointers and use garbage collection to manage dynamically allocated data instead of depending on programmers to match all their new and delete statements. In my reading, I have seen examples of multi-process and multi-thread programming that don't seem to use semaphores. Do they use the same thing with different names, or do they have new ways of structuring protection of resources from concurrent use? For example, a specific example of a system for multithreaded programming with multicore processors is OpenMP. It represents a critical region as follows, without the use of semaphores, which seem not to be included in the environment:

        th_id = omp_get_thread_num();
        #pragma omp critical
        {
            cout << "Hello World from thread " << th_id << '\n';
        }

    This example is an excerpt from: http://en.wikipedia.org/wiki/OpenMP Alternatively, similar protection of threads from each other using semaphores with functions wait() and signal() might look like this:

        wait(sem);
        th_id = get_thread_num();
        cout << "Hello World from thread " << th_id << '\n';
        signal(sem);

    In this example, things are pretty simple, and a simple review is enough to show that the wait() and signal() calls are matched and that, even with a lot of concurrency, thread safety is provided. But other algorithms are more complicated and use multiple semaphores (both binary and counting) spread across multiple functions, with complex conditions that can be called by many threads. The consequences of creating deadlock, or of failing to make things thread safe, can be hard to manage. Do systems like OpenMP eliminate the problems with semaphores? Do they move the problem somewhere else? How do I transform my favorite semaphore-using algorithm to not use semaphores anymore?
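
    As one point of comparison, modern C++ (C++11 and later) can provide the same structured protection without raw semaphores; a minimal sketch using std::mutex and std::lock_guard:

        #include <iostream>
        #include <mutex>
        #include <thread>

        std::mutex cout_mutex;  // protects the shared output stream

        void hello(int th_id) {
            // The lock is released automatically when the guard goes out
            // of scope, so acquire/release cannot be mismatched the way
            // wait()/signal() pairs can.
            std::lock_guard<std::mutex> guard(cout_mutex);
            std::cout << "Hello World from thread " << th_id << '\n';
        }

        int main() {
            std::thread t1(hello, 1), t2(hello, 2);
            t1.join();
            t2.join();
            return 0;
        }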

    Read the article

  • Approach to Software Development Architecture

    - by john ryan
    Hi, I am planning to standardize the way we create our new projects. Currently we are using a 3-tier architecture, where we have a ClassLibrary project that includes our Data Access Layer and Business Layer, something like:

        Solution ClassLibrary
            > ClassLibrary Project:
                > DAL (folder) > DAL classes
                > BAL (folder) > BAL classes

    This class library DLL is referenced by our presentation layer project, which is the application (web/desktop), something like:

        Solution WebUniversitySystem
            > Libraries (folder) > ClassLibrary.dll
            > WebUniversitySystem (Project):
                > Reference: ClassLibrary.dll
                > Pages etc.

    Now, what I am planning to do is something like:

        Solution WebUniversitySystem
            > DataAccess (Project)
            > BusinessLayer (Project)
                > Reference: DAL
            > WebUniversitySystem (Project):
                > Reference: BAL
                > Pages etc.

    Is this OK, or is there a good approach that we can follow? Thanks and regards.

    Read the article

  • Pythonic use of the isinstance function?

    - by Pace
    Whenever I find myself wanting to use the isinstance() function, I usually know that I'm doing something wrong and end up changing my ways. However, in this case I think I have a valid use for it. I will use shapes to illustrate my point, although I am not actually working with shapes. I am parsing XML configuration files that look like the following:

        <square>
            <width>7</width>
        </square>
        <rectangle>
            <width>5</width>
            <height>7</height>
        </rectangle>
        <circle>
            <radius>4</radius>
        </circle>

    For each element I create an instance of the Shape class and build up a list of Shape objects in a class called ShapeContainer. Different parts of the rest of my application need to refer to the ShapeContainer to get certain shapes. Depending on what the code is doing, it might need just rectangles, or it might operate on all quadrangles, or it might operate on all shapes. I have created the following function in the ShapeContainer class (the actual function uses a list comprehension, but I have expanded it here for readability):

        def locate(self, shapeClass):
            result = []
            for shape in self.__shapes:
                if isinstance(shape, shapeClass):
                    result.append(shape)
            return result

    Is this a valid use of the isinstance function? Is there another way I can do this which might be more pythonic?
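
    For comparison, the list-comprehension form mentioned above would presumably be something like:

        def locate(self, shapeClass):
            # same filtering logic, written as a comprehension
            return [shape for shape in self.__shapes
                    if isinstance(shape, shapeClass)]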

    Read the article

  • Why Use !boolean_variable Over boolean_variable == false

    - by ell
    A comment on this question (Calling a method that returns a boolean value inside a conditional statement) says that you should use !boolean instead of boolean == false when testing conditions. Why? To me, boolean == false is much more natural in English and is more explicit. I apologise if this is just a matter of style, but I was wondering whether there is some other reason for this preference for !boolean?

    Read the article

  • What should my "code sample" look like?

    - by thesunneversets
    I've just had quite a good phone interview (for a CakePHP-related position, not that it's especially important to the question). The interviewer seemed to be impressed with my resume and personality. At the end, though, he asked me to email him a code sample from my existing work project, "to check you're not secretly a terrible programmer, ha ha!" I'm not too worried that my code can't stand on its own two feet, but I'm very much an intermediate programmer rather than an expert. What obvious pitfalls should I make sure my code sample doesn't fall into, in case they rule me out on the spot? Secondly, and this is probably the harder part of the question to answer, what features in a code sample would be so impressive that they would instantly make you much more favourably inclined towards the programmer? All ideas or suggestions welcomed!

    Read the article

  • How should I take being told that I was wrong?

    - by Chris
    On a fairly important project with short timelines, I decided to use SubSonic for straightforward data access. I wired up a handful of forms, created matching database tables and POCOs for each, and used SubSonic's simple repository mode for the data access. Everything worked well; I was able to bang these forms out pretty quickly and moved on to other things. Since that time, I have heard that using SubSonic was a 'cowboy move', that it was implemented 'incorrectly', and that 'the person who used it didn't even know how to use SubSonic'. What I would like to know is: how should I take this? There were, and still are, no standards for data access at this company, so there is no violation of a standard. The forms worked exactly as requested and saved the data to the database correctly. And by spending only a few days on the forms instead of weeks, I saved a lot of time, which was used for other functionality in the project. So in light of all of this, I am confused as to what was 'incorrect'. Am I missing something here? Thanks for your answers.

    Read the article

  • Is '@' Error Suppression a Valid Technique for Testing for an Optional Array Key?

    - by MikeSchinkel
    Rarst and I were debating offline about the use of the '@' error-suppression operator in PHP, specifically for testing the existence of "optional" array keys, i.e. array keys that are used as a switch, where their absence from the array is functionally equivalent to the array having the key with a value equal to false. Here is pseudo-code for this scenario:

        function do_something( $args = array() ) {
            if ( @$args['switch'] ) {
                // Do something with this switch
            }
            // continue on...
        }

    vs. this approach:

        function do_something( $args = array() ) {
            if ( ! empty( $args['switch'] ) && $args['switch'] ) {
                // Do something with this switch
            }
            // continue on...
        }

    Of course, in most use-cases, suppressing errors would not be A Good Thing(tm). However, in this use-case, where an array is passed with an optional element, it seems to me that it is actually a very good technique, but I could be wrong and would like to hear others' opinions on the subject before I make up my mind. I do know that there are alleged performance hits for the former approach, but I'd like to know how they compare with the alternative, and whether the performance hits really matter in real-world scenarios. P.S. I decided to post this because, after debating this offline with Rarst, he asked a more general question here on Programmers but didn't actually give a detailed example of the specific use-case we were debating. And since I'm pretty sure he'll want to use the out-of-context answers on that other question as justification for why the above is "bad", I decided I needed to get opinions on this specific use-case.
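
    For completeness, a third variant avoids both the suppression operator and the double test; a sketch (isset() suffices here because a missing key and a false value are meant to behave the same):

        function do_something( $args = array() ) {
            if ( isset( $args['switch'] ) && $args['switch'] ) {
                // Do something with this switch
            }
            // continue on...
        }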

    Read the article

  • Is extensive documentation a code smell?

    - by Griffin
    Every library, open-source project, and SDK/API I've ever come across has come packaged with a (usually large) documentation file, and this seems to contradict the widespread belief that good code needs little to no commenting. What separates documentation from this programming methodology? A one-to-two-page overview of a package seems reasonable, but elegant code combined with standard IntelliSense should, in theory, have made the practice of extensive documentation obsolete by now, IMO. I feel like companies only create detailed documentation and tutorials because it's what they've always done. Why should developers constantly have to search through online documentation to learn how to do things, when such information should be intrinsic to the classes, methods, and namespaces?

    Read the article

  • When does the "Do One Thing" paradigm become harmful?

    - by Petr
    For the sake of argument, here's a sample function that prints the contents of a given file line by line. Version 1:

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            string line;
            while (file.good()) {
                getline(file, line);
                cout << line << endl;
            }
        }

    I know it is recommended that functions do one thing at one level of abstraction. To me, though, the code above does pretty much one thing and is fairly atomic. Some books (such as Robert C. Martin's Clean Code) seem to suggest breaking the above code into separate functions. Version 2:

        void printLine(const string & line) {
            cout << line << endl;
        }

        void printLines(fstream & file) {
            string line;
            while (file.good()) {
                getline(file, line);
                printLine(line);
            }
        }

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            printLines(file);
        }

    I understand what they want to achieve (open file / read lines / print line), but isn't it a bit of an overkill? The original version is simple and in some sense already does one thing: it prints a file. The second version will lead to a large number of really small functions, which may be far less legible than the first version. Wouldn't it be better, in this case, to have the code in one place? At which point does the "Do One Thing" paradigm become harmful?

    Read the article

  • What are the standard practices for database access in .net?

    - by Gulshan
    I have seen weird database access practices in .NET. I have seen stored procedures for every database task. I have seen every database column name preceded by its table name. I have seen a fully separate layer/.dll with very little or no business logic. I have seen, alongside ORMs, separate data access layers playing the same role. And with them, I have always heard: "These are the standards you have to maintain." So, what are the real standards for data access in .NET? What rules do you follow?

    Read the article

  • Why write clean, refactored code?

    - by Shamal Karunarathne
    Hi programming lovers, This is a question I've been asking myself for a long time, so I thought I'd throw it out to you. From my experience working on several Java-based projects, I've seen tons of code that we would call 'dirty': unconventional class/method/field naming, wrong ways of handling exceptions, unnecessarily heavy loops and recursion, etc. But the code gives the intended results. Though I hate to see dirty code, it takes time to clean it up, and eventually the question arises: "Is it worth it? It's giving the desired results, so what's the point of cleaning?" In team projects, should there be someone specifically assigned to refactor and check for clean code? Or are there situations where 'dirty' code fails to give the intended results or makes the customers unhappy? Feel free to comment and reply, and tell me if I'm missing something here. Thanks.

    Read the article

  • LINQ Style preference

    - by Erin
    I have come to use LINQ in my everyday programming a lot. In fact, I rarely, if ever, use an explicit loop. I have, however, found that I don't use the SQL-like syntax anymore; I just use the extension methods. So rather than writing:

        from c in y
        where filter
        select datatransform

    I use:

        y.Where(c => filter).Select(c => datatransform)

    Which style of LINQ do you prefer, and what are others on your team comfortable with?

    Read the article

  • Any Practical Alternative to the Signals + Slots model for GUI Programming?

    - by IntermediateHacker
    The majority of GUI toolkits nowadays use the signals + slots model. It was Qt and GTK+, if I am not wrong, that pioneered it. You know, the widgets or graphical objects (sometimes even ones that aren't displayed) send signals to the main-loop handler. The main-loop handler then calls the events, callbacks, or slots assigned to that widget or graphical object. There are usually default (and in most cases virtual) event handlers already provided by the toolkit for handling all pre-defined signals; therefore, unlike previous designs where the developer had to write the entire main loop and the handler for each and every message himself (think WINAPI), the developer only has to worry about the signals he needs in order to implement new functionality. Now this design is being used in most modern toolkits as far as I know: there are Qt, GTK+, FLTK, etc.; there is Java Swing; C# even has a language feature for it (events and delegates), and Windows Forms has been developed on this design. In fact, over the last decade, this design for GUI programming has become a kind of unwritten standard, since it increases productivity and provides greater abstraction. However, my question is: Is there any alternative design that is parallel or practical for modern GUI programming? i.e. is the signals + slots design the only practical one in town? Is it feasible to do GUI programming with any other design? Are any modern (preferably successful and popular) GUI toolkits built on an alternative design?
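
    To make the model concrete, a minimal sketch of the dispatch mechanism in Python (illustrative only, not any particular toolkit's API):

        class Signal:
            """A minimal signal: slots subscribe, emit() calls them all."""
            def __init__(self):
                self._slots = []

            def connect(self, slot):
                self._slots.append(slot)

            def emit(self, *args):
                for slot in self._slots:
                    slot(*args)

        class Button:
            def __init__(self):
                self.clicked = Signal()   # widgets expose signals...

        def on_click():
            print("button was clicked")  # ...application code supplies slots

        button = Button()
        button.clicked.connect(on_click)
        button.clicked.emit()  # a real main loop would emit this on a click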

    Read the article

  • Why the recent shift to removing/omitting semicolons from Javascript?

    - by Jonathan
    It seems to be fashionable recently to omit semicolons from Javascript. There was a blog post a few years ago emphasising that in Javascript semicolons are optional, and the gist of the post seemed to be that you shouldn't bother with them because they're unnecessary. The post, widely cited, doesn't give any compelling reasons not to use them, just that leaving them out has few side effects. Even GitHub has jumped on the no-semicolon bandwagon, requiring their omission in any internally developed code, and a recent commit to the zepto.js project by its maintainer removed all semicolons from the codebase. His chief justifications were that it's a matter of preference for his team, and that it means less typing. Are there other good reasons to leave them out? Frankly, I can see no reason to omit them, and certainly no reason to go back over code to erase them. It also goes against (years of) recommended practice, and I don't really buy the "cargo cult" argument for ignoring that. So, why all the recent semicolon-hate? Is there a shortage looming? Or is this just the latest Javascript fad?
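
    For what it's worth, the canonical counter-example usually cited in this debate is automatic semicolon insertion changing a program's meaning; a sketch:

        // ASI inserts a semicolon immediately after "return", so this
        // function returns undefined, not the object literal below it.
        function getConfig() {
            return
            {
                debug: true
            }
        }

        console.log(getConfig());  // undefined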

    Read the article

  • Origins of code indentation

    - by Daniel Mahler
    I am interested in finding out who introduced code indentation, as well as when and where it was introduced. It seems so critical to code comprehension, but it was not universal. Most Fortran and Basic code was (is?) unindented, and the same goes for Cobol. I am pretty sure I have even seen old Lisp code written as continuous, line-wrapped text. You had to count brackets in your head just to parse it, never mind understanding it. So where did such a huge improvement come from? I have never seen any mention of its origin. Apart from original examples of its use, I am also looking for original discussions of indentation.

    Read the article

  • Best practice in setting return value (use else or?)

    - by Deckard
    Whenever you want to return a value from a method, but whatever you return depends on some other value, you typically use branching:

        int calculateSomething() {
            if (a == b) {
                return x;
            } else {
                return y;
            }
        }

    Another way to write this is:

        int calculateSomething() {
            if (a == b) {
                return x;
            }
            return y;
        }

    Is there any reason to avoid one or the other? Both allow adding "else if" clauses without problems. Both typically generate compiler errors if you add anything at the bottom. Note: I couldn't find any duplicates, although multiple questions exist about whether the accompanying curly braces should be on their own line. So let's not get into that.

    Read the article

  • Studies on code documentation productivity gains/losses

    - by J T
    Hi everyone, after much searching, I have failed to answer a basic question pertaining to an assumed known in the software development world. WHAT IS KNOWN: Enforcing a strict policy on adequate code documentation (be it Doxygen tags, Javadoc, or simply an abundance of comments) adds overhead to the time required to develop code. BUT: Having thorough documentation (or even an API) brings with it productivity gains (one assumes) for new and seasoned developers when they are adding features or fixing bugs down the road. THE QUESTION: Is the added development time required to guarantee such documentation offset by the gains in productivity down the road (in a strictly economic sense)? I am looking for case studies, or answers that bring with them objective evidence supporting the conclusions that are drawn. Thanks in advance!

    Read the article

  • What is the ideal length of a method?

    - by iPhoneDeveloper
    In object-oriented programming, there is no exact rule on the maximum length of a method, but I still found these two quotes somewhat contradictory, so I would like to hear what you think. In Clean Code: A Handbook of Agile Software Craftsmanship, Robert Martin says: "The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. Functions should not be 100 lines long. Functions should hardly ever be 20 lines long." And he gives an example from Java code he saw from Kent Beck: "Every function in his program was just two, or three, or four lines long. Each was transparently obvious. Each told a story. And each led you to the next in a compelling order. That's how short your functions should be!" This sounds great, but on the other hand, in Code Complete, Steve McConnell says something very different: "The routine should be allowed to grow organically up to 100-200 lines; decades of evidence say that routines of such length are no more error-prone than shorter routines." And he gives a reference to a study that says routines of 65 lines or longer are cheaper to develop. So, while there are diverging opinions about the matter, is there a functional best practice for determining the ideal length of a method?

    Read the article
