Search Results

Search found 30111 results on 1205 pages for 'best practices analyzer'.

Page 43/1205 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • My Last "Catch-Up" Post for 2010 Content

    - by KKline
    I did a lot of writing in 2010. Unfortunately, I didn't do a good job of keeping all of that writing evenly distributed across the channels where I'm active. So here are a few more posts from my blog, put online during November and December 2010, that I didn't get posted here on SQLBlog.com: 1. It's Time to Upgrade! So many of my customers, and many of you, dear readers, are still on SQL Server 2005. Join Kevin Kline, SQL Server MVP and SQL Server Technology Strategist...(read more)

    Read the article

  • How should I make progress further as a programmer?

    - by mushfiq
    Hello, I recently finished my undergraduate degree in computer engineering. During college I tried to do some freelancing in the local market; in my final year I succeeded and earned some small amounts from Joomla, WordPress, and Visual Basic based jobs, plus a few small PHP/MySQL projects. After finishing my undergraduate studies, I sat a written test for a Python programmer position and luckily got the job, and I have been working there since (it's a small software firm that does most of its work in Python). Day by day I have gained experience with core Python. Meanwhile, a US-based web services firm called me for an interview, and after three steps (oral + mini coding project + final oral) they selected me, to my amazement. I am going to join them within a few days; there I will work in Python with the Django framework, of which I know only the basics. My problem is that while I was starting to work with Python, I simultaneously worked on oDesk as a WordPress, Joomla, Drupal, and PHP developer. Nowadays I feel that I am becoming a jack of all trades and master of none: I am familiar with several popular web technologies but expert in none of them. I want to make myself skilled. How should I organize myself to become a skilled web programmer?

    Read the article

  • Is writing software in the absence of requirements a skill to possess or a situation I should avoid?

    - by Brian Reindel
    I find that some software developers are very adept at this, and are often praised for their ability to deliver a working concept from abstract requirements. Frankly, this drives me crazy, and I don't like "making it up" as I go. I used to think this was problematic, but I've started to sense a shift, and I'm wondering if I need to adjust my thought (and programming) process when given very little direction. Should I begin to acquire this ability as a skill, or stick to the idea that requirements gathering and business rules are the first priority?

    Read the article

  • Knowing so much, but applying it is a problem?

    - by Moaz ELdeen
    In my work, my friends always tell me that I know so much about computer science, electronics engineering, etc., but I have difficulty applying it, and my code is crap. How do I solve that problem? Will I get better, or is programming just not my career? For example, I know the octree that is used for space partitioning in games and for optimization; did I ever implement one? No, but I know it in principle. Do I know algorithms like sorting and searching? Yes, and I know them pretty well, but I haven't implemented them. When I get a task, I struggle to apply the things that I know...

    Read the article

  • Is it common to use the command pattern for property get/sets?

    - by k rey
    Suppose I have a controller class with a bunch of properties. Each time a property is changed, I would like to update the model. Now also suppose that I use the command pattern to perform model updates. Is it common to use command classes within the property getters and setters of the controller class, or is there a better way? Here is an example of what I am currently using:

        class MyController
        {
            private int _myInt;

            public int MyInt
            {
                get { return _myInt; }
                set
                {
                    _myInt = value;
                    MyCommand cmd = new MyCommand();
                    cmd.Argument = value;
                    cmd.Execute(); // Command object updates the model
                }
            }
        }
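
    One way this idea is often generalized is to hand the command both the old and the new value, so the same setter plumbing can later support undo. The following is purely a sketch under assumed names: PropertyChangeCommand and its delegate are invented for illustration and are not from the question.

        using System;

        // Hypothetical command carrying both the old and the new value,
        // so the same object could also support undo.
        class PropertyChangeCommand
        {
            private readonly Action<int> _apply; // delegate that applies a value
            private readonly int _oldValue;
            private readonly int _newValue;

            public PropertyChangeCommand(Action<int> apply, int oldValue, int newValue)
            {
                _apply = apply;
                _oldValue = oldValue;
                _newValue = newValue;
            }

            public void Execute() { _apply(_newValue); } // apply the new value
            public void Undo()    { _apply(_oldValue); } // restore the previous value
        }

        class MyController
        {
            private int _myInt;

            public int MyInt
            {
                get { return _myInt; }
                set
                {
                    var cmd = new PropertyChangeCommand(v => _myInt = v, _myInt, value);
                    cmd.Execute(); // here the delegate simply writes the field back
                }
            }
        }

    Whether the command is built inside every setter or in one shared helper is mostly a matter of taste; the sketch only shows that the pattern itself fits naturally into a property setter.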

    Read the article

  • Is it good to review programs with seniors and the boss even if they are working fine?

    - by Himanshu
    In my company, before delivery of any project, my boss asks my seniors to review programs written by me or other team members, and sometimes the boss also sits with us for the review. I think it is a good way of gaining knowledge, but sometimes when programs are working fine, they don't work the same after review and I need to look into my program again. They say that review helps to optimize the program and query execution, but should we prefer optimization over the actual functioning of the program?

    Read the article

  • How do you author code?

    - by garbagecollector
    This is something I was never taught. I have seen a lot of different authoring styles. I code primarily in Java and Python. I was wondering if there is a standard authoring style, or if it's all freestyle. Also, if you answer, would you mind attaching the style you use to author the files you create at home or at work? I usually just go @author garbagecollector @company garbage inc.

    Read the article

  • Getting solutions off the internet. Bad or Good? [closed]

    - by Prometheus87
    I was looking on the internet for common interview questions and came upon one about finding the occurrences of certain characters in an array. The solution written right below it was, in my opinion, very elegant. But then another thought came to my mind: this solution is way better than what occurred to me. So even though I now know the solution (which probably someone with a better IQ provided), my IQ is still the same. Even though I may know the answer, it still isn't mine; hence, if that question were asked and I were hired based on my answer to it, I couldn't reproduce the same elegance in my other work within the organization. My question is: what do you think about such "borrowed intelligence"? Is it good? Do you feel that finding solutions off the internet makes you think in that same, more elegant way?
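
    For context, a typical compact solution to that counting problem is a single pass with a counting table. The sketch below is only an assumption of what the "elegant" answer looked like, since the original is not quoted in the question:

        using System;
        using System.Collections.Generic;

        class CharCount
        {
            static void Main()
            {
                var text = "best practices analyzer";
                var counts = new Dictionary<char, int>();

                // One pass: bump the counter for each character as it is seen.
                foreach (var c in text)
                    counts[c] = counts.ContainsKey(c) ? counts[c] + 1 : 1;

                foreach (var pair in counts)
                    Console.WriteLine("'" + pair.Key + "': " + pair.Value);
            }
        }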

    Read the article

  • Default values - are they good or evil?

    - by Andrew
    This question is about default values in general: default function return values, default parameter values, default logic for when something is missing, default logic for handling exceptions, default logic for handling edge conditions, etc. For a long time I considered default values to be a "pure evil" thing, something that "cloaks the catastrophe" and results in very hard to find bugs. But recently I started to think about default values as a sort of technical debt, which is not a straightforwardly bad thing but something that could provide some "short-term financing" to get us through the project (how many of us could afford to buy a house without taking out a mortgage?). When I say "short term", I don't mean "do something quickly first and refactor it out later, before it hits production". No, I am talking about relying on hardcoded default values in production software. Granted, it could cause some issues, but what if it is only going to cause a single problem in a whole year? Again, I am talking about "average" mainstream software here (not software for a nuclear power station): the average web site or a UI application for accounting software, meaning that people's lives are not at stake, nor millions of dollars. Again, from my experience, business users would rather live with software which "works somehow" than wait for a perfect one. And the use of default values helps a lot if you develop software in a RAD style. But again, the longest debug sessions I have had were because of bugs introduced by a default value which either stopped being "a default" along the way, or because a small subsystem was recently upgraded and as a result no longer handles the default correctly (e.g. empty list vs. null, or null string vs. empty string). So my question is: are default values good or evil? And if they are a technical debt, how do you measure how much you can borrow so you can afford the repayments? Would really appreciate any input. Cheers. EDIT: If I am using default values as a way to cut corners during development, and the corner-cutting results in bugs and issues, what is the methodology for recovering from these issues?
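
    The empty-list-versus-null trap mentioned above is easy to reproduce. Here is a minimal illustrative sketch (FindOrders is a hypothetical method invented for the example) of how a hardcoded null default survives the happy path and fails much later:

        using System;
        using System.Collections.Generic;

        static class OrderLookup
        {
            // Hypothetical lookup: returns null as its "default" when nothing matches.
            static List<string> FindOrders(string customer)
            {
                if (customer == "known") return new List<string> { "order-1" };
                return null; // the hardcoded default that "cloaks the catastrophe"
            }

            static void Main()
            {
                // Works for the happy path...
                foreach (var o in FindOrders("known")) Console.WriteLine(o);

                // ...but throws NullReferenceException for the unhappy one.
                // Returning an empty list would make both paths behave the same.
                foreach (var o in FindOrders("unknown")) Console.WriteLine(o);
            }
        }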

    Read the article

  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test/interview: Implement the strcpy() function in C:

        void strcpy(char *destination, char *source);

    The strcpy function copies the C string pointed to by source into the array pointed to by destination, including the terminating null character. Assume that the size of the array pointed to by destination is long enough to contain the same C string as source, and does not overlap in memory with source. Say you were the tester: how would you grade the following answers to this question?

    1)

        void strcpy(char *destination, char *source)
        {
            while (*source != '\0') {
                *destination = *source;
                source++;
                destination++;
            }
            *destination = *source;
        }

    2)

        void strcpy(char *destination, char *source)
        {
            while (*(destination++) = *(source++))
                ;
        }

    The first implementation is straightforward: it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand how this code works, and if you're not familiar with the operator precedence involved and the value of an assignment expression, it's a problem. I'm wondering if the second answer would show more complexity and more advanced thinking in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since making an algorithm this compact is more difficult, it will show a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer bad precisely because it's not readable. I would also like to mention that this is not specific to this example, but general to code readability vs. compactness when implementing an algorithm, specifically in tests/interviews.

    Read the article

  • How to handle a compensation issue

    - by Ali
    I consider myself an expert software developer. Recently, I noticed my current company posted a new job through a recruiting firm requiring half the experience I have and an even smaller set of skills, yet offering the same salary as my current salary. When I joined my current company a year ago, they declined to pay my asking salary. My evaluations are good, and there are critical projects in the pipeline where my involvement is crucial to their success. I'm a little confused about how to handle this situation; I don't want to come across as threatening or anything like that.

    Read the article

  • How can a solo programmer become a good team player?

    - by Nick
    I've been programming (obsessively) since I was 12. I am fairly knowledgeable across the spectrum of languages out there, from assembly, to C++, to Javascript, to Haskell, Lisp, and Qi. But all of my projects have been by myself. I got my degree in chemical engineering, not CS or computer engineering, but for the first time this fall I'll be working on a large programming project with other people, and I have no clue how to prepare. I've been using Windows all of my life, but this project is going to be very unix-y, so I purchased a Mac recently in the hopes of familiarizing myself with the environment. I was fortunate to participate in a hackathon with some friends this past year -- both CS majors -- and excitingly enough, we won. But I realized as I worked with them that their workflow was very different from mine. They used Git for version control. I had never used it at the time, but I've since learned all that I can about it. They also used a lot of frameworks and libraries. I had to learn what Rails was pretty much overnight for the hackathon (on the other hand, they didn't know what lexical scoping or closures were). All of our code worked well, but they didn't understand mine, and I didn't understand theirs. I hear references to things that real programmers do on a daily basis -- unit testing, code reviews, but I only have the vaguest sense of what these are. I normally don't have many bugs in my little projects, so I have never needed a bug tracking system or tests for them. And the last thing is that it takes me a long time to understand other people's code. Variable naming conventions (that vary with each new language) are difficult (__mzkwpSomRidicAbbrev), and I find the loose coupling difficult. That's not to say I don't loosely couple things -- I think I'm quite good at it for my own work, but when I download something like the Linux kernel or the Chromium source code to look at it, I spend hours trying to figure out how all of these oddly named directories and files connect. It's a programming sin to reinvent the wheel, but I often find it's just quicker to write up the functionality myself than to spend hours dissecting some library. Obviously, people who do this for a living don't have these problems, and I'll need to get to that point myself. Question: What are some steps that I can take to begin "integrating" with everyone else? Thanks!

    Read the article

  • Is it good practice to use functions just to centralize common code?

    - by EpsilonVector
    I run across this problem a lot. For example, I am currently writing a read function and a write function, and they both check whether buf is a NULL pointer and whether the mode variable is within certain boundaries. This is code duplication, and it can be solved by moving the checks into their own function. But should I? That would be a pretty anemic function (it doesn't do much), rather localized (so not general-purpose), and it doesn't stand well on its own (you can't figure out what it's for unless you see where it is used). Another option is to use a macro, but I want to talk about functions in this post. So, should you use a function for something like this? What are the pros and cons?
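
    As a concrete illustration of the trade-off, here is a minimal C# transliteration of the situation (Device, Read, Write, and ValidateArgs are hypothetical names for this sketch): the duplicated checks collapse into one small private helper, anemic but single-purpose.

        using System;

        class Device
        {
            private const int MinMode = 0, MaxMode = 3;

            // The "anemic" helper: it exists purely to centralize the duplicated checks.
            private static void ValidateArgs(byte[] buf, int mode)
            {
                if (buf == null)
                    throw new ArgumentNullException("buf");
                if (mode < MinMode || mode > MaxMode)
                    throw new ArgumentOutOfRangeException("mode");
            }

            public int Read(byte[] buf, int mode)
            {
                ValidateArgs(buf, mode);
                return 0; // actual read elided
            }

            public void Write(byte[] buf, int mode)
            {
                ValidateArgs(buf, mode);
                // actual write elided
            }
        }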

    Read the article

  • What are the problems with a relatively large common library?

    - by Sam Pearson
    As long as the code in the base library is as loosely coupled as it would be if split into separate libraries, what's the problem? In general, having a lot of assemblies composing a .NET solution is painful. Plus, when code in one solution needs to be shared, it can just be added to the common library, rather than deciding which common library it should be added to or creating yet another library. Edit: the question comes to me after using Smalltalk for a bit, where all the code is available to use, all the time.

    Read the article

  • Struggling with the Single Responsibility Principle

    - by AngryBird
    Consider this example: I have a website. It allows users to make posts (which can be anything) and add tags that describe the post. In the code, I have two classes that represent the post and the tags; let's call these classes Post and Tag. Post takes care of creating, deleting, and updating posts; Tag takes care of creating, deleting, and updating tags. One operation is missing: the linking of tags to posts. I am struggling with which class should own this operation, because it could fit equally well in either. On one hand, the Post class could have a function that takes a Tag as a parameter and stores it in a list of tags. On the other hand, the Tag class could have a function that takes a Post as a parameter and links the Tag to the Post. The above is just an example of my problem; I am actually running into this with multiple classes that are all similar. Short of actually putting the functionality in both classes, what conventions or design styles exist to help me solve this problem? I am assuming there has to be something short of just picking one. Or maybe putting it in both classes is the correct answer?
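
    One commonly suggested way out, sketched below purely as an illustration (PostTagLink is an invented name, not from the question), is to give the association its own class, so neither Post nor Tag has to own the linking:

        using System.Collections.Generic;

        class Post { public int Id; /* creating, deleting, updating posts... */ }
        class Tag  { public int Id; /* creating, deleting, updating tags... */ }

        // A third class whose single responsibility is the association itself.
        class PostTagLink
        {
            private readonly Dictionary<int, HashSet<int>> _tagIdsByPostId =
                new Dictionary<int, HashSet<int>>();

            public void Link(Post post, Tag tag)
            {
                HashSet<int> tags;
                if (!_tagIdsByPostId.TryGetValue(post.Id, out tags))
                    _tagIdsByPostId[post.Id] = tags = new HashSet<int>();
                tags.Add(tag.Id);
            }

            public IEnumerable<int> TagIdsFor(Post post)
            {
                HashSet<int> tags;
                return _tagIdsByPostId.TryGetValue(post.Id, out tags)
                    ? (IEnumerable<int>)tags
                    : new int[0];
            }
        }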

    Read the article

  • Programming Interview Question [duplicate]

    - by user136494
    This question already has an answer here: How to prepare yourself for programming interview questions? [duplicate] 6 answers. I have an upcoming interview in a couple of days and had a question for you all. I've heard that programming interviews include whiteboard problems, where you solve a simple problem on a whiteboard. My questions are: How many whiteboard problems do you typically have to solve? Is there more than one? What are examples of whiteboard problems; is FizzBuzz one of them? Where can I find practice problems for them? Does anyone know of any good websites?
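
    For what it's worth, FizzBuzz is indeed a classic whiteboard screening problem. A minimal sketch of the usual statement (print 1 through 100, but "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for both) looks like this:

        using System;

        class FizzBuzz
        {
            static void Main()
            {
                for (int i = 1; i <= 100; i++)
                {
                    if (i % 15 == 0) Console.WriteLine("FizzBuzz"); // multiple of 3 and 5
                    else if (i % 3 == 0) Console.WriteLine("Fizz");
                    else if (i % 5 == 0) Console.WriteLine("Buzz");
                    else Console.WriteLine(i);
                }
            }
        }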

    Read the article

  • Oracle support note for a Leap Second Hang problem that may result in 100% CPU utilization in Linux environments

    - by Anand Akela
    On or around July 1, 2012, Oracle became aware of an issue on Linux distributions resulting from the introduction of the leap second; this is causing problems for some customers. Leap seconds may be introduced at the end of June or December in a calendar year, like 2012, as necessary to maintain time standards. Servers hosting Oracle products which are clients of an NTP (Network Time Protocol) server may be particularly susceptible to this issue as the NTP server is updated. Linux distributions which may be affected include Oracle Enterprise Linux, Red Hat Enterprise Linux, Oracle VM and the Oracle Unbreakable Enterprise Kernel. Asianux 2 and 3, based on RHEL 4 and 5, may also be affected. One report of high agent CPU on SLES11 being corrected using Note 1472421.1 has also been received. Not all customers will be affected, but those who are may observe higher than normal CPU consumption in their Linux environments where JVMs are utilized. In Oracle Enterprise Manager (EM), this problem can manifest itself as high CPU consumption by the EM Agent process (which runs on a JVM in EM 12c, for instance). It is possible that the OMS is also affected. We advise customers to review the description of this problem in MOS Note 1472651.1 and take action if they observe that their environment is affected. Contributed by Andrew Bulloch, Director, Application Systems Management Products

    Read the article

  • Is committing/checking in code every day a good practice?

    - by ArtB
    I've been reading Martin Fowler's note on Continuous Integration, and he lists as a must: "Everyone Commits To the Mainline Every Day". I do not like to commit code unless the section I'm working on is complete, and in practice I commit my code every three days: one day to investigate/reproduce the task and make some preliminary changes, a second day to complete the changes, and a third day to write the tests and clean it up^ for submission. I would not feel comfortable submitting the code sooner. Now, I pull changes from the repository and integrate them locally usually twice a day, but I do not commit that often unless I can carve out a smaller piece of work. Question: is committing every day such a good practice that I should change my workflow to accommodate it, or is it not that advisable? ^ The order is somewhat arbitrary and depends on the task; my point was to illustrate the time span and activities, not the exact sequence.

    Read the article

  • Which language and platform features really boosted your coding speed?

    - by Serge
    The question is about delivering working code faster without any regard for design, quality, maintainability, etc. Here is the list of things that help me to write and read code faster: Language: static typing, support for object-oriented and functional programming styles, embedded documentation, short compile-debug-fix cycle or REPL, automatic memory management Platform: "batteries" included (text, regex, IO, threading, networking), thriving community, tons of open-source libs Tools: IDE, visual debugger, code-completion, code navigation, refactoring

    Read the article

  • Could someone break this nasty habit of mine please?

    - by MimiEAM
    I recently graduated in CS and was mostly unsatisfied, since I realized that I had received only a basic theoretical introduction to a wide range of subjects (which is what college is supposed to do, but still...). Anyway, I got into the habit of spending a lot of time looking for implementations of concepts, and upon finding them I would use them as guides to writing my own implementations, just for fun. But now I feel like the only way I can fully understand a new concept is to implement it from scratch, no matter how unoptimized the result may be. This behavior leads me to choose the hard, time-consuming way by default instead of using a nicely written library, until I hit my head against a huge wall and then go looking for a library that works for my purpose... Does anyone else do this, and why? It seems so weird; why would anyone (including me) do it? Is it a bad practice, and if so, how can I stop?

    Read the article

  • Are XML Comments Necessary Documentation?

    - by Bob Horn
    I used to be a fan of requiring XML comments for documentation. I've since changed my mind for two main reasons: Like good code, methods should be self-explanatory. In practice, most XML comments are useless noise that provide no additional value. Many times we simply use GhostDoc to generate generic comments, and this is what I mean by useless noise:

        /// <summary>
        /// Gets or sets the unit of measure.
        /// </summary>
        /// <value>
        /// The unit of measure.
        /// </value>
        public string UnitOfMeasure { get; set; }

    To me, that's obvious. Having said that, if there were special instructions to include, then we should absolutely use XML comments. I like this excerpt from this article: "Sometimes, you will need to write comments. But, it should be the exception not the rule. Comments should only be used when they are expressing something that cannot be expressed in code. If you want to write elegant code, strive to eliminate comments and instead write self-documenting code." Am I wrong to think we should only be using XML comments when the code isn't enough to explain itself on its own? I believe this is a good example where XML comments make pretty code look ugly. It takes a class like this...

        public class RawMaterialLabel : EntityBase
        {
            public long Id { get; set; }
            public string ManufacturerId { get; set; }
            public string PartNumber { get; set; }
            public string Quantity { get; set; }
            public string UnitOfMeasure { get; set; }
            public string LotNumber { get; set; }
            public string SublotNumber { get; set; }
            public int LabelSerialNumber { get; set; }
            public string PurchaseOrderNumber { get; set; }
            public string PurchaseOrderLineNumber { get; set; }
            public DateTime ManufacturingDate { get; set; }
            public string LastModifiedUser { get; set; }
            public DateTime LastModifiedTime { get; set; }
            public Binary VersionNumber { get; set; }
            public ICollection<LotEquipmentScan> LotEquipmentScans { get; private set; }
        }

    ...and turns it into this:

        /// <summary>
        /// Container for properties of a raw material label
        /// </summary>
        public class RawMaterialLabel : EntityBase
        {
            /// <summary>
            /// Gets or sets the id.
            /// </summary>
            /// <value>
            /// The id.
            /// </value>
            public long Id { get; set; }

            /// <summary>
            /// Gets or sets the manufacturer id.
            /// </summary>
            /// <value>
            /// The manufacturer id.
            /// </value>
            public string ManufacturerId { get; set; }

            /// <summary>
            /// Gets or sets the part number.
            /// </summary>
            /// <value>
            /// The part number.
            /// </value>
            public string PartNumber { get; set; }

            /// <summary>
            /// Gets or sets the quantity.
            /// </summary>
            /// <value>
            /// The quantity.
            /// </value>
            public string Quantity { get; set; }

            /// <summary>
            /// Gets or sets the unit of measure.
            /// </summary>
            /// <value>
            /// The unit of measure.
            /// </value>
            public string UnitOfMeasure { get; set; }

            /// <summary>
            /// Gets or sets the lot number.
            /// </summary>
            /// <value>
            /// The lot number.
            /// </value>
            public string LotNumber { get; set; }

            /// <summary>
            /// Gets or sets the sublot number.
            /// </summary>
            /// <value>
            /// The sublot number.
            /// </value>
            public string SublotNumber { get; set; }

            /// <summary>
            /// Gets or sets the label serial number.
            /// </summary>
            /// <value>
            /// The label serial number.
            /// </value>
            public int LabelSerialNumber { get; set; }

            /// <summary>
            /// Gets or sets the purchase order number.
            /// </summary>
            /// <value>
            /// The purchase order number.
            /// </value>
            public string PurchaseOrderNumber { get; set; }

            /// <summary>
            /// Gets or sets the purchase order line number.
            /// </summary>
            /// <value>
            /// The purchase order line number.
            /// </value>
            public string PurchaseOrderLineNumber { get; set; }

            /// <summary>
            /// Gets or sets the manufacturing date.
            /// </summary>
            /// <value>
            /// The manufacturing date.
            /// </value>
            public DateTime ManufacturingDate { get; set; }

            /// <summary>
            /// Gets or sets the last modified user.
            /// </summary>
            /// <value>
            /// The last modified user.
            /// </value>
            public string LastModifiedUser { get; set; }

            /// <summary>
            /// Gets or sets the last modified time.
            /// </summary>
            /// <value>
            /// The last modified time.
            /// </value>
            public DateTime LastModifiedTime { get; set; }

            /// <summary>
            /// Gets or sets the version number.
            /// </summary>
            /// <value>
            /// The version number.
            /// </value>
            public Binary VersionNumber { get; set; }

            /// <summary>
            /// Gets the lot equipment scans.
            /// </summary>
            /// <value>
            /// The lot equipment scans.
            /// </value>
            public ICollection<LotEquipmentScan> LotEquipmentScans { get; private set; }
        }
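
    By way of contrast, here is a sketch of the kind of XML comment that does earn its keep, because it records something the signature cannot express (the remark shown is invented for the example, not taken from the class above):

        /// <summary>
        /// Quantity printed on the label.
        /// </summary>
        /// <remarks>
        /// Deliberately a string, not a number: some suppliers send values such as
        /// "12 EA" or "3.5 KG", and the raw text must round-trip unchanged for audits.
        /// </remarks>
        public string Quantity { get; set; }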

    Read the article

  • What are some reasonable stylistic limits on type inference?

    - by Jon Purdy
    C++0x adds pretty darn comprehensive type inference support. I'm sorely tempted to use it everywhere possible to avoid undue repetition, but I'm wondering if removing explicit type information all over the place is such a good idea. Consider this rather contrived example:

    Foo.h:

        #include <set>

        class Foo {
        private:
            static std::set<Foo*> instances;

        public:
            Foo();
            ~Foo();

            // What does it return? Who cares! Just forward it!
            static decltype(instances.begin()) begin() { return instances.begin(); }
            static decltype(instances.end()) end() { return instances.end(); }
        };

    Foo.cpp:

        #include <Foo.h>
        #include <Bar.h>

        // The type need only be specified in one location!
        // But I do have to open the header to find out what it actually is.
        decltype(Foo::instances) Foo::instances;

        Foo::Foo() {
            // What is the type of x?
            auto x = Bar::get_something();

            // What does do_something() return?
            auto y = x.do_something(*this);

            // Well, it's convertible to bool somehow...
            if (!y) throw "a constant, old school";

            instances.insert(this);
        }

        Foo::~Foo() {
            instances.erase(this);
        }

    Would you say this is reasonable, or is it completely ridiculous? After all, especially if you're used to developing in a dynamic language, you don't really need to care all that much about the types of things, and you can trust that the compiler will catch any egregious abuses of the type system. But for those of you who rely on editor support for method signatures, you're out of luck, so using this style in a library interface is probably bad practice. I find that writing things with all possible types implicit actually makes my code a lot easier for me to follow, because it removes nearly all of the usual clutter of C++. Your mileage may, of course, vary, and that's what I'm interested in hearing about. What are the specific advantages and disadvantages of radical use of type inference?

    Read the article

  • Is it a bad idea to list every function/method argument on a new line and why?

    - by dgnball
    I work with someone who, every time they call a function, puts each argument on its own line, e.g.

        aFunction(
            byte1,
            short1,
            int1,
            int2,
            int3,
            int4,
            int5
        );

    I find this very annoying, as it means the code isn't very compact, so I have to scan up and down more to make any sense of the logic. I'm interested to know whether this is actually bad practice, and if so, how can I persuade them not to do it?

    Read the article

  • How do you track bugs in your personal projects?

    - by bedwyr
    I'm trying to decide if I need to reassess my defect-tracking process for my home-grown projects. For the last several years, I have really just tracked defects using TODO tags in the code, keeping track of them in a specific view (I use Eclipse, which has a decent tagging system). Unfortunately, I'm starting to wonder if this system is unsustainable. The defects I find are typically associated with the snippet of code I'm working on; bugs which are not immediately understood tend to be forgotten or ignored. I wrote an application for my wife which has had a severe defect for almost nine months, and I keep forgetting to fix it. What mechanism do you use to track defects in your personal projects? Do you have a specific system, or a process for prioritizing and managing them?
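
    As a small illustration of the tag-based approach, structured tags can carry the priority and date information that plain TODOs lose; the convention below is just one possible scheme (the tags, priorities, and dates are invented for the example, not anything Eclipse mandates):

        // TODO  (P3, opened 2012-02-10): extract the duplicated parsing into a helper
        // FIXME (P1, opened 2011-10-01): crashes when the input file is empty;
        //                                still unfixed after almost nine months
        public void ImportFile(string path)
        {
            // ...
        }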

    Read the article

  • Database Maintenance Scripting Done Right

    - by KKline
    I first wrote about useful database maintenance scripts on my SQLBlog account way back in 2008. Hmmm - now that I think about it, I first wrote about my own useful database maintenance scripts in a journal called SQL Server Professional back in the mid-1990s, on SQL Server v6.5 or some such. But I digress... Anyway, I pointed out a couple of useful sites where you could get good scripts to take care of preventative maintenance on your SQL Server, such as index defragmentation, updating...(read more)

    Read the article
