Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.

Page 97/605 | < Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >

  • What's a good approach to adding debug code to your application when you want more info about what's going wrong?

    - by Andrei
    When our application doesn't work the way we expect it to (e.g. it throws exceptions), I usually insert a lot of debug code at certain points in the application to get a better overview of what exactly is going on, what the values of certain objects are, and to trace where the error is triggered from. Then I send a new installer to the user(s) having the problem, and if the problem is triggered again I look at the logs and see what they say. But I don't want all this debug code to stay in the production code, since it would create some really big log files with information that is not always relevant. The other problem is that our code base changes, so the next time the same debug code might have to go into different parts of the application. Questions: Is there a way to merge this debug code into the production code only when needed, and have it appear at the correct points within the application? Can it be done with a version control system like git, so that all that would be needed is a git merge? P.S. The application I'm talking about is .NET, written in C#.
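
    One common way to keep diagnostic code in the source without paying for it in production is conditional compilation plus a configurable trace switch; below is a minimal sketch (the AppTrace helper name is made up for illustration), not the asker's actual setup:

        using System;
        using System.Diagnostics;

        static class AppTrace
        {
            // Verbosity is controlled by the switch (settable in code or app.config),
            // so the calls can stay in production code but emit nothing by default.
            private static readonly TraceSource Source =
                new TraceSource("MyApp", SourceLevels.Warning);

            public static void Verbose(string message)
            {
                Source.TraceEvent(TraceEventType.Verbose, 0, message);
            }

            // Calls to this method are removed entirely from builds that do not
            // define the DEBUG symbol.
            [Conditional("DEBUG")]
            public static void DebugOnly(string message)
            {
                Debug.WriteLine(message);
            }
        }

        class Program
        {
            static void Main()
            {
                AppTrace.DebugOnly("Present only in DEBUG builds.");
                AppTrace.Verbose("Emitted only when the switch is raised to Verbose.");
            }
        }

    With this approach the debug calls never need to be merged in and out; raising the switch level in configuration is enough to turn the extra logging on at a customer site.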

    Read the article

  • Embedded Linux development learning

    - by user1797375
    I come from a Windows background and I am proficient with the .NET platform. For work, I need to bring up a custom embedded system platform. We have bought the PandaBoard ES as the test platform. The application is to stream images over Wi-Fi. If you think about it, we are building something similar to a Netgear router - the only difference being that when you log into the device it serves images. Because my background is in Windows, I am not quite sure how to start off with embedded Linux development. In reading through various sites I have come to the conclusion that using Linux as the development host is the best option. Can someone point me in the right direction regarding the setup? I have a Windows machine that will be used for development purposes. I can either use VirtualBox or set up a partition for Linux. But the finer details are what's throwing me off. What I need to know is: 1) Once I install Linux, what other software do I need - Code::Blocks? 2) What about the toolchain? 3) How do I debug - through the serial port? 4) Is there a way to send the built image directly to the CF card? Thanks

    Read the article

  • Why is String Templating Better Than String Concatenation from an Engineering Perspective?

    - by stephen
    I once read (I think it was in "Programming Pearls") that one should use templates instead of building the string through the use of concatenation. For example, consider the template below (using the C# Razor library), stored in a properties file:

        Browser Capabilities
        Type = @Model.Type
        Name = @Model.Browser
        Version = @Model.Version
        Supports Frames = @Model.Frames
        Supports Tables = @Model.Tables
        Supports Cookies = @Model.Cookies
        Supports VBScript = @Model.VBScript
        Supports Java Applets = @Model.JavaApplets
        Supports ActiveX Controls = @Model.ActiveXControls

    and later, in a separate code file:

        private void Button1_Click(object sender, System.EventArgs e)
        {
            BrowserInfoTemplate = Properties.Resources.browserInfoTemplate; // see above
            string browserInfo = RazorEngine.Razor.Parse(BrowserInfoTemplate, browser);
            ...
        }

    From a software engineering perspective, how is this better than an equivalent string concatenation, like below:

        private void Button1_Click(object sender, System.EventArgs e)
        {
            System.Web.HttpBrowserCapabilities browser = Request.Browser;
            string s = "Browser Capabilities\n"
                + "Type = " + browser.Type + "\n"
                + "Name = " + browser.Browser + "\n"
                + "Version = " + browser.Version + "\n"
                + "Supports Frames = " + browser.Frames + "\n"
                + "Supports Tables = " + browser.Tables + "\n"
                + "Supports Cookies = " + browser.Cookies + "\n"
                + "Supports VBScript = " + browser.VBScript + "\n"
                + "Supports JavaScript = " + browser.EcmaScriptVersion.ToString() + "\n"
                + "Supports Java Applets = " + browser.JavaApplets + "\n"
                + "Supports ActiveX Controls = " + browser.ActiveXControls + "\n"
                ...
        }

    Read the article

  • Reflection: Is using reflection still "bad" or "slow"? What has changed with reflection since 2002?

    - by blesh
    I've noticed that when dealing with Expressions or Expression Trees I'm using reflection a lot to set and get values in properties and what have you. It has occurred to me that the use of reflection seems to be getting more and more common: things like DataAnnotations for validation, attribute-heavy ORMs, etc. have me wondering what has changed since the days, years and years ago, when I used to be told to avoid reflection if at all possible. So what, if anything, has changed? Is it just the speed of the machines? Have there been changes to the framework to speed up reflection? Or has nothing really changed? Is it still "bad" or "slow" to use reflection? EDIT: To clarify my question a little.
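
    One thing that has changed in practice is how easily the per-call cost can be amortized: you can reflect once and cache a compiled delegate, for example via expression trees. A minimal sketch (the Person type is made up for illustration):

        using System;
        using System.Linq.Expressions;
        using System.Reflection;

        class Person { public string Name { get; set; } }

        class Demo
        {
            static void Main()
            {
                var p = new Person { Name = "Ada" };
                PropertyInfo prop = typeof(Person).GetProperty("Name");

                // Plain reflection: the member is re-bound on every call.
                object slow = prop.GetValue(p, null);

                // Reflect once, compile a delegate, and reuse it; subsequent calls
                // are close to direct-call speed.
                ParameterExpression arg = Expression.Parameter(typeof(Person), "x");
                Func<Person, string> getter = Expression.Lambda<Func<Person, string>>(
                    Expression.Property(arg, prop), arg).Compile();

                Console.WriteLine(slow + " / " + getter(p));
            }
        }

    The reflection lookup itself is still relatively slow, but paying for it once per property rather than once per call is what makes attribute-driven frameworks practical.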

    Read the article

  • An XML file or Database?

    - by webnoob
    I am re-writing a section of my site and am trying to decide how much of a rewrite this will be. At the moment I have a web service feed that generates an XML file once per day. I then use this XML file on my website to generate the general structure. I am trying to decide if this information should be moved into the database or stay in the XML file. The file can range from 4 MB to 12 MB. The file's depth can go on and on, so I have to recurse to find the data I want. I use the .NET serializer classes and store the deserialized object in a global variable to avoid re-deserializing it each time the page is loaded. My reasons for thinking a database would be better are:
    - I would know exactly where I am in the file by using an internal ID, so I wouldn't have to recurse the file to get information.
    - I wouldn't have to load/deserialize the XML and could just use my already open database connections.
    - Searching for the data would be quicker(?), as I would just perform an SQL query rather than recursing the file.
    Has anyone got any ideas as to which is better, and which option uses more resources on the server or is quicker? EDIT: The file is read every time the web page is loaded (although only deserialized once). It isn't written to by standard users (only by an admin task that runs in the middle of the night). This is my initial investigation before mocking up.
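
    For what it's worth, a middle ground is to keep the XML but load it once into an XDocument and query it with LINQ to XML instead of recursing by hand. A minimal sketch, where the file name and the "item"/"id" names are placeholders rather than the asker's real schema:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        static class Feed
        {
            // Loaded once (e.g. at application start) and cached, so each page
            // request queries the in-memory tree instead of re-parsing the file.
            private static readonly XDocument Doc = XDocument.Load("feed.xml");

            // "item" and "id" are placeholder names; the real feed's element and
            // attribute names would go here.
            public static XElement FindById(string id)
            {
                return Doc.Descendants("item")
                          .FirstOrDefault(e => (string)e.Attribute("id") == id);
            }
        }

        class Demo
        {
            static void Main()
            {
                XElement match = Feed.FindById("42");
                Console.WriteLine(match != null ? match.ToString() : "not found");
            }
        }

    This keeps the single daily file but removes the hand-written recursion; whether it beats a database mostly comes down to memory use for the 4-12 MB tree versus the cost of maintaining an import job.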

    Read the article

  • Is there a good resource for learning Rails in depth? [closed]

    - by Kocheez
    I've been developing Rails applications for about 6 months now (I was originally a Java developer) and I'm getting familiar enough with building applications that I want to take my Rails knowledge to the next level. The majority of books and learning materials I've found deal mostly with "how to use Rails" rather than "how it works". I was wondering if there are any good resources for getting a really in-depth understanding of the framework, such as how modules and classes are loaded, the underlying architecture, how servers interact, etc. Any tips on learning more would be greatly appreciated.

    Read the article

  • Please help me, I need some solid career advice; I've put myself in a dumb situation

    - by Kevin
    Hi. First off, I just want to say thank you in advance for looking at my question; I would really value your input on this subject. My core question is: how do I proceed from the following predicament? I will be honest with you, I wasted my college experience. I slacked off and didn't take any of my comp sci classes that seriously; somehow I still got out with a 3.25 GPA. But truth be told, I learned nothing. I befriended most of my professors, who went pretty lenient on me in terms of grading. However, I basically came out of college knowing how to program a simple calculator in VB.NET. I was (to my great surprise) hired by a very large, respected company in Denver as a junior developer. Well, the long and the short of it is that I knew so little about programming that I quickly became the office pariah and was almost fired due to my incompetence. It has been 8 months now and I feel I have learned some basic things, and I am not as picked on as I used to be by the other developers. However, everyone hates me, and the first few months have given the other developers a horrible perception of me. I am no longer afraid of code or of learning, but I have put myself in the precarious position of being the scapegoat of our department. I hate going to work every day because no one there is my friend and pretty much everyone is hostile to me. What should I do? Any advice?

    Read the article

  • How do I manage the technical debate over WCF vs. Web API?

    - by Saeed Neamati
    I'm managing a team of about 15 developers now, and we are stuck at a point in choosing the technology, where the team is broken into two completely opposed camps debating the use of WCF vs. Web API. Team A, which supports the use of Web API, brings forward these reasons:
    - Web API is just the modern way of writing services (Wikipedia)
    - WCF is overhead for HTTP; it's a solution for TCP, named pipes, and other protocols
    - WCF models are not POCO, because of the [DataContract] and [DataMember] attributes
    - SOAP is not as readable and handy as JSON
    - SOAP is network overhead compared to JSON (transport over HTTP)
    - No method overloading
    Team B, which supports the use of WCF, says:
    - WCF supports multiple protocols (via configuration)
    - WCF supports distributed transactions
    - Many good examples and success stories exist for WCF (while Web API is still young)
    - Duplex is excellent for two-way communication
    This debate is continuing, and I don't know what to do now. Personally, I think that we should use a tool only in its right place. In other words, we'd better use Web API if we want to expose a service over HTTP, but use WCF when it comes to TCP and duplex. By searching the Internet, we can't get to a solid result. Many posts exist supporting WCF, but on the contrary we also find people complaining about it. I know that the nature of this question might sound arguable, but we need some good hints to decide. We're stuck at a point where choosing a technology by chance might make us regret it later. We want to choose with open eyes. Our usage would be mostly for the web, and we would expose our services over HTTP. In some cases (say 5 to 10 percent) we might need distributed transactions though. What should I do now? How do I manage this debate in a constructive way?
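
    For context, the two programming models also look quite different at the code level. Below is a minimal sketch contrasting them; the type names and route conventions are illustrative only, and hosting/configuration (routes, bindings, endpoints) is omitted:

        using System.ServiceModel;   // WCF contract attributes
        using System.Web.Http;       // ASP.NET Web API

        // Web API: a plain controller class; HTTP verbs map to method names and
        // JSON/XML content negotiation is handled by the framework.
        public class OrdersController : ApiController
        {
            public string Get(int id)
            {
                return "order " + id;
            }
        }

        // WCF: an attributed contract; the transport (HTTP, TCP, named pipes) and
        // encoding are chosen in configuration rather than in code.
        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            string GetOrder(int id);
        }

        public class OrderService : IOrderService
        {
            public string GetOrder(int id)
            {
                return "order " + id;
            }
        }

    The sketch mirrors the split in the debate: the Web API controller is tied to HTTP but stays close to POCO, while the WCF contract buys protocol flexibility at the cost of the attribute and configuration machinery.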

    Read the article

  • RDF and OWL: Have these delivered the promises of the Semantic Web?

    - by Dark Templar
    These days I've been learning a lot about how different scientific fields are trying to move their data over to the Semantic Web in order to "free up data from being stored in isolated silos". I read a lot about how these fields say their efforts are implementing the "visions" of the Semantic Web. As a learner (and purely from a learning perspective) I was curious to know why, if semantic technology is deemed so powerful, the efforts have been around for years yet I and a lot of people I know had never even heard of it until very recently. Also, I don't come across any scholarly articles saying "oh, our inferencing engine was able to make such and such discovery, which is helping us pave our way to solving...." etc. It seems that there are genuine efforts across different institutions, fields, and disciplines to shift all their data to a "semantic" format, but what happens after all that's been done? All the ontologies have been created/unified, and then what?

    Read the article

  • Security aspects of an ASP.NET application that can be pointed out to the client

    - by Maxim V. Pavlov
    I need to write several passages of text in an offer to the client about the security layer in an ASP.NET MVC web solution. I am aware of the security that comes along with MVC 3 and the improvements in MVC 4, but all of them are non-conceptual, except for AntiForgeryToken (AntiXSS) and the built-in SQL injection immunity (with a little encoding needed by hand). What would be the main points of ASP.NET security I can "show off" in an offer to the client?
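
    If concrete examples help the offer, a minimal sketch of two of those points (anti-forgery validation and explicit output encoding) might look like the following; the controller and action names are made up for illustration, and this only shows standard MVC pieces rather than a full security design:

        using System.Web;
        using System.Web.Mvc;

        public class AccountController : Controller
        {
            // The hidden token rendered by @Html.AntiForgeryToken() in the form is
            // validated here, which blocks cross-site request forgery (CSRF).
            [HttpPost]
            [ValidateAntiForgeryToken]
            public ActionResult UpdateProfile(string displayName)
            {
                // Razor's @ output is HTML-encoded by default; for strings emitted
                // by hand, the encoder can be called explicitly to prevent XSS.
                ViewBag.SafeName = HttpUtility.HtmlEncode(displayName);
                return View();
            }
        }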

    Read the article

  • Getting started as a programmer -- school or self-study?

    - by Cyberherbalist
    My son, who is married with two small children, has decided that he needs a change of career and is considering getting into programming. He would do well in the field, I am certain, but I am uncertain how to advise him with regard to a lengthy course of schooling versus just trying to learn "on the job", so to speak. I suspect that if he doesn't ultimately get at least an associate degree in programming (like his old man), his job possibilities are going to be very constrained. This isn't the Dot-Com Bubble, after all, when they'd hire you if you could spell c-o-m-p-u-t-e-r because they needed bodies and the ability to fog a mirror wasn't quite enough. Should he go for a full program at the university, a two-year program (he already has a 2-year degree in video production, so he's got the general ed requirements whipped), or does anyone think self-study alone might be enough? To get started, anyway. I started back in 1987 with COBOL and a 2-year degree, which seemed the minimum at the time, but perhaps things are different now?

    Read the article

  • What would a start-to-finish development procedure look like?

    - by Tom Busby
    I have a problem that my developer friends share. We recently left university and find ourselves either working for a firm which already has good procedures (TDD, automated testing, proper agile development, etc.) or working for a firm which doesn't. I want to learn some of these vital skills and get a grip on what a complete start-to-finish development procedure would look like. What differences would there be between a smaller project and a long-term project with many team members?

    Read the article

  • Shortcomings of using dynamic types in C#

    - by Karthik Sreenivasan
    I have recently been studying more about dynamic types in C#. From some examples I understood that once the code is compiled, it does not need to be recompiled again but can be executed directly. I feel the flexibility provided by the keyword, to actually be able to change the data type at will, is a great advantage. Question: are there any specific shortcomings, apart from wrong dynamic method calls throwing run-time exceptions, which developers must know about before starting the implementation?
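
    A minimal sketch of the main trade-off, assuming nothing beyond the standard runtime binder: the same member access compiles either way, but errors that a static type would catch surface only at run time (and tooling such as IntelliSense and refactoring has little to work with).

        using System;
        using Microsoft.CSharp.RuntimeBinder;

        class Demo
        {
            static void Main()
            {
                dynamic value = "hello";

                // Compiles and works while value happens to be a string...
                Console.WriteLine(value.Length);

                value = 42;
                try
                {
                    // ...but the same member access now fails at run time instead
                    // of at compile time: int has no Length property.
                    Console.WriteLine(value.Length);
                }
                catch (RuntimeBinderException ex)
                {
                    Console.WriteLine(ex.Message);
                }
            }
        }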

    Read the article

  • Dissection of Google indexing services

    - by Pankaj Upadhyay
    There are more than a few questions that hop into mind when someone thinks about Google's indexing services. Jeff Atwood wrote about them in The Elephant in the Room: Google Monoculture and Trouble In the House of Google. I have two questions: How does Google index dynamic websites? This site has dynamic pages: QUESTIONS, TAGS, USERS, BADGES, UNANSWERED, ASK QUESTION. The content of these pages is dynamically generated, so we access the dynamic content and not physical files on the server. But how does Google show every question of this site, or of other dynamic websites? What does Google index and keep on its servers? Does it copy the complete page onto its servers, or just the title, meta tags and body?

    Read the article

  • What are Web runtime environments and programming languages

    - by Bradly Spicer
    I've been looking into the details behind these two different categories: web runtime environments and web application programming languages. I believe I have the correct information and have phrased it correctly, but I am unsure. I have been searching for a while but only find snippets of information, or what I see as useless information (I could be wrong). Here are my descriptions so far:
    Web runtime environments - A run-time environment implements part of the core behaviour of any computer language and allows it to be modified via an API or embedded domain-specific language. A web runtime environment is similar, except it uses web-based languages such as JavaScript, which utilises the core behaviour of a computer language. Another example of a run-time environment for a web language is JsLibs, which is a standalone JavaScript development runtime environment for using JavaScript as a general all-round scripting language. JavaScript is often used to create responsive interfaces which improve the user experience and provide dynamic functionality without having to wait for the server to react and redirect to another page.
    Web application programming languages - A web application programming language is something that mimics a traditional desktop application within a web page. For example, using PHP you can create forms and tables which use a database, similar to Microsoft Excel. Some of the other languages for web application programming are: Ajax, Perl, Ruby.
    Here are some of the resources used: http://en.wikipedia.org/wiki/Web_application_development http://code.google.com/p/jslibs/
    I would like some confirmation that the descriptions I have created are correct, as I am still slightly unsure as to whether I have hit the nail on the head.

    Read the article

  • What is meant by "Repeat Business"?

    - by vinoth
    Repeat Business obviously happens because the company has a great product or a great service. In the software industry, do companies make the code base complex enough so that the maintenance comes back to them? I have heard of cases where companies say "ya this code base has minor errors, let's ship them anyway, and let the customers come back for another change request on these". Then they would sometimes charge the customer for that. This question is specific to the software services industry. Do these things happen in the real world? I am trying to understand the business process.

    Read the article

  • Why "object reference not set to an instance of an object" doesn't tell us which object?

    - by Saeed Neamati
    We're launching a system, and we sometimes get the famous NullReferenceException with the message "Object reference not set to an instance of an object". However, in a method where we have almost 20 objects, having a log which says an object is null is really of no use at all. It's like telling you, when you are the security agent at a seminar, that a man among 100 attendees is a terrorist. That's of no use to you at all; you need more information if you want to detect which man is the threatening one. Likewise, if we want to remove the bug, we need to know which object is null. Now, something has obsessed my mind for several months, and that is: why doesn't .NET give us the name, or at least the type, of the object reference which is null? Can't it determine the type through reflection or some other source? Also, what are the best practices for understanding which object is null? Should we always test the nullability of objects in these contexts manually and log the result? Is there a better way?
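
    The runtime only sees that a null reference was dereferenced at a particular instruction; it has no variable name attached to report. A common mitigation is to validate inputs explicitly so the failure names the offending argument instead. A minimal sketch, with types and members made up for illustration:

        using System;

        class Order { public decimal Total { get; set; } }
        class Customer { public string Name { get; set; } }

        class OrderProcessor
        {
            // Guard clauses turn an anonymous NullReferenceException somewhere deep
            // inside the method into an ArgumentNullException that names the
            // offending parameter at the point of entry.
            public void Process(Order order, Customer customer)
            {
                if (order == null) throw new ArgumentNullException("order");
                if (customer == null) throw new ArgumentNullException("customer");

                Console.WriteLine(customer.Name + ": " + order.Total);
            }
        }

    The stack trace then points at the guard and carries the parameter name, which is usually enough to tell which of the twenty objects was the culprit.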

    Read the article

  • What are the benefits/drawbacks of classifying defects during a peer code review?

    - by DXM
    About 3 months ago, our engineering group rolled out Review Board to be used for all peer code reviews. Today, I had a discussion with one of the people involved in that process and found out that we are already looking for a replacement (possibly something commercial) because of several missing features. One of the features apparently asked for by many people is the ability to classify/categorize each code review comment (i.e. is it a style issue, coding convention, resource leak, logic error, crash... whatever). For those teams that regularly practice code review, is this categorization a common practice? Do you do it? Have you done it in the past? Is it good/bad? On the one hand, it gives the team some more metrics and may indicate specific areas in which developers need to be trained (at least that seems to be the argument). Are there other benefits? On the other hand, and this is my concern, it will slow down the code review process that much more. As a team lead, I've done a fairly large share of reviews, and I've always liked the ability to highlight a chunk of code, hammer off a comment and move on as fast as possible. Although I haven't tried it personally, I have a feeling that expanding that combo box every time and scrolling/searching for the right category would feel like something tripping you up. Also, if we start keeping metrics on this stuff, my other concern is that valuable code review meeting time will be spent arguing whether something is a logic error or should be classified as a crash.

    Read the article

  • Dependency analysis from C# code through to database tables/columns

    - by fpdave
    I'm looking for a tool to do system-wide dependency analysis across C# code and SQL Server databases. It's looking like the only tool available that does this might be CAST (CAST Software), which is expensive and does lots more besides that I don't really need. C# code through to database column dependency would be hugely useful for many reasons, including:
    - determining the effects of database changes throughout the system
    - seeing hot spots in the database schema
    - finding dead stored procedures/tables/etc.
    - understanding the existing code base
    Does anyone know of any such tools?
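
    Absent a dedicated tool, a very rough first pass can be scripted: pull the table and procedure names from the database and search the C# source for them. The sketch below hardcodes the name list and path purely for illustration; a real pass would read the names from sys.tables / sys.procedures, and a plain text match will still miss SQL that is built dynamically or hidden behind an ORM.

        using System;
        using System.IO;
        using System.Linq;

        class DependencyScan
        {
            static void Main()
            {
                // Placeholder list; in practice these names would be read from
                // sys.tables / sys.procedures in the target database.
                string[] dbObjects = { "Customers", "Orders", "usp_GetInvoice" };

                var hits =
                    from file in Directory.EnumerateFiles(@"C:\src\MyApp", "*.cs",
                                                          SearchOption.AllDirectories)
                    from line in File.ReadLines(file)
                                     .Select((text, index) => new { text, index })
                    from name in dbObjects
                    where line.text.Contains(name)
                    select file + "(" + (line.index + 1) + "): " + name;

                foreach (string hit in hits)
                    Console.WriteLine(hit);
            }
        }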

    Read the article

  • How can I save my university's Computer Science & Engineering department? [closed]

    - by Blake
    I'm currently pursuing a B.S. in Computer Engineering at the University of Florida, and we're having a bit of a problem right now... The state recently passed a budget plan that cuts funding for higher education in Florida. The dean of UF's College of Engineering decided that the best way for us to absorb the blow is by executing the following plan:
    - All of the Computer Engineering degree programs, BS, MS and PhD, would be moved from the Computer & Information Science and Engineering Dept. to the Electrical and Computer Engineering Dept., along with most of the advising staff.
    - Roughly half of the faculty would be offered the opportunity to move to Electrical/Computer Eng., Biomedical Eng., or Industrial/Systems Eng.
    - Staff positions in CISE which are currently supporting research and graduate programs would be eliminated.
    - The activities currently covered by TAs would be reassigned to faculty, and the TA budget for CISE would be eliminated.
    - Any faculty member who wishes to stay in CISE may do so, but with a revised assignment focused on teaching and advising.
    In short: our department (at least as we know it) is being decimated. Computer & Information Sciences & Engineering (one of 9 departments in the College of Engineering) is taking more than 50% of the cuts. If you're interested in reading the full proposal, you can access it here. A vast, VAST majority of the students and faculty in the department are vehemently opposed to this plan, however the dean is already taking measures to implement it. This is the only proposal on the table right now, and she has not entertained our requests for alternatives. She sees it as an obvious (albeit drastic) solution to our budget problem, citing that many other universities have combined Computer and Electrical Engineering departments. I'll bet those universities didn't have to eliminate an established department to get there, though. The budget goes into effect July 1, 2012 (this is non-negotiable), and the dean's proposal is currently set to be finalized some time next week. We don't have much time! My question to everyone here is this: Are we overreacting to this plan, or are we justified? And could you explain why or why not? It's obvious that CISE students will resist any cuts to our department, but I'm curious to see what other people in the field have to say. Any feedback is greatly appreciated. I will select the answer that saves our department. Just kidding, I'll pick the one that best explains why this is a good or bad decision for the dean to make. Please note that anything you say can and will be used to further our cause (and we might track you down if you provide a compelling argument against us).

    Read the article

  • Can I configure a visual difference view with the notifications provided by TFS?

    - by John Kaster
    I have TFS sending me alerts whenever someone on my team checks in code. (I had to create notification rules for every project, but that's just a sidebar complaint in this question.) These alerts provided some information on who checked in the files when, and what files have changed, with urls to view details in a browser. The thing that baffles me is that I can't just click on the source file and see a visual diff of the changes. There's no link that will auto-launch a diff in Visual Studio (using a custom protocol) from there either. Is there a way to configure TFS to provide a visual diff of the changes to the file that was checked in via this notification UI?

    Read the article

  • Difference between Windows and Linux development environments?

    - by Ryan
    I have an interview coming up soon for a Business Analyst position and the recruiter mentioned some feedback from a prior candidate that was interviewed who said the interviewers asked him what the difference between a Windows and Linux development environment was. Are there some high level things I need to be aware of from a business point of view when working with a development team or designing an application on Windows vs Linux?

    Read the article

  • Are technical books easy to read on the Kindle (or other small screens) [closed]

    - by Peter Recore
    Possible Duplicate: eBook editions of programming books. I am considering getting a Kindle or other e-reader (the Kindle is my top choice because of the e-ink vs. LCD factor). I have been able to try reading fiction on a Kindle, and it seemed pretty nice, even with the small screen. However, most books I buy are actually technical books, which tend to have figures, code samples, and other odd things. How well do the various e-readers handle books like this? In particular, does the Kindle render code samples or figures in an easy-to-read way?

    Read the article

  • Does using GCC specific builtins qualify as incorporation within a project?

    - by DavidJFelix
    I understand that linking to a program licensed under the GPL requires that you release the source of your program under the GPL as well, while the LGPL does not require this. The terminology of the (L)GPL is very clear about this: #include "gpl_program.h" means you'd have to license under the GPL, because you're linking to GPL-licensed code, while #include "lgpl_program.h" means you're free to license however you want, since the LGPL doesn't prohibit linking to LGPL source.
    Now, my question about what isn't clear is: [begin question] GCC is GPL-licensed, and compiling with GCC does not constitute "integration" into your program, as the GPL puts it; does using builtin functions (which are specific to GCC) constitute "incorporation" even though you haven't explicitly linked to this GPL-licensed code? My intuition tells me that this isn't the intention, but legality isn't always intuitive. I'm not actually worried, but I'm curious whether this could be considered the case. [end question]
    [begin aside] The reason for my equivocation is that GCC builtins like __builtin_clzl() or __builtin_expect() are technically an API and could be implemented in another way. For example, many builtins were replicated by LLVM, and the argument could be made that they're not implementation-specific to GCC. However, many builtins have no parallel, and when compiled will link GPL-licensed code in GCC and will not compile on other compilers. If you make the argument here that the API could be replicated by another compiler, couldn't you make that identical claim about any program you link to, so long as you don't distribute that source? I understand that I'm being a legal snake about this, but it strikes me as odd that the GPL isn't more specific. I don't see this as a reasonable ploy for proprietary software creators to bypass the GPL, as they'd have to bundle the GPL software to make it work, removing their plausible deniability. However, isn't it possible that, if builtins don't constitute linking, open source proponents who oppose the GPL could simply write a BSD/MIT/Apache/Apple-licensed product that links to a GPL'd program and claim that they intend to write a non-GPL interface identical to the GPL one, preserving their BSD license until it's actually compiled? [end aside]
    Sorry for the aside; I didn't think many people would follow why I care about this if I'm not facing any legal trouble or implications. Don't worry too much about the hypotheticals there, I'm just extrapolating what either answer to my actual question could imply.

    Read the article

  • How do you pronounce the '...' operator

    - by Uri
    Now, in C++ '...' has become a first-class operator. In speech, how do you pronounce it? So far I've heard: "dot dot dot", "triple dot", "ellipsis". Related: is it OK to replace '...' with "ellipsis" in writing? e.g. "The ellipsis operator expands the pack". EDIT (clarification): We are all aware that '...' as a punctuation mark is indeed called an ellipsis. But in the context of C++ we don't pronounce the names of punctuation marks. For example, the '&' operator, depending on the context, is pronounced as 'and', 'bitwise and', 'address of', 'logical and' (when && is used), or 'reference'. It is rarely pronounced as 'ampersand'. In talks, I have a feeling that 'dot dot dot' is used more often. For example: http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Variadic-Templates-are-Funadic (an excellent presentation about variadic templates). On the other hand, 'dot dot dot' is awkward to pronounce ('d' and 't' are both pronounced with the tongue). Can we pronounce it 'unpack'?

    Read the article
