Search Results

Search found 16554 results on 663 pages for 'programmers identity'.

  • How can a C/C++ programmer become a professional web developer?

    - by user1050165
    I am new here. This is my first post on Stack Overflow. I am currently a high school student and know how to use Pascal and C/C++ to take part in competitions such as the Olympiad in Informatics. I have learnt data structures and many algorithms to solve various kinds of problems. Now I want to move on to become a web developer. However, I know web development is quite different from competitive programming. To make a web application, I have to master HTML, databases, backend programming, etc., but these all look like separate pieces of information. I don't know where to start or what order I should follow. Can anybody give a comprehensive list of learning points? I know there are HTML, Ruby on Rails, CSS and JavaScript. What else? More importantly, can someone give a brief outline of how they relate to each other? I hope I can get help from you asap. Thanks!

  • Code generation and IDEs vs. writing code by hand

    - by sytycs
    I have been programming for about a year now. Pretty soon I realized that I needed a great tool for writing code, so I learned Vim. I was happy with C and Ruby and never liked the idea of an IDE, an attitude encouraged by a lot of reading about programming.[1] However, I then started my first Java project. In a CS course we were using Visual Paradigm and were encouraged to let the program generate our code from a class diagram. I did not like that idea because:
    - Our class diagram was buggy.
    - Students more experienced in Java said they would write the code by hand.
    - I had never written any Java before and would not have understood a lot of the generated code.
    So I took a different approach and wrote all methods by hand (getters and setters included). My teammates wrote their parts (partly generated by VP) in an IDE, and I was "forced" to use it too. I realized they had produced equal amounts of code in a shorter amount of time and did not spend a lot of time setting their CLASSPATH and writing scripts for compiling that son of a b***. Additionally, we had to implement a GUI, and I don't see how we could have done that in a sane manner in Vim. So here is my problem: I fell in love with Vim and the Unix way, but it looks like for getting this job done (on time) the IDE/code-generation approach is superior. Have you had similar experiences? Is Java, by the nature of the language, just more suitable for an IDE/code-generation approach? Or am I lacking the knowledge to produce equal amounts of code by hand? [1] http://heather.cs.ucdavis.edu/~matloff/eclipse.html

  • Caching a large number of objects returned via Ajax

    - by ofcapl
    I'm building an application which fetches a large number of items with Ajax requests via another application's API. It returns 6k-30k JS objects which are used multiple times across various application views (sorting, filtering, etc.). I would like to avoid querying the API every time for such a big list, so I decided to cache this data somehow. I was thinking about various solutions: saving it to localStorage, using some caching library (e.g. locachejs), or storing it in a JS variable. I'm not an expert, so I would like to hear your suggestions about each (or any one) of these solutions, with their pros and cons. Any help will be much appreciated.

  • What do you think a good performance engineer should have?

    - by Vance
    I believe performance tuning (or even testing) is one of the most challenging tasks for an engineer, yet in lots of companies it gets lower priority than other "important" things. My purpose in opening this post is to find out what you think a *good* performance engineer should have. I can list some things:
    - Solid database and programming knowledge.
    - The ability to do single-threaded performance testing.
    - Good knowledge of load-generator tools to simulate concurrent loads.
    - The use of different tools to monitor/measure the app/DB server performance status.
    - The ability to understand and debug the code, and even tune it.
    Any more ideas are always appreciated!

  • Where to draw the line between development-led security and administration-led security?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections on a piece of software, even though they could very well be managed at an environmental level (i.e., the operating system would take care of them). Where would you say you draw the line, and what elements do you factor into your decision?

    Concrete Examples

    User management is the OS's responsibility: Not exactly meant as a security feature, but a similar case: Google Chrome used to not allow separate profiles. The invoked reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.

    Disabling web-form fields: A recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers, and was a welcome feature at the time it was introduced for people who needed to fill in forms often. But it also brought in some security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preference/customization panel, an autocomplete attribute for form or input fields. This has now been introduced into the upcoming HTML5 standard. For browsers that do not honor this attribute, strange hacks* are offered, like generating unique IDs and names for fields to prevent them from being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache and not preventing a password from being stored in it, but instead probably duplicating its occurrences). In this particular case, and others, I'd argue that this is a user setting and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with it disabled. But then I'd think that's what security zones are for (in some browsers), or a sign that you need a more secure (and dedicated) environment/account to use these applications.

    * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.

    Questions

    That was a tad long-winded, so I guess my questions are: Would you in general consider it to be the application's (hence, the developer's) responsibility? Where do you draw the line, if not in the "general" case?

  • Why don’t UI frameworks use generics?

    - by romkyns
    One way of looking at type safety is that it adds automatic tests all over your code that stop some things from breaking in some ways. One of the tools that helps with this in .NET is generics. However, both WinForms and WPF are generics-free. There is no ListBox<T> control, for example, which could only show items of the specified type. Such controls invariably operate on object instead. Why are generics not popular with UI framework developers?

  • I'd like to learn how to do mobile development - which competitions can I join to help me learn?

    - by Oscar
    I want to start learning mobile device development, but I am someone who gets much more motivated when there is some goal to reach. Because of that, I would like to join a competition. I know about the Microsoft Imagine Cup, which is a very nice competition. Are there any other mobile development competitions with a deadline in the next 6-8 months? I have been googling for them, but I could not find any; maybe someone knows about something that I couldn't find.

  • Should I use C style in C++?

    - by c.hughes
    As I've been developing my position on how software should be developed at the company I work for, I've come to a certain conclusion that I'm not entirely sure of. It seems to me that if you are programming in C++, you should not use C-style anything if it can be helped and you don't absolutely need the performance improvement. This way people are kept from doing things like pointer arithmetic or creating resources with new without any RAII, etc. If this idea were enforced, seeing a char* would possibly be a thing of the past. I'm wondering whether this is a conclusion others have made, or whether I am being too puritanical about this.

  • Visual Studio 2013 - Express for Web vs Professional [duplicate]

    - by TimS
    This question already has an answer here: Visual Studio 2012 - Express vs Professional. What are the main differences and limitations between Visual Studio 2013 Express and Visual Studio 2013 Professional? I'm specifically interested in information related to the Web edition. I need to be able to develop ASP.NET applications, Windows services and console applications - not desktop or phone apps. Microsoft seems to hide this information well, and I can only seem to find information relating to 2012 products and earlier.

  • What about introducing programming with C# via LINQPad?

    - by Gulshan
    From different questions/answers/articles on this and some other sites, I got the idea that an introductory programming language should be high-level and not very verbose. C# is one of the most heavily used high-level languages these days. It's also multi-paradigm and a descendant of C, the lingua franca of programming languages. So I think it has the potential to be an introductory programming language, but I felt it's a bit verbose for novice learners. Then LINQPad came to mind. With LINQPad, someone can start with C# without its verbosity, because you can run just one statement, a few statements, or a standalone function - and you can run a full source file as well. Another thing it provides is SQL, so it can be used for learning SQL too. And not to mention, it's free. So, what do you think about the idea of introducing programming with C# via LINQPad? Anything to watch out for? Any suggestions?

  • What's Microsoft's strategy on Windows CE development?

    - by Heinzi
    Lots of specialized mobile devices use Windows CE or Windows Mobile. I'm not talking about smartphones here - I know that Windows Phone 7 is Microsoft's current technology of choice there. I'm talking about barcode readers, embedded devices, industry PDAs with specialized hardware, etc. - the kind of devices (Example 1, Example 2) where Windows Phone Silverlight development is not an option (no P/Invoke to access the hardware, etc.). Since direct Compact Framework support has been dropped in Visual Studio 2010, the only option for developing for these devices currently is to use outdated development tools (VS 2008), which already start to cause trouble on modern machines (e.g. there's no supported way to make the Windows Mobile Device Emulator's network stack work on Windows 7). Thus, my question is: what are Microsoft's plans regarding these mobile devices? Will they allow native applications on Windows Phone, such that, for example, barcode reader drivers can be developed and then accessed from Silverlight applications? Will they re-add "native" Compact Framework support to Visual Studio and just haven't found the time yet? Or will they abandon this niche market?

  • Battling Emacs Pinky?

    - by haziz
    My problem is not so much Emacs pinky as having to work with multiple machines, across 3 operating systems, both desktop and laptop, with differing keyboard layouts and different locations for the Ctrl and Alt/Meta keys, so I often have to pause and think about where the Ctrl key is on this machine. How do you deal with varying keyboard layouts, between Mac keyboards (mostly the laptops) and PC keyboards (mostly 101-key in my case - yes, the original PC keyboard)? I have turned the Caps Lock key into a Ctrl key (losing the Caps Lock function completely rather than swapping it with Ctrl) on most of them, but I still find myself hunting for the original Ctrl-labeled key most of the time. How do you deal with this keyboard confusion? Suggestions, ideas and feedback welcome.

  • Questioning pythonic type checking

    - by Pace
    I've seen countless times the following approach suggested for "taking in a collection of objects and doing X if X is a Y and ignoring the object otherwise":

        def quackAllDucks(ducks):
            for duck in ducks:
                try:
                    duck.quack("QUACK")
                except AttributeError:
                    # Not a duck, can't quack, don't worry about it
                    pass

    The alternative implementation below always gets flak for the performance hit caused by type checking:

        def quackAllDucks(ducks):
            for duck in ducks:
                if hasattr(duck, "quack"):
                    duck.quack("QUACK")

    However, it seems to me that in 99% of scenarios you would want to use the second solution, because of the following:
    - If the user gets the parameters wrong, then they will not be treated like a duck and there will be no indication. A lot of time will be wasted debugging why there is no quacking going on until the user finally realizes his silly mistake. The second solution would throw a stack trace as soon as the user tried to quack.
    - If the user has any bugs in their quack() method which cause an AttributeError, then those bugs will be silently swallowed. Once again, time will be wasted digging for the bug when the second solution would simply give a stack trace showing the immediate issue.
    In fact, it seems to me that the only time you would ever want to use the first method is when:
    - The block of code in question is in an extremely performance-critical section of your application. Following the principle of "avoid premature optimization", you would only realize this, of course, after you had implemented the safer approach and found it to be a bottleneck.
    - There are many types of quacking objects out there and you are only interested in quacking objects that quack with a very specific set of arguments (this seems to be a very rare case to me).
    Given this, why is it that so many people prefer the first approach over the second? What is it that I am missing? Also, I realize there are other solutions (such as using ABCs - a sketch is included below for reference), but these are the two solutions I seem to see most often for the basic case.
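
    For reference, a minimal sketch of the ABC-based alternative mentioned above (the Quacker name and the __subclasshook__ check are illustrative, not part of the original question):

        from abc import ABC, abstractmethod

        class Quacker(ABC):
            @abstractmethod
            def quack(self, sound):
                ...

            @classmethod
            def __subclasshook__(cls, C):
                # Treat any class with a callable quack() as a Quacker, so
                # isinstance() becomes a structural (duck-typed) check.
                return callable(getattr(C, "quack", None)) or NotImplemented

        def quackAllDucks(ducks):
            for duck in ducks:
                if isinstance(duck, Quacker):
                    duck.quack("QUACK")

    Like the hasattr() version, this skips non-ducks without swallowing AttributeErrors raised inside a buggy quack(); the ABC just gives the check a name you can reuse elsewhere.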

  • Addressing a variable in VB

    - by Jeff
    Why doesn't Visual Basic .NET have an address-of operator like C#'s? In C#, one can write:

        int i = 123;
        int* addr = &i;

    But VB has no equivalent counterpart. It seems like it should be important. UPDATE: Since there's some interest, I'm copying my response to Strilanc below. The case I ran into didn't necessitate pointers by any means, but I was trying to troubleshoot a unit test that was failing, and there was some confusion over whether or not an object being used at one point in the stack was the same object as an object several methods away.

  • WCF and Service Registry

    - by TK Lee
    I am about to build some WCF services. Those services need to communicate with each other too, in some scenarios. I've done some googling about service registries but can't figure out how to implement a service registry with WCF; is there any alternative? Is there any Microsoft technology available for service registries? I'm new to SOA, and I will really appreciate any help or guidance (what exactly I should look for, and where, regarding registry services).

  • How to follow Python polymorphism conventions with math functions

    - by krishnab
    So I am reading up on Python in Mark Lutz's wonderful Learning Python book. Mark makes a big deal about how part of the Python development philosophy is polymorphism, and that functions and code should rely on polymorphism and not do much type checking. However, I do a lot of math-oriented programming, and so the idea of polymorphism does not really seem to apply - I don't want to try to run a regression on a string or something. So I was wondering if there is something I am missing here. What are the applications of polymorphism when I am writing functions for math - or is type checking philosophically okay in this case?
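
    One way to see where polymorphism pays off even in math code: a function written against the arithmetic protocol (+ and *) rather than a concrete type works unchanged for plain floats, exact Fractions, Decimals, or NumPy arrays. A small illustrative sketch (the names are made up for the example):

        from fractions import Fraction

        def weighted_sum(values, weights):
            # Only assumes the elements support * and +; no type checks needed.
            total = None
            for v, w in zip(values, weights):
                term = v * w
                total = term if total is None else total + term
            return total

        print(weighted_sum([1.0, 2.0, 3.0], [0.2, 0.3, 0.5]))          # plain floats -> 2.3
        print(weighted_sum([Fraction(1, 2), Fraction(1, 3)],
                           [Fraction(2, 3), Fraction(3, 4)]))          # exact rationals -> 7/12
        # The same function also accepts NumPy arrays as elements, giving
        # element-wise weighted sums with no change to the code.

    Where the types genuinely make no sense (say, a string times a float), the operation itself raises a TypeError at the offending line, which is often a good enough error report; explicit checks are worth adding mainly where a wrong type would silently "work".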

  • Browser-based GUI for a Python application

    - by ack__
    I want to create a web/browser-based GUI for a command-line Python application. The goal is to make use of HTML/JS technologies to create this GUI. Like the application itself, it needs to run on Linux and Windows, and the interface will be accessible only from localhost (not exposed to the internet). The GUI will contain 5 to 10 pages. I don't want a traditional desktop GUI that embeds HTML/JS, but just a bunch of HTML files and some kind of controller between those and the application. I also want to make use of asynchronous programming (Ajax-like) so I can load and display data in the GUI without refreshing the whole page. I'd probably use jQuery for that and a couple of other things. How would you recommend designing this? Performance is not the key here; I'm rather looking for reliability, portability and simplicity. I'm thinking of using a lightweight Python HTTP server/framework (like CherryPy) and maybe later a Python templating system (at the beginning it will just be a couple of pages). EDIT: I'm looking for ideas/recommendations on how to build this, not for alternatives to a browser/web-based GUI.
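
    As a rough illustration of the CherryPy-plus-Ajax shape described above, a minimal controller might look like the sketch below (the file name, port and endpoint names are invented for the example):

        import cherrypy

        class Gui(object):
            @cherrypy.expose
            def index(self):
                # Serve the main page; in practice this would come from a static HTML file.
                return open("index.html", encoding="utf-8").read()

            @cherrypy.expose
            @cherrypy.tools.json_out()
            def status(self):
                # JSON endpoint the page can poll with jQuery, e.g. $.getJSON("/status", ...).
                return {"running": True, "items_processed": 42}

        if __name__ == "__main__":
            cherrypy.config.update({
                "server.socket_host": "127.0.0.1",  # bind to localhost only, as required
                "server.socket_port": 8080,
            })
            cherrypy.quickstart(Gui())

    The command-line application's logic would be imported and called from handlers like status(), keeping the HTML/JS front end completely separate from the Python back end.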

  • Studies of Pair Programming on Translation Projects

    - by gmletzkojr
    I am looking for information (i.e., studies, metrics, etc.) on pair programming when translating a project from an "older" language to a "newer" language. In this particular case, translating means line-for-line translation wherever possible, modifying the design only when absolutely necessary - not when the modification would merely provide improved performance. I have done pair programming on new development, and I am well aware of the pros and cons of pairing in that environment. However, I haven't been able to find any information on this particular case. Any help is appreciated.

  • Test iPhone app on iPad mini?

    - by Devfly
    I have developed an iPhone app, and right now I only need a device for testing. I have $300 and two choices: a second-hand iPhone 4, or a brand-new iPad mini. The better choice obviously is the iPad, but is it sufficient for testing iPhone apps on? On the iPad, iPhone apps can run just fine in 2X mode, but are there any differences in app performance between iPhone and iPad (apart from the chipset)? Should I test my app on an actual iPhone, or will the iPad suffice? My app is an RSS reader, not a game, so I think everything will be fine with testing on the iPad mini. If I buy the iPad, I will find a friend's iPhone 4/3GS running iOS 5.1 (because my app's deployment target is 5.1, and the iPad comes with 6.0), but of course I can't test extensively on that iPhone. Thank you!

  • Messaging technologies between applications?

    - by Samuel
    Recently, I had to create a program to send messages between two WinForms executables. I used a tool with simple built-in functionality to avoid having to figure out all the ins and outs of the vast number of protocols that exist. But now I'm ready to learn more about the internal differences between these protocols. I googled a couple of them, but it would be greatly appreciated to have a good reference book that gives me a clear idea of how each protocol works and what the pros and cons are in various contexts. Here is a list of relevant technologies that I found:
    - Shared memory
    - TCP
    - Named pipes
    - File mapping
    - Mailslots
    - MSMQ (Microsoft Message Queuing)
    - WCF
    I know that none of these protocols are specific to a language; it would be nice if examples could be in .NET. Thank you very much.

  • Strategy to find bottleneck in a network

    - by Simone
    Our enterprise is having some problems when the number of incoming requests goes beyond a certain amount. To make things simpler: we have N websites that use, among other things, a local web service. This service is hosted by IIS and is a .NET 4.0 (C#) application executed in a farm. It's REST-oriented, built around OpenRasta. As already mentioned, by stress testing it with JMeter we've found that beyond a certain number of requests the service's performance drops. However, this service is itself a client of 3 other distinct web services and also a client of a DB server, so it's not very clear what the real culprit of this abrupt decay is. In turn, those 3 other web services are installed in our farm too, and are clients of other DB servers (and, possibly, of services that are outside my team's control). What strategy do you suggest for trying to locate where the bottleneck(s) are? Do you have any high-level suggestions?

  • Collaboration using GitHub and testing the code

    - by wyred
    The procedure in my team is that we all commit our code to the same development branch. We have a test server that runs updated code from this branch so that we can test our code on the servers. The problem is that if we want to merge the development branch into the master branch in order to publish new features to our production servers, some features that may not have been ready will be applied to the production servers too. So we're considering having each developer work on a feature/topic branch, where each of them works on their own feature and, when it's ready, merges it into the development branch for testing, and then into the master branch. However, because our test server only pulls changes from the development branch, the developers are unable to test their features on it. While this is not a huge issue, as they can test on their local machines, the only problem I foresee is if we want to test callbacks from third-party services like SendGrid (where you specify a URL for SendGrid to update you on the status of emails sent out). How should we handle this problem? Note: we're not advanced Git users. We use the GitHub app for Mac OS X and Windows to commit our work.

  • MIT vs. BSD vs. Dual License

    - by ryanve
    My understanding is that:
    - MIT-licensed projects can be used/redistributed in BSD-licensed projects.
    - BSD-licensed projects can be used/redistributed in MIT-licensed projects.
    - The MIT and the BSD 2-clause licenses are essentially identical.
    - BSD 3-clause = BSD 2-clause + the "no endorsement" clause.
    - Issuing a dual license allows users to choose from those licenses, not be bound to both.
    If all of the above is correct, then what is the point of using a dual MIT/BSD license? Even if the BSD refers to the 3-clause version, can't a user legally choose to abide only by the MIT license? It seems that if you really want the "no endorsement" clause to apply, then you have to license it as just BSD (not dual). If you don't care about the "no endorsement" clause, then MIT alone is sufficient and MIT/BSD is redundant. Similarly, since the MIT and BSD licenses are both "GPL-compatible" and can be redistributed in GPL-licensed projects, dual licensing MIT/GPL also seems redundant.

  • Is the HL7 membership model normal?

    - by Peter Turner
    To me, it's a little odd that HL7 requires you to be a member in order to distribute the standard within your organization - and, in that sense, to implement the standard and tell others who have implemented it which parts you'll be implementing - especially when it's nothing classier than a few pipes and carets for 2.x and some sort of XML for 3.0. I can understand paying money to use a library that utilizes HL7, or even for the source code to build such a library. But what's the point of requiring membership just to see the spec, to write the source code, to build the library, to utilize HL7?

  • What should I quote for a project I hope to get a job at the end of?

    - by thesunneversets
    Long story short: I applied for a (CakePHP, MySQL, etc) development job in London, UK. I grew up in Britain but am currently based quite a few thousand miles away in Canada, so I wasn't really expecting success. But quite a few emails and phone interviews later it seems that they really like me. At least to a point. Because such a major relocation would be a horrible thing to go wrong, they've sensibly suggested a trial run of getting me to build a website at a distance. I have the spec for this and it's quite a substantial amount of work. My problem is that I now need to suggest both a fee and a timescale for the job, and I haven't got any significant experience of working as a contractor. Looking at the spec, which is 1500 words of many concisely stated features, some fairly trivial and some moderately involved, I can easily imagine there being 2 weeks of intensive work there. (If everything went really well it might be closer to one week, but even though I want to impress, I definitely don't want to fall into the inexperienced-contractor trap of massively underestimating the amount of time a project will run to.) As an extra complication, there is no expectation that I should give up my day job to get this trial project done, so the hours will have to be clawed from evenings and weekends. I don't want to overcommit to a quick delivery date, only to find myself swiftly burning out due to an unrealistic workload. So, any advice for me? My main question is, what is a realistic hourly figure to demand of a stable but not excessively wealthy London-based company in the current market, bearing in mind that I'd like them to hire me afterwards? But any more general recommendations based on my circumstances above would be much appreciated too. Many thanks!
