Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • Do you use your personal laptop for work?

    - by davekaro
    We're trying to get our company to let us use our own personal laptops for client work. We've agreed that any code/data will be encrypted using something like TrueCrypt, in case a laptop is stolen or lost. However, the company is still skeptical and not sure they want to allow us to use our personal machines for development. They would rather buy us laptops... but we want to use MacBook Pros and they don't want to pay for them. Even if they did buy us laptops, we would still have the issue of needing to encrypt the code/data in case of theft/loss. Do you use your own laptop for work? What are the arguments for/against this?

    Read the article

  • How much effort should you put into a junior developer?

    - by Crazy Eddie
    At what point should one give up? I've tried helping them out by having them shadow me. We agree to take a short break, and then they go missing in action for a while... then just go back to their desk. Even when I know they've done this, part of me feels like I shouldn't have to go get them, and that they should be showing interest in learning. Frankly, it's a bunch of time I don't have, explaining things as I go when I could just do it. Am I expecting too much to expect that if they want to learn they'll make sure I know they're ready and willing? They go to meetings that they were not told they had to attend, good, but then sit in the corner and sleep... bad. I don't even know what to do with that. Sometimes I give them something small to do and they do it great, so I give them something just a touch harder and they totally fail, hard. They check in things without testing them. Part of me thinks that maybe I should be spending more time with them, but at the same time I don't see a lot of interest, and I really, honestly don't have time to teach the same things over and over. Sometimes I get asked questions that are really, really easy to answer if you just do a little bit of your own work trying to find out. Other times I'm not asked anything. I'm sure I could be doing better, but honestly... I don't really want to anymore.

    Read the article

  • C# inherit from a class in a different DLL

    - by Onno
    I need to make an application that needs to be highly modular and that can easily be expanded with new functionality. I've thought up a design where I have a main window and a list of actions that are implemented using a strategy pattern. I'd like to implement the base classes/interfaces in a DLL and have the option of loading actions from DLLs which are loaded dynamically when the application starts. This way the main window can initiate actions without having to be recompiled or redistributed in a new version. I just have to (re)distribute new DLLs, which I can update dynamically at runtime. This should enable very easy modular updating from a central online source. The 'action' DLLs all inherit their structure from the code defined in the DLL which defines the main strategy pattern structure and its abstract factory. I'd like to know if C#/.NET will allow such a construction. I'd also like to know whether this construction has any major problems in terms of design.
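
    A minimal sketch of that load-at-startup plugin pattern, written in Python for brevity (in .NET the analogous pieces would be Assembly.LoadFrom plus reflection over the loaded assembly's types). The Action base class, the directory layout, and all names below are made up for illustration:

        import importlib.util
        import pathlib

        class Action:
            # Base "strategy" interface that plugin modules subclass (made-up name).
            def run(self):
                raise NotImplementedError

        def load_actions(plugin_dir):
            # Import every module found in plugin_dir and instantiate its Action
            # subclasses -- the moral equivalent of scanning a folder of DLLs.
            # (Assumes plugin modules share this Action class, e.g. via a common import.)
            actions = []
            for path in sorted(pathlib.Path(plugin_dir).glob("*.py")):
                spec = importlib.util.spec_from_file_location(path.stem, path)
                module = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(module)
                for obj in vars(module).values():
                    if isinstance(obj, type) and issubclass(obj, Action) and obj is not Action:
                        actions.append(obj())
            return actions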

    Read the article

  • How does the "Fourth Dimension" work with arrays?

    - by Questionmark
    Abstract: So, as I understand it (although I have a very limited understanding), there are three dimensions that we (usually) work with physically:

      - The 1st would be represented by a line.
      - The 2nd would be represented by a square.
      - The 3rd would be represented by a cube.

    Simple enough until we get to the 4th -- it is kinda hard to draw in a 3D space, if you know what I mean... Some people say that it has something to do with time.

    The Question: Now, that is all great with me. My question isn't about this, or I'd be asking it on MathSO or PhysicsSO. My question is: how does the computer handle this with arrays? I know that you can create 4D, 5D, 6D, etc. arrays in many different programming languages, but I want to know how that works.
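
    As a concrete illustration (my own sketch, not from the question): a rectangular N-dimensional array is ultimately one flat block of memory plus index arithmetic, so nothing special happens after three dimensions. A row-major 4D example in Python:

        # For dimensions (D1, D2, D3, D4), element (i, j, k, l) lives at
        # offset ((i * D2 + j) * D3 + k) * D4 + l in row-major order.
        D1, D2, D3, D4 = 2, 3, 4, 5
        flat = [0] * (D1 * D2 * D3 * D4)

        def index4(i, j, k, l):
            return ((i * D2 + j) * D3 + k) * D4 + l

        flat[index4(1, 2, 3, 4)] = 42   # the same cell a[1][2][3][4] would name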

    Read the article

  • Is Haskell worth learning?

    - by Jason K
    I am looking at this question primarily from a career point of view, so I hope you answer it accordingly. I am fairly proficient with Python, can write C++, and am a final-year computer science engineering student. I am looking to learn Haskell because I have heard a lot about it. My question is: apart from learning it because of all the good I have heard about it, is it any good for my career? Is it used in the industry? I am curious to learn it, but unless it helps me somehow in my career, I am not willing to make that change at this stage. Looking for some personal experiences here. Thanks!

    Read the article

  • What are the advantages of programming under an OS as opposed to a bare metal executive?

    - by gby
    Assume you are presented with an embedded system application to program, in C, on a multi-core environment (think a Cavium or Tilera) and need to choose between two environments: code the application under Linux in SMP mode, or code the application under a thin bare metal executive (something like a very minimal RTOS), perhaps with a single core running UP Linux that can serve control tasks. For the purpose of this question, assume that both environments provide the same level of performance guarantees in any measurable aspect of run time performance, including number of meaningful actions per second, jitter, latency, real time considerations - the works. (And yes, I realize this is by far not a trivial assumption at all; bear with me.) How would you justify going with a Linux SMP based solution rather than a bare metal thin executive solution? The question may seem silly. It certainly seems obvious to me - but I have to convince someone who does not think the same. Could you help make a list of arguments in favor of choosing a real SMP aware OS (Linux) vs. a bare metal executive, assuming performance guarantees are NOT an issue? Many thanks.

    Read the article

  • Do you need to know Java before trying Scala?

    - by gizgok
    I'm interested in learning Scala. I've been reading a lot about it, and a lot of people value it because it has an actor model which is better for concurrency, it handles XML in a much better way, and it has first-class functions. My question is: do you need to know Java to understand/appreciate the way things work in Scala? Is it better to first take a stab at Java and then try Scala, or can you start Scala with absolutely no Java background?

    Read the article

  • How are you using the Managed Extensibility Framework?

    - by dboarman
    I have been working with MEF for about 2 weeks. I started thinking about what MEF is for, researching to find out how to use it, and finally implementing a host with 3 modules. The contracts are proving to be easy to grasp and the modules are easily managed. Although MEF has a very practical use, I am wondering to what extent. I mean, is everyone going to be rewriting existing applications for extensibility? Yes, that sounds, and is, insanely impractical. Rhetorically speaking:

      - How is MEF affecting the current trends in programming?
      - Have you begun looking for opportunities to use MEF?
      - Have you begun planning a major re-write of an existing app that may benefit from extensibility?

    That said, my questions are:

      - How do I know when I should plan a new project with extensibility?
      - How will I know if an existing project needs to be re-written for extensibility?
      - Is anyone using MEF?

    Read the article

  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D), and was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified): there are two primary models. In one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling development of valuable new products, processes, and services. The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, fixing bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still, how do you qualify whether something is new knowledge or existing knowledge which is just rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its nearly immediate profit or immediate improvement. That's still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or R&D to:

      - Develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing ones, or ones which will be written in the future) which access the database?
      - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
      - Design a new communication protocol to allow faster replication of data between two data centers of the company?
      - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
      - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience?
      - Enhance an existing application by adding gestures on tactile screens, after doing studies and testing that show that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
      - Find a way to strongly enhance the power usage effectiveness (PUE) of a data center?
      - Create a Domain-Specific Language (DSL)?

    In short, how could I determine whether I'm doing R&D while working on something?

    Read the article

  • How do you achieve a numeric versioning scheme with Git?

    - by Erlend
    My organization is considering moving from SVN to Git. One argument against moving is: how do we do versioning? We have an SDK distribution based on the NetBeans Platform. As the SVN revisions are simple numbers, we can use them to extend the version numbers of our plugins and SDK builds. How do we handle this when we move to Git? Possible solutions:

      - Using the build number from Hudson (problem: you have to check Hudson to correlate that to an actual Git commit)
      - Manually upping the version for nightly and stable (problem: learning curve, human error)

    If someone else has encountered a similar problem and solved it, we'd love to hear how.
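
    One common workaround (my own suggestion, not something from the question) is to derive the number from the repository itself via git describe, which counts commits since the last tag; it assumes you tag releases. A Python sketch:

        # `git describe --tags --long` yields output like "v1.2-14-g2414721"
        # (last tag, commits since that tag, abbreviated hash). Requires at
        # least one tag in the repository.
        import subprocess

        def build_version():
            desc = subprocess.check_output(
                ["git", "describe", "--tags", "--long"], text=True).strip()
            tag, commits, short_hash = desc.rsplit("-", 2)
            return "%s.%s (%s)" % (tag.lstrip("v"), commits, short_hash)

        print(build_version())   # e.g. "1.2.14 (g2414721)"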

    Read the article

  • What would you say to a bunch of software engineering students on their first day at college?

    - by Álvaro
    Next Friday I'm giving a short (30 min.) talk to a bunch of software engineering students who will be attending the same university I did. Some context:

      - The place is Montevideo, Uruguay.
      - The university is Universidad de la República (a public, free university).
      - The Software Engineering programme takes 5 years (if you're very good and don't start working early).
      - Around 800 new students per year; around 80 graduates per year. Conditions are harsh, particularly in the first two years.
      - Most of them probably have no idea what software engineering or programming is.

    My goal would be to somehow give them an idea of the field and hopefully motivate them to endure the hardships ahead to eventually become successful developers. So the question is: what would you tell these people?

    Read the article

  • How to coach a developer with dyslexia to improve his spelling and grammar capabilities?

    - by Uwe Keim
    Just having read this question regarding developers with dyslexia, I still have some open questions on how to deal with it: I have been working on a project for approx. 6 months with a new developer who just finished university. I see that his code quality is high; what he's missing is the ability to write texts (even short ones) in an error-free manner (both spelling and grammar errors). He is working on some UI stuff (VS.NET 2010, ASP.NET 4) and, besides coding, has to write short texts for labels, buttons, grid view headers, page titles, etc. Since even those texts contain errors, no matter how much I try to discuss the need for a professional, text-error-free UI, he can't seem to get this right, although he really tries. So my questions are: Are there any hints on how he (or I) should proceed to enhance the text quality? Do you know any tools (like inline spell checkers) for VS.NET to highlight spelling and grammar errors? (We are working on a German-only UI, if this is important to know.)

    Read the article

  • Does it help to be a core programmer of a product (meant for social good) for getting into a Ph.D. at a top university in the USA, say top 20?

    - by Maddy.Shik
    Hey, I am working as a core developer on a product which will be launched in the US market in a few months if successful. Can this factor improve my chances of getting into a Ph.D. programme at a good university (say top 20 in the US)? Normally good universities like CMU, Stanford, MIT and Cornell are more interested in a student's profile: research work, undergraduate school, etc. I did not graduate from a very good university; it is only ranked in the top 20 in India. Nor have I done any research work so far. But being one of the founding members of the company and developing the product for it, I want to know if this factor can help, and to what extent. For universities ranked lower than 20, what matters most is the GRE General score and GPA, but I guess top universities must appreciate a person's real efforts.

    Read the article

  • Open-source, noncommercial license?

    - by nick-russler
    Hey, I want to publish my software under an open-source license with the following conditions. You are allowed to:

      - Share: copy, distribute and transmit the work
      - Use a modified version of the code in your application

    You are not allowed to:

      - Publish modified versions of the code
      - Use the code in anything commercial

    Is there a software license out there that fits my needs? (Crosspost: http://stackoverflow.com/questions/4558546/opensource-noncommercial-license)

    Read the article

  • Hints to properly design UML class diagram

    - by mic4ael
    Here is the problem: I have just started learning UML, and that is why I would like to ask for a few cues from experienced users on how I could improve my diagram, because I know it lacks a lot of details, it surely has mistakes, etc. A renovation company hires workers. Each employee has some kind of profession, which is required to work in a particular position. Workers work in groups of at most 15 members - so-called production units - which specialize in a specified kind of work. Each production unit is managed by a foreman. In order to perform job tasks, every worker needs proper tools. There are two kinds of tools - light and heavy. To use heavy tools, a worker must have proper privileges. A worker can have at most 3 light tools taken from the warehouse.
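
    One way to sanity-check such a diagram is to rough out the domain in code first; here is a quick Python sketch (class and attribute names are my own reading of the problem statement, not a given design):

        class Profession: ...

        class Tool: ...                        # light tools need no privileges

        class HeavyTool(Tool): ...             # usable only with proper privileges

        class Worker:
            def __init__(self, profession):
                self.profession = profession   # required for a particular position
                self.privileges = set()        # governs heavy tool use
                self.light_tools = []          # at most 3, taken from the warehouse

        class ProductionUnit:
            def __init__(self, foreman, specialty):
                self.foreman = foreman         # a Worker who manages the unit
                self.specialty = specialty     # the kind of work it does
                self.members = []              # at most 15 Workers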

    Read the article

  • Data Structure for Small Number of Agents in a Relatively Big 2D World

    - by Seçkin Savasçi
    I'm working on a project where we will implement a kind of world simulation: a square 2D world. Agents live in this world and make decisions like moving or replicating themselves based on their neighbor cells (world = grid) and some extra parameters (which are not based on the state of the world). I'm looking for a data structure to implement such a project. My concerns are: I will implement this 3 times - sequentially, using OpenMP, and using MPI - so it would be quite good if I could use the same structure for all three. The first thing that comes up is keeping a 2D array for the world and storing agent references in it, then simulating the world for each time slice by checking every cell in each iteration and doing further processing if an agent is found in the cell. The downside is: what if I have a 1000x1000 world and only 5 agents in it? It will be overkill for both the sequential and parallel versions to check each cell and look for possible agents in it. I could use a quadtree and store agents in it, but then how can I get the information about neighbor cells? Please let me know if I should elaborate more.
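
    For what it's worth, a sketch (my own, under the question's sparse-world assumption) of the usual answer to the 5-agents-in-a-million-cells problem: store only the occupied cells in a hash map keyed by coordinates, so a time step costs O(agents) rather than O(cells), and neighbor lookup stays trivial, unlike with a quadtree:

        class World:
            def __init__(self, width, height):
                self.width, self.height = width, height
                self.agents = {}                 # (x, y) -> agent; occupied cells only

            def neighbors(self, x, y):
                # Agents in the 8 surrounding cells (no wrap-around handling here).
                return [self.agents[(x + dx, y + dy)]
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0)
                        and (x + dx, y + dy) in self.agents]

            def step(self):
                # Iterate over agents, not cells; snapshot because agents move.
                for (x, y), agent in list(self.agents.items()):
                    pass   # decide: move, replicate, ... based on self.neighbors(x, y)

    For the MPI version the same dict can back each rank's partition of the grid, with border (ghost) cells exchanged between neighbors each step.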

    Read the article

  • Using automated BDD GUI tests to keep user documentation screenshots up to date?

    - by k3b
    Are there developers out there who (ab)use the CaptureScreenshot() function of their automated GUI tests to also create up-to-date screenshots for the user documentation? Background: within the lifetime of an application, its GUI elements are constantly changing. It takes a lot of work to keep the user documentation up to date, especially if the example data in the pictures should match the textual description. If you already have automated BDD GUI tests, why not let them take screenshots at certain points? I am currently playing with web apps in .NET + SpecFlow + Selenium, but this topic also applies to other BDD engines (JRuby-Cucumber, MSpec, RSpec, ...) and GUI test frameworks (WatiN, Watir, White, ...). Any experience, thoughts or links on this topic would be helpful. How is the cost/benefit relation? Is it worth the effort? What are the drawbacks? See also: Is it practical to retroactively write specifications documenting a system via automated acceptance tests?
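
    In the Selenium half of such a stack, the capture call already exists (save_screenshot() in the Python bindings); a sketch, with made-up URLs and file names:

        from selenium import webdriver

        driver = webdriver.Firefox()
        driver.get("http://localhost:8080/login")             # hypothetical app URL
        # ... drive the scenario to the state the manual documents ...
        driver.save_screenshot("docs/images/login-page.png")  # target dir must exist
        driver.quit()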

    Read the article

  • Is Haskell's type system an obstacle to understanding functional programming?

    - by Eric Wilson
    I'm studying Haskell for the purpose of understanding functional programming, with the expectation that I'll apply the insight I gain in other languages (Groovy, Python, JavaScript mainly). I chose Haskell because I had the impression that it is very purely functional and wouldn't allow for any reliance on state. I did not choose to learn Haskell because I was interested in navigating an extremely rigid type system. My question is this: is a strong type system a necessary by-product of an extremely pure functional language, or is it an unrelated design choice particular to Haskell? If it is the latter, I'm curious what would be the most purely functional language that is dynamically typed. I'm not particularly opposed to strong typing - it has its place - but I'm having a hard time seeing how it benefits me in this educational endeavor.

    Read the article

  • Real-time chat in Ruby on Rails

    - by Skydreamer
    First, I'm sorry because I know this question has been asked many times, but I still haven't found the answer to my problem. I want to implement a real-time chat for my Rails app, but I can't really host the server which handles the sockets. I've tried Faye, but it needs a server. I've also heard of Pusher, but it's limited to 20 users at a time on the chat, and I can't really be sure there won't be more. I've thought of IRC, but I think I can't really embed it into a Rails app; maybe it needs sockets... So here's my problem: can I implement a real-time chat without owning a server? What can you advise me? Thank you for your answers.

    Read the article

  • Restrictive routing best practices for Google App Engine with python?

    - by Aleksandr Makov
    Say I have a simple structure:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/profile', 'pages.profile'),
            (r'/dashboard', 'pages.dash'),
        ], debug=True)

    Basically all pages require authentication except for the login. If a visitor tries to reach a restricted page and isn't authorized (or lacks privileges), he gets redirected to the login view. The question is about the routing design: should I check the auth and ACL privileges in each of the modules (pages.profile and pages.dash in the example above), or just pass all requests through a single routing mechanism:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/.+', 'router')
        ], debug=True)

    I'm still quite new to GAE, but my app requires authentication as well as ACL. I'm aware that there's a login directive at the server config level, but I don't know how it works, how I can tie it in with my ACL logic, and, what's worse, I cannot estimate the time needed to get it running. Besides, it looks to provide only 2 user groups: admin and user. In any case, this is the configuration I use:

        handlers:
        - url: /favicon.ico
          static_files: static/favicon.ico
          upload: static/favicon.ico

        - url: /static/*
          static_dir: static

        - url: .*
          script: main.app
          secure: always

    Or am I missing something here - can ACL be set in the config file? Thanks.
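
    For what it's worth, a sketch of the 'check in one place' variant that avoids a catch-all route: a base handler whose dispatch() runs the test before any page code, here using GAE's built-in users API for the login check (your own ACL logic would extend it; the handler names are illustrative):

        import webapp2
        from google.appengine.api import users  # GAE's built-in auth

        class SecureHandler(webapp2.RequestHandler):
            # Every handler deriving from this runs the auth check
            # before its get()/post() is dispatched.
            def dispatch(self):
                if users.get_current_user() is None:
                    return self.redirect('/')        # back to the login view
                # ACL checks beyond "is logged in" would go here.
                super(SecureHandler, self).dispatch()

        class Dash(SecureHandler):                   # would back the /dashboard route
            def get(self):
                self.response.write('dashboard')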

    Read the article

  • Erlang web frameworks survey

    - by Zachary K
    (Inspired by a similar question about Haskell.) There are several web frameworks for Erlang, like Nitrogen, Chicago Boss, and Zotonic, and a few more. In what aspects do they differ from each other? For example:

      - Features (e.g. server only, or also client scripting; easy support for different kinds of database)
      - Maturity (e.g. stability, documentation quality)
      - Scalability (e.g. performance, handy abstractions)
      - Main targets

    Also, what are examples of real-world sites / web apps using these frameworks?

    Read the article

  • Popular programming books which have been translated into Russian

    - by arikfr
    I'm looking for recommendations of popular programming books that have been translated into Russian. I'm talking about books like:

      - Test-Driven Development by Example by Kent Beck
      - Code Complete
      - The Pragmatic Programmer

    And other books like them. Also, recommendations for books in Russian by other authors but about similar topics (TDD, BDD, general programming methodologies) will be appreciated.

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, normally the computer or operating system will represent it in terms of powers of 1024 - a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or giga-bytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". It would feel like using the binary definition of "gigabyte" would artificially inflate the byte count of a device, making drive-makers have to pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or to simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
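
    As a quick worked example of the drive-label arithmetic mentioned above:

        decimal_tb = 10**12             # what the drive maker sells as "1 TB"
        binary_gb = 2**30               # what the OS calls a "GB" (really a GiB)
        print(decimal_tb / binary_gb)   # ~931.32, hence the "931 GB" readout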

    Read the article

  • Making dummy applications while not involved in LIVE work [closed]

    - by Ratan Sharma
    I know this is subjective, but I am looking for some genuinely helpful points/advice here, which will help some people get motivated. In our company, many people are on the bench (not assigned any live project work) and they do not want to experiment with things on their own. What would be a good way to motivate them to keep up their learning spirit? I personally feel that one learns more, and puts in more effort, on live client work than by just practicing things and making dummy applications. Am I right here, or is it just my thinking?

    Read the article

  • Searching for a key in a multi-dimensional array and adding it to another array [migrated]

    - by Moha
    Let's say I have two multi-dimensional arrays:

        array1 (
            stuff1 = array (
                data = 'abc'
            )
            stuff2 = array (
                something = '123'
                data = 'def'
            )
            stuff3 = array (
                stuff4 = array (
                    data = 'ghi'
                )
            )
        )

        array2 (
            stuff1 = array ( )
            stuff3 = array (
                anything = '456'
            )
        )

    What I want is to search for the key 'data' in array1 and then insert the key and its value into array2, regardless of the depth. So wherever the key 'data' exists in array1, it gets added to array2 at the exact same depth (and under the same key names) as in array1, AND without modifying any other keys. How can I do this recursively?
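
    A sketch of the recursion in Python, treating the arrays as nested dicts (the question looks like PHP, but the shape of the solution is the same); the helper first checks that a subtree actually contains the key, so no empty branches get created:

        def contains_key(d, key):
            return key in d or any(isinstance(v, dict) and contains_key(v, key)
                                   for v in d.values())

        def copy_key(source, target, key='data'):
            for k, v in source.items():
                if k == key:
                    target[k] = v                                # copy at this exact depth
                elif isinstance(v, dict) and contains_key(v, key):
                    copy_key(v, target.setdefault(k, {}), key)   # recreate the path

        array1 = {'stuff1': {'data': 'abc'},
                  'stuff2': {'something': '123', 'data': 'def'},
                  'stuff3': {'stuff4': {'data': 'ghi'}}}
        array2 = {'stuff1': {}, 'stuff3': {'anything': '456'}}
        copy_key(array1, array2)
        # array2 now also holds stuff2 and the nested data keys; 'anything' is untouched.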

    Read the article
