Search Results



  • How can a large, Fortran-based number crunching codebase be modernized?

    - by Dave Mateer
    A friend in academia asked me for advice (I'm a C# business application developer). He has a legacy codebase in the medical imaging field, written in Fortran, that does a huge amount of number crunching using vectors. He has been running it on a cluster (30ish cores) and has now moved to a single workstation with 500ish GPU cores in it. The question is where to go next with the codebase, so that:

    - other people can maintain it over the next 10-year cycle
    - tweaking the software gets faster
    - it can run on different infrastructures without recompiles

    After some research from me (this is a super interesting area), some options are:

    - use Python and CUDA from Nvidia
    - rewrite in a functional language, for example F# or Haskell
    - go cloud-based and use something like Hadoop and Java
    - learn C

    What has been your experience with this? What should my friend be looking at to modernize his codebase?

    UPDATE: Thanks @Mark and everyone who has answered. The reason my friend is asking this question is that it's the perfect time in the project's lifecycle to do a review. Bringing research assistants up to speed in Fortran takes time (I like C#, and especially the tooling, and can't imagine going back to older languages!). I liked the suggestion of keeping the pure number crunching in Fortran but wrapping it in something newer, perhaps Python, as that seems to be gaining a stronghold in academia as a general-purpose programming language that is fairly easy to pick up. See Medical Imaging and a guy who has written a Fortran wrapper for CUDA: "Can I legally publish my Fortran 90 wrappers to Nvidia's CUFFT library (from the CUDA SDK)?"
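    If the "keep the Fortran, wrap it in something newer" route is appealing, here is a minimal sketch of what calling an existing Fortran kernel from Python can look like with ctypes. The routine, library name and signature are hypothetical stand-ins for whatever the codebase actually exports (numpy's f2py is the more automated alternative):

    ```python
    import ctypes
    import numpy as np

    # Load the compiled kernels, e.g. built with:
    #   gfortran -shared -fPIC kernels.f90 -o libkernels.so
    lib = ctypes.CDLL("./libkernels.so")   # hypothetical library name

    # gfortran lowercases names and appends an underscore, and Fortran
    # passes every argument by reference.
    vec_scale = lib.vec_scale_   # hypothetical: subroutine vec_scale(x, n, factor)
    vec_scale.restype = None
    vec_scale.argtypes = [
        np.ctypeslib.ndpointer(dtype=np.float64, flags="F_CONTIGUOUS"),
        ctypes.POINTER(ctypes.c_int),
        ctypes.POINTER(ctypes.c_double),
    ]

    x = np.asfortranarray(np.arange(10, dtype=np.float64))
    n = ctypes.c_int(x.size)
    factor = ctypes.c_double(2.5)
    vec_scale(x, ctypes.byref(n), ctypes.byref(factor))  # scales x in place
    print(x)
    ```

    The point of the design is that the numerics stay in validated Fortran while research assistants work at the Python level, where the tooling and hiring pool are friendlier.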


  • planning the same app for both OSX and iOS

    - by P5music
    I would like to ask what the best strategy is for creating an application that will be developed both for Mac and iPad, so that porting it from one platform to the other takes minimum effort. I could start from the iPad version, for example, but I would rather try to build both at the same time. The application would in fact be an iPad-style one on the Mac too. How should I plan the project? What are the main tricks for reaching this goal easily?


  • The best Drupal and JavaScript developer?

    - by hakanito
    I've read a lot of JS articles and books by Nicholas Zakas and Addy Osmani, in my opinion evangelists in the field. But I am also a Drupal developer, and these guys are not. Many of the techniques they talk about, such as AMD and RequireJS, are great, but it's hard to know how to integrate them when it comes to Drupal (and do it right, of course). So my question is whether there are any recognized developers out there with strong JavaScript AND Drupal experience?


  • How to choose a job? [closed]

    - by Aadi Droid
    When given multiple opportunities at various software giants, how should a fresher out of school decide which company to go for? Just as an example, I have offers from two companies; I won't name them, but they are the two biggest dream companies for any SDE. One company offers tremendous learning opportunities and good pay, but coupled with really bad employee support and perks. The other offers a relaxed work environment where learning happens by choice, with slightly lower pay but amazing employee facilities and perks. Assuming the fresher plans to go for his master's degree in two years, what are the most important things he should be looking at?


  • How does I/O work for large graph databases?

    - by tjb1982
    I should preface this by saying that I'm mostly a front-end web developer, trained as a musician, but over the past few years I've been getting more and more into computer science. One idea I have for a fun toy project to learn about data structures and C programming is to design and implement my own very simple database that would manage an adjacency list of posts. I don't want SQL (maybe I'll do my own query language? I'm just having fun). It should support ACID. It should be capable of storing, let's say, 1 TB.

    With that in mind, I was trying to think of how a database even stores data, without regard to data structures necessarily. I'm working on Linux, and I've read that in that world "everything is a file," including hardware (like /dev/*), so that obviously has to apply to a database too, and it clearly does: whether it's MySQL or PostgreSQL or Neo4j, the database itself is a collection of files you can see in the filesystem.

    That said, there would come a point of scale where loading the entire database into primary memory just wouldn't work, so it doesn't make sense to design it with that mindset (I assume). However, reading from secondary memory would be much slower, and regardless, some portion of the database has to be in primary memory for you to be able to do anything with it. I read this post: "Why use a database instead of just saving your data to disk?", and I found it difficult to understand how other databases, like SQLite or Neo4j, read and write from secondary memory and are still very fast (faster, it would seem, than simply writing files to the filesystem, as the above question suggests). It seems the key is indexing. But even indexes need to be stored in secondary memory. They are inherently smaller than the database itself, but indexes in a very large database might be prohibitively large too.

    So my question is: how is I/O generally done with large databases like the one I described above, storing a big adjacency list of at least 1 TB? If indexing is more or less the answer, how exactly does indexing work? What data structures should be involved?
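    The core idea is that you never load the data, only a map from key to byte offset, and then seek. A deliberately toy sketch of that idea (file layout and helper names are made up for illustration; a real engine would use a B-tree or hash index so the index itself can live mostly on disk, plus a page cache and a write-ahead log for the ACID part):

    ```python
    import json

    DATA = "adjacency.log"   # hypothetical data file

    def append_node(node_id, neighbors, index):
        """Append one adjacency record and remember its byte offset."""
        with open(DATA, "ab") as f:
            index[node_id] = f.tell()
            f.write(json.dumps({"id": node_id, "adj": neighbors}).encode() + b"\n")

    def read_node(node_id, index):
        """One seek + one line read, no matter how big the file grows."""
        with open(DATA, "rb") as f:
            f.seek(index[node_id])
            return json.loads(f.readline())

    index = {}                        # key -> byte offset; the part kept in RAM
    append_node("a", ["b", "c"], index)
    append_node("b", ["a"], index)
    print(read_node("b", index))      # {'id': 'b', 'adj': ['a']}
    ```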


  • New cloud development workflow using Github, Cloud9ide and CloudFoundry

    - by weng
    So times are changing towards cloud development/computing. I'm trying to work out the new "cloud" workflow based on the services I'm going to use: Github, Cloud9ide and CloudFoundry. Here is what is on my mind: Github acts as the central (main) repo, just like yesterday's local filesystem, and every service bases its work on this main repo. Workflow:

    - Github: I create a new Github repo to serve as the main repo for the project.
    - Cloud9ide: I open my Github repo and write my tests and implementation (BDD/TDD). When I'm ready I save (commit) it to the main repo on Github.
    - X: A running instance of Jenkins detects that someone has committed, fetches the latest commit, builds, deploys, tests (yeti and/or selenium) and reports whether the tests passed. If not, I make another commit until all tests pass.
    - X: I run the CloudFoundry commands to push the main Github repo to CloudFoundry's servers, which deploy my app automatically.

    What I'm still confused about is where this X environment will be. On a local server where I have to install Jenkins? Could I install it on Cloud9ide (when Java is supported), or will it be on another cloud service? Either way, that X environment has to be able to fetch (clone) the Github repo and run the build scripts. And since the concept of Cloud9ide is very new and there haven't been any predecessors, I really wonder what the workflow will look like. We all know Github's workflow. We now know CloudFoundry's workflow (deploy/scale with a RESTful API/command-line tool). But how Cloud9ide will operate is still somewhat unclear to me. Someone at Cloud9ide mentioned that there will be buttons like "deploy" so I can deploy with one click, but that, I guess, will depend on what services the deploy process hooks into. Could someone shed light on this cloud workflow topic and fill in the gaps? Thanks.
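    For what it's worth, the "X" box can be almost any machine that can run git and your scripts; Jenkins is essentially the loop below plus a UI around it. A deliberately naive polling sketch in Python (the repo URL, test command and deploy command are placeholders; `vmc push` was the CloudFoundry CLI deploy of that era):

    ```python
    import subprocess
    import time

    REPO = "git@github.com:me/myapp.git"   # hypothetical main repo
    WORKDIR = "/tmp/myapp"

    def ok(cmd, cwd=None):
        return subprocess.call(cmd, cwd=cwd) == 0

    subprocess.call(["git", "clone", REPO, WORKDIR])
    seen = None
    while True:
        ok(["git", "fetch", "origin"], cwd=WORKDIR)
        head = subprocess.check_output(
            ["git", "rev-parse", "origin/master"], cwd=WORKDIR).decode().strip()
        if head != seen:                          # a new commit landed on the main repo
            seen = head
            ok(["git", "reset", "--hard", head], cwd=WORKDIR)
            if ok(["npm", "test"], cwd=WORKDIR):           # build + test (yeti/selenium/...)
                ok(["vmc", "push", "myapp"], cwd=WORKDIR)  # CloudFoundry deploy
            # else: report the failure and wait for the next commit
        time.sleep(60)
    ```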


  • When should I make the first commit to source control?

    - by Kendall Frey
    I'm never sure when a project is far enough along to first commit to source control. I tend to put off committing until the project is 'framework-complete', and primarily commit features from then on. (I haven't done any personal projects large enough to have a core framework too big for this.) I have a feeling this isn't best practice, though I'm not sure what all could go wrong.

    Let's say, for example, I have a project which consists of a single code file. It will take about 10 lines of boilerplate code, and 100 lines to get the project working with extremely basic functionality (1 or 2 features). Should I first check in:

    - the empty file?
    - the boilerplate code?
    - the first features?
    - at some other point?

    Also, what are the reasons to check in at a specific point?


  • tail-like view on HTML logfiles

    - by h0b0
    I'm working on an application that creates HTML log files. I'm tired of having to manually reload and scroll to the bottom in the browser to see the latest entries. A solution that does not really satisfy me is using the Firefox plugins ReloadEvery and ScrollyFox: in many situations the reloading frequency and scrolling speed are just too slow. Of course I could actually use tail, but I would prefer a rendered HTML page. Do you have any suggestions? Firefox extensions are preferred, but any other tip is appreciated too.
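    One plugin-free workaround, sketched here in modern Python under the assumption that the log is a self-contained HTML file: serve the file yourself and append a tiny auto-refresh/auto-scroll snippet, so any browser behaves like a rendered `tail -f`. Path and port are illustrative:

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LOG = "app.log.html"   # hypothetical path to the HTML log file
    # Reload every 2 seconds and jump to the newest entries.
    SNIPPET = (b"<meta http-equiv='refresh' content='2'>"
               b"<script>window.scrollTo(0, document.body.scrollHeight);</script>")

    class TailHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            with open(LOG, "rb") as f:
                body = f.read()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body + SNIPPET)  # snippet last, so it scrolls after rendering

    HTTPServer(("localhost", 8000), TailHandler).serve_forever()
    ```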


  • Can an agile shop really score 12 on the Joel Test?

    - by Simon
    I really like the Joel test, use it myself, and encourage my staff and interviewees to consider it carefully. However I don't think I can ever score more than 9 because a few points seem to contradict the Agile Manifesto, XP and TDD, which are the bedrocks of my world. Specifically: the questions about schedule, specs, testers and quiet working conditions run counter to what we are trying to create and the values that we have adopted in being genuinely agile. So my question is: is it possible for a true Agile shop to score 12?


  • Ashamed to admit using jQuery?

    - by Matt Stevens
    Something I've noticed over the past few weeks is how many big commercial websites use jQuery combined with lots of plugins, but don't admit it. They will rename the main library to something obscure, as well as the plugins. Quite a few will even remove the comments that contain the MIT/GPL license information (I noticed just today that odeon.co.uk has done exactly this). Why are they doing this? Are they embarrassed by the fact that they are using a free and open-source library?


  • iPhone App Store Release Question!

    - by Ahmad Kayyali
    I am developing an application whose purpose is to view files uploaded to a host server, and it has credentials that must be entered on the login page to authenticate the user. My question: when I post my application to the App Store, how is Apple supposed to test, or at least view, my application when doing so requires entering valid credentials that I am not supposed to know, since they are private to my client? Any guidance would be greatly appreciated.


  • Design Patterns: Should I learn them?

    - by prelic
    So it's kinda weird asking two questions back-to-back, but they aren't very related and I didn't want to combine them, but I'm not spamming questions, I promise! Anyway, I'm a recent college grad, and my education only touched on design patterns...we implemented a few simple ones, touched on the fact that there were more complicated ones, and were instructed to turn to the GoF book if we wanted to learn more. My question is, is it worth learning the patterns in the GoF book? To me, it's always seemed counter-intuitive to try and make a problem fit a classic pattern, but obviously the book, and the patterns, are famous for a reason. Do they show up enough that I should be learning them? Thanks again!


  • When you are expecting a promotion, do you prefer a technical or an administrative job? [closed]

    - by Darf Zon
    As a programmer, I have been offered a promotion to project manager, but my feeling is that I can make a more effective contribution in a technical role than in an administrative one. When should I accept the promotion? Generally speaking, I think people should do what they love and what they like to do. When someone is offered a promotion, it is usually because he has been doing a great job, and he would certainly learn new things in the new position and obviously get better financial remuneration; but if the new post really is something you do not like, it will do you no good. That's my opinion.


  • Ruby workflow in Windows

    - by Rig
    I've done some searching and haven't quite come across the answer I am looking for. I do not think this is a duplicate of this question; based on the mix of answers there, I believe Windows could be a suitable development environment.

    I have been developing in Ruby (mostly Rails, but not entirely) for about a year now, for personal projects, on a MacBook Pro. However, that machine has met an untimely death and has been replaced with a nice Windows 7 machine. Ruby development felt almost natural on the Mac after doing some research and setting up the typical stack. My environment then included the standard (Linux-like) stuff built into OS X, TextWrangler, Git, RVM, et al. Not too much of a deviation from what the 'devotees' tend to assume.

    Now I am setting up my new Windows box to continue that development. What would my development environment look like? Should I just cave and run Linux in a VM? Ideally I would develop natively on Windows. I am aware of the Windows Ruby installer. It seems decent, but it's not exactly as nice as RVM in the OS X/Linux world. Mercurial/Git are available, so I would assume they play into the stack. Does one develop entirely in Windows? Does one run a web server in a Linux VM and use it as a test bed while developing in Windows? Do it all in a VM? What does the standard Windows Ruby development environment look like, and what is the workflow? What would a typical walkthrough be for adding a new feature to an ongoing project, and what would the technology stack look like?


  • Is hashing of just "username + password" as safe as salted hashing?

    - by randomA
    I want to hash "user + password". EDIT: pre-hashing "user" would be an improvement, so my question also covers hashing "hash(user) + password". If the same user across different sites is a problem, then change to hashing "hash(serviceName + user) + password". From what I have read about salted hashes, using "user + password" as the input to the hash function should help us avoid attacks based on precomputed reverse-hash tables. The same can be said about rainbow tables. Is there any reason why this is not as good as salted hashing?
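    A small sketch of the difference, using Python's hashlib for illustration. The scheme from the question is fully deterministic: anyone who knows the scheme can precompute a table of digests for common (service, user, password) combinations, and the stored value never changes when the user keeps the same password. A random per-user salt makes every stored hash unique, even for identical passwords:

    ```python
    import hashlib
    import os

    def site_scoped_hash(service, user, password):
        # the scheme from the question: no stored randomness
        inner = hashlib.sha256((service + user).encode()).hexdigest()
        return hashlib.sha256((inner + password).encode()).hexdigest()

    def salted_hash(password):
        salt = os.urandom(16)                 # fresh randomness per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest                   # store both alongside the user

    # Same inputs -> same output, every time, on every system:
    print(site_scoped_hash("example.com", "alice", "hunter2"))
    # Same password -> a different stored value on every call:
    print(salted_hash("hunter2"))
    print(salted_hash("hunter2"))
    ```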


  • Should a domain expert make class diagrams?

    - by Matthieu
    The domain expert in our team uses UML class diagrams to model the domain model. As a result, the class diagrams are technical models rather than domain models (they serve as a sort of technical specification for developers, because the developers don't have to do any design work; they just have to implement the model). In the end, the domain expert is doing the job of the architect/technical expert, right? Is it normal for a domain expert (not a developer or technical profile) to produce class diagrams? If not, what kind of modeling should he be using?


  • Are project managers useful in Scrum?

    - by Martin Wickman
    There are three roles defined in Scrum: Team, Product Owner and Scrum Master. There is no project manager; instead, the project manager's job is spread across the three roles. For instance:

    - The Scrum Master: responsible for the process; removes impediments.
    - The Product Owner: manages and prioritizes the list of work to be done to maximize ROI; represents all interested parties (customers, stakeholders).
    - The Team: self-manages its work by estimating and distributing it among themselves; responsible for meeting their own commitments.

    So in Scrum there is no longer a single person responsible for project success. There is no command-and-control structure in place. That seems to baffle a lot of people, specifically those not used to agile methods, and of course PMs. I'm really interested in this and in what your experiences are, as I think this is one of the things that can make or break a Scrum implementation. Do you agree with Scrum that a project manager is not needed? Do you think such a role is still required? Why?


  • Learning to be a good developer: what parts can you skip over?

    - by Andrew M
    I have set myself the goal of becoming a decent developer by this time next year. By this I mean full experience of the development 'lifecycle', a few good apps/sites/webapps under my belt, and, most importantly, being able to work at a steady pace without getting sidelined for hours by some should-know-this-already technique.

    I'm not starting from scratch. I've written a lot of HTML/CSS, SQL, JavaScript, Python and VB.NET, and studied other languages like C and Java. I know about things like OOP, design patterns, TDD, complexity, computational linguistics, pointers/references, functional programming, and other academic/theoretical matters. It's just that I can't say I've really done these things yet.

    So I want to get up to speed, and I want to know what things I can leave until a later date. For instance, studying algorithms and the maths behind them is interesting and all, but so far I've hardly needed to write anything but the most basic nested loops. Investigating assembly to get a clearer picture of low-level operations would be cool... but I imagine it rarely impinges on daily work. On the other hand, looking at a functional programming language might help me write programs that are more comprehensible and less prone to hidden failures (at the moment I'm finding the biggest difficulty is when the complexity of the app exceeds my capacity to understand it; for instance, passing data around was fine... until I had to start doing it with AJAX, which was a painful step up). I could spend time working through case studies of design patterns, but I'm not sure how many of them get used in 'real life'.

    I'm a programmer with basic abilities; what skills should I focus on developing? (Also, my Unix skills are very weak, as is my knowledge of Windows configuration... not sure how much time I should spend on those.)


  • What are the consequences of immutable classes with references to mutable classes?

    - by glenviewjeff
    I've recently begun adopting the best practice of designing my classes to be immutable, per Effective Java [Bloch2008]. I have a series of interrelated questions about degrees of mutability and their consequences. I have run into situations where a (Java) class I implemented is only "internally immutable" because it holds references to other, mutable classes. In this case, the class under development appears from the external environment to have state.

    - Do any of the benefits (see below) of immutable classes still hold for merely "internally immutable" classes?
    - Is there an accepted term for the aforementioned "internal mutability"? Wikipedia's immutable object page uses the unsourced term "deep immutability" to describe an object whose references are also immutable.
    - Is the distinction between mutability and side effects/state important?

    Josh Bloch lists the following benefits of immutable classes. They:

    - are simple to construct, test, and use
    - are automatically thread-safe and have no synchronization issues
    - do not need a copy constructor
    - do not need an implementation of clone
    - allow hashCode to use lazy initialization, and to cache its return value
    - do not need to be copied defensively when used as a field
    - make good Map keys and Set elements (these objects must not change state while in the collection)
    - have their class invariant established once upon construction, and it never needs to be checked again
    - always have "failure atomicity" (a term used by Joshua Bloch): if an immutable object throws an exception, it's never left in an undesirable or indeterminate state
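    The question is Java-centric, but the trap is easy to demonstrate in any language. A sketch in Python (in Java terms: a final field referencing a mutable List, versus defensively copying into an immutable collection in the constructor, which is Bloch's recommendation):

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Snapshot:             # "internally immutable": fields can't be rebound...
        tags: list              # ...but this field references a mutable object

    tags = ["a", "b"]
    snap = Snapshot(tags)
    # snap.tags = []            # would raise dataclasses.FrozenInstanceError
    tags.append("c")            # yet the referenced state mutates anyway
    print(snap.tags)            # ['a', 'b', 'c'] -- observable state changed

    @dataclass(frozen=True)
    class DeepSnapshot:         # deep immutability: defensive copy at the boundary
        tags: tuple
        def __post_init__(self):
            object.__setattr__(self, "tags", tuple(self.tags))

    deep = DeepSnapshot(tags)
    tags.append("d")
    print(deep.tags)            # ('a', 'b', 'c') -- unaffected
    ```

    This illustrates why benefits that depend on observable state (safe Map keys, cached hashCode, thread safety) are exactly the ones the shallow version loses.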


  • Is Unix/Linux adoption a measure of developer skill? [closed]

    - by adeptsage
    Is Linux/Unix-based OS adoption for development (or recreation) a rough measure of the awareness, adaptability and/or skill of a developer, considering factors like the learning curve for effective code testing via the CLI, basic scripting, tweaking, etc.? I have no intention of starting a flame war between Windows and Linux as better dev environments, or about personal choice of OS for development. What I do wish to know is: can the level of adaptability of a person be inferred from the fact that he programs on a Unix-based OS?


  • What's the better user experience: Waiting once at startup for a long time or waiting frequently for a short time?

    - by Roflcoptr
    I'm currently designing an application that involves a lot of calculation. I generally have two possibilities, both of which I have tested:

    1) During startup of the application I calculate only the most important values and the values that take a lot of time to compute. The user has to wait approximately 15 seconds during startup, but on the other hand a lot of user interactions require recalculation, so the user often has to wait 2-3 seconds after clicking somewhere until the application has calculated and loaded all values.

    2) I load everything during startup. This takes 90 to 120 seconds... quite a long time, but the big advantage is that all subsequent user interactions execute immediately.

    So what would you generally consider the better approach: performing all time-consuming operations during startup, or performing them when needed?
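    A third option worth benchmarking before committing to either extreme is to compute each value on first use and cache it, so you pay neither the 90-120 s startup nor a repeated 2-3 s wait per click; the truly critical values can still be precomputed (or warmed up on a background thread) at startup. A minimal sketch, with illustrative names:

    ```python
    from functools import cached_property
    import time

    class Model:
        @cached_property
        def expensive_table(self):
            time.sleep(2)                 # stand-in for the heavy calculation
            return [i * i for i in range(10)]

    m = Model()
    t = time.perf_counter()
    m.expensive_table                     # first click: pays the 2 s once
    print(f"first access: {time.perf_counter() - t:.2f}s")
    t = time.perf_counter()
    m.expensive_table                     # later clicks: served from cache
    print(f"second access: {time.perf_counter() - t:.4f}s")
    ```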


  • Security in a private web service

    - by Oni
    I am developing a web site and a web service for a small online game. Technically, I'll be using Express (Node.js) and MongoDB+Redis for the databases. This is the structure I came up with:

    - One Express server that will serve as the web service. This will connect to the databases.
    - One Express server that will provide the web site. It will connect to the web service to retrieve and push information.
    - iOS and Android applications that will interact with the web service.

    Taking into account that it is a small game, that the information transferred is not critical, and that there will NOT be third-party applications (at least for the moment), my concern is which level of security I should use in each of the scenarios:

    - security for the user playing through the web browser
    - security for the applications and the web server connecting to the web service

    I have taken a look at the different options. OAuth and/or HTTPS seem like too much for this scenario, don't they? Would it be a good option to hash the user and password with MD5 (or similar) and some salt? I would like to get some directions and investigate on my own rather than getting a response like "you should use this Node.js module...". Thanks in advance.
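    For the site-to-service hop specifically (both servers are yours, no third parties), one common lightweight middle ground between nothing and OAuth is a shared-secret HMAC signature on each request; for the browser users' passwords, a salted slow hash (bcrypt or PBKDF2) is generally preferred over salted MD5. A sketch of the request signing, shown in Python for brevity with illustrative names (the same primitives exist in Node's crypto module):

    ```python
    import hashlib
    import hmac
    import time

    SECRET = b"keep-me-out-of-source-control"   # shared by both servers

    def sign(method, path, body, ts):
        msg = f"{method}\n{path}\n{ts}\n{body}".encode()
        return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

    def verify(method, path, body, ts, signature, max_skew=300):
        if abs(time.time() - int(ts)) > max_skew:   # reject replayed requests
            return False
        expected = sign(method, path, body, ts)
        return hmac.compare_digest(expected, signature)

    ts = str(int(time.time()))
    sig = sign("GET", "/players/42", "", ts)
    print(verify("GET", "/players/42", "", ts, sig))   # True
    ```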


  • Architecture design with MyBatis mappers

    - by Wolf
    I am creating a REST web service for providing data. I am using Spring MVC for handling REST requests, and MyBatis for data access. The application should be designed so that it is easy to change the data access implementation (for example to Hibernate or something else), and it has to be fast (so I am trying to avoid unnecessary overcomplication of the design). Now my question is about the general design of the layers. I would normally use a DAO interface and then different implementations for different data access strategies, but MyBatis already uses interfaces to access the data. So I can think of two possible models, but I am not sure which one is better, or whether there is any other nice way:

    1) The controller layer uses service layer interfaces; services are then implemented for each data access strategy. For example, for MyBatis: the service implementation uses the mapper classes to access the data, does whatever it needs to do with them, and sends the results to the controller layer.

    2) The controller layer uses the service layer; the service layer uses DAO interfaces; DAOs are then implemented for each data access strategy. For example, for MyBatis: the DAO class uses the mapper interface to access the data and sends it to the service layer; the service layer then does whatever it needs to do and sends the results to the controller layer.

    I prefer the first strategy as it seems less complicated, but with it I would have to write all of the service code again for another data access strategy. What do you think? Thank you.
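    Since MyBatis mappers are already interfaces, option 1 effectively treats the mapper as the DAO, while option 2 buys the swap-out at the price of one thin extra layer. Here is the shape of option 2, sketched in Python for brevity (all names are made up; in the Java version UserDao would be an interface and MyBatisUserDao would delegate to the generated mapper):

    ```python
    from abc import ABC, abstractmethod

    class UserDao(ABC):
        @abstractmethod
        def find_by_id(self, user_id): ...

    class MyBatisUserDao(UserDao):
        def __init__(self, mapper):          # mapper: the MyBatis mapper interface
            self.mapper = mapper
        def find_by_id(self, user_id):
            return self.mapper.select_user(user_id)

    class HibernateUserDao(UserDao):         # the alternative strategy, same contract
        def __init__(self, session):
            self.session = session
        def find_by_id(self, user_id):
            return self.session.get("User", user_id)

    class UserService:
        """Business rules live here once, whichever DAO is plugged in."""
        def __init__(self, dao: UserDao):
            self.dao = dao
        def display_name(self, user_id):
            user = self.dao.find_by_id(user_id)
            return user["name"].title() if user else "anonymous"
    ```

    The service code (the part you said you don't want to rewrite) depends only on UserDao, which is exactly what option 1 gives up.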


  • Microsoft Terminology: .NET C++ vs. traditional C++

    - by Mike Clark
    I've recently been working with a team that's using both .NET C++ and pre-.NET C++. I fully understand the technical differences between the two technologies. However, I sometimes feel like I'm floundering when it comes to the terminology used to differentiate the two. Example: say we have two projects:

    - ProjectA contains "C++" code that builds a .NET assembly DLL.
    - ProjectB contains Visual C++ code that builds a traditional native Windows DLL.

    What is the best way to succinctly draw a terminological distinction between the two projects? Again, I'm not asking for an in-depth technical description of the differences between the two technologies; I'm just looking for names and labels. This is how, today, I might try to make the distinction when talking to someone: "ProjectA is a managed .NET C++ project" and "ProjectB is an unmanaged native C++ DLL project." However, I am not at all certain that this terminology is ideal, or even correct. Please describe what you feel the ideal terminology for this situation (or similar situations) might be. Feel free to motivate your answer.


  • Dictionary as DataMember in WCF after installing .NET 4.5 [migrated]

    - by Mauricio Ulate
    After installing .NET Framework 4.5 with Visual Studio 2012, whenever I obtain a service reference from a WCF service, my dictionaries are changed into arrays. For example, Dictionary<int, double> is changed into ArrayOfKeyValueOfintdoubleKeyValueOfintdouble. This happens in both Visual Studio 2012 and 2010 (both Express). I've reviewed my configuration, and the dictionary collection type in the service reference configuration is System.Collections.Generic.Dictionary; changing this doesn't make a difference. Reverting to just using Visual Studio 2010 and .NET 4.0 is not an option.

