Search Results

Search found 16554 results on 663 pages for 'programmers identity'.

  • If not gamedev, what do I do?

    - by brainydexter
    Hi, I am a game dev who was working in the game industry and then... got laid off. Ever since then, life couldn't get less stressful! During this time, I have met so many other devs who have also been laid off, irrespective of the number of years they have been in the game. Now, the problem really gets worse: since I am not a US citizen (yes, I am in the US) and am on an international visa here, I might soon have to pack my bags and go back to my native country. Going back is not bad at all, apart from the fact that gamedev is still in a very nascent stage there. There just aren't many opportunities. So, employment is the key to maintaining a valid visa status. After giving it a lot of thought, I am thinking of staying away from gamedev jobs for the time being, given their instability. This brings me to my current problem. I can't think of a domain/place where I can use my game development skills. I know graphics/simulation/visualization is huge, but I can't think straight and am left clueless where to go from here. What are some of the domains/companies where I can use my skills? I'd appreciate any insight on this (and I apologize if this is not the place to post this kind of question).

  • Multithreaded UI desktop application issues

    - by igor
    I am involved in developing a rich UI project: a desktop Windows application. The application uses asynchronous invocations and in turn must be ready to process external messages (events). The problem is clear: at first it was built as a simple prototype, it was not stress tested, and all was fine. Then the application grew: the number of calls to the server and the number of events from the server are now high and performance is low. What is more, users noticed that sometimes performance is extremely low. Asynchronous invocations are based on the thread pool (BeginInvoke, EndInvoke); external events come from a WCF service (.NET 3.5). My goal is to synchronize all tasks and assign a priority to every execution in the desktop application. My question is: is there any established practice for reaching this goal: patterns, a task priority list, something else? What should I do first, second and after that? Thanks
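
    One pattern the question hints at is a single dispatcher draining a priority queue, so server events and user-initiated calls are ordered explicitly instead of racing on the thread pool. Below is a minimal Java sketch of that idea (the question itself is .NET; all names here are illustrative assumptions, not part of the original post):

        import java.util.concurrent.PriorityBlockingQueue;

        // Hypothetical prioritized task: lower value = runs earlier.
        class PrioritizedTask implements Comparable<PrioritizedTask>, Runnable {
            final int priority;
            final Runnable work;

            PrioritizedTask(int priority, Runnable work) {
                this.priority = priority;
                this.work = work;
            }

            public int compareTo(PrioritizedTask other) {
                return Integer.compare(priority, other.priority);
            }

            public void run() {
                work.run();
            }
        }

        class PriorityDispatcher {
            private final PriorityBlockingQueue<PrioritizedTask> queue = new PriorityBlockingQueue<>();

            PriorityDispatcher() {
                Thread worker = new Thread(() -> {
                    while (true) {
                        try {
                            queue.take().run();  // one task at a time, highest priority first
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                }, "dispatcher");
                worker.setDaemon(true);
                worker.start();
            }

            void submit(int priority, Runnable work) {
                queue.put(new PrioritizedTask(priority, work));
            }
        }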

  • Python Coding standards vs. productivity

    - by Shroatmeister
    I work for a large humanitarian organisation, on a project building software that could help save lives in emergencies by speeding up the distribution of food. Many NGOs desperately need our software and we are weeks behind schedule.

    One thing that worries me in this project is what I think is an excessive focus on coding standards. We write in Python/Django and use a version of PEP 8, with various modifications, e.g. line lengths can go up to 160 chars and all lines should go that long if possible, no blank lines between imports, line wrapping rules that apply only to certain kinds of classes, lots of templates that we must use even if they aren't the best way to solve a problem, etc.

    One core dev spent a week rewriting a major part of the system to meet the then-new coding standards, throwing away several suites of tests in the process, as the rewrite meant they were 'invalid'. We spent two weeks rewriting all the functionality that was lost, and fixing bugs. He is the lead dev and his word carries weight, so he has convinced the project manager that these standards are necessary. The junior devs do as they are told. I sense that the project manager has a strong feeling of cognitive dissonance about all this but nevertheless agrees with it vehemently, as he feels unsure what else to do.

    Today I got in serious trouble because I had forgotten to put some spaces after commas in a keyword argument. I was literally shouted at by two other devs and the project manager during a Skype call. Personally I think coding standards are important, but I also think that we are wasting a lot of time obsessing over them, and when I verbalized this it provoked rage. I'm seen as a troublemaker in the team, a team that is looking for scapegoats for its failings.

    Since the introduction of the coding standards, the team's productivity has measurably plummeted; however, this only reinforces the obsession, i.e. the lead dev simply blames our non-adherence to the standards for the lack of progress. He believes that we can't read each other's code if we don't adhere to the conventions. This is starting to turn sticky. Now I am trying to modify various scripts, autopep8, pep8ify and PythonTidy, to try to match the conventions. We also run pep8 against the source code, but there are so many implicit amendments to our standard that it's hard to track them all. The lead dev simply picks faults that the pep8 script doesn't pick up and shouts at us in the next stand-up meeting. Every week there are new additions to the coding standards that force us to rewrite existing, working, tested code. Thank heavens we still have tests (I reverted some commits and fixed a bunch of the ones he removed). All the while there is increasing pressure to meet the deadline.

    I believe a fundamental issue is that the lead dev and another core dev refuse to trust other developers to do their job. But how do we deal with that? We can't do our job because we are too busy rewriting everything. I've never encountered this dynamic in a software engineering team. Am I wrong to question their adherence to coding standards? Has anyone else experienced a similar situation, and how have they dealt with it successfully? (I'm not looking for a discussion, just actual solutions people have found.)

  • Web Frameworks caring about persistence?

    - by Mik378
    I have noticed that the Play! Framework encompasses a persistence strategy (JPA, etc.). Why would a web framework care about persistence?! Indeed, wouldn't that be the job of server-side components (like EJBs, etc.)? Otherwise, the client would be too coupled with the server's business logic. UPDATE: One answer would be: it's more likely intended for simple applications that contain the whole business logic themselves. However, for large applications with well-designed layers (services, domain, DAOs, etc.), persistence is not recommended within the web client layer, since there may be several different web (or not) clients.
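
    For illustration, a minimal Java sketch of the layering the update alludes to, where the web layer talks only to a service interface and never touches persistence directly (all class names here are hypothetical):

        // Domain entity
        class Order {
            final long id;
            Order(long id) { this.id = id; }
        }

        // Persistence is hidden behind a DAO interface owned by the service layer.
        interface OrderDao {
            Order findById(long id);
        }

        // The service is what any client (web or not) depends on.
        class OrderService {
            private final OrderDao dao;
            OrderService(OrderDao dao) { this.dao = dao; }
            Order getOrder(long id) { return dao.findById(id); }
        }

        // A web controller only sees the service, so swapping the persistence
        // strategy (JPA or anything else) never touches this layer.
        class OrderController {
            private final OrderService service;
            OrderController(OrderService service) { this.service = service; }
            String show(long id) { return "order #" + service.getOrder(id).id; }
        }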

  • Pros and cons of using Grails compared to pure Groovy

    - by shabunc
    Say you (by "you" I mean an abstract guy, any guy on your team) have experience writing and building Java web apps, know about filters, servlet mappings and so on, and so on. Also, let us assume you know any SQL DB pretty well, no matter which one exactly, whether it's mysql, oracle or psql. At last, let's pretend we know Groovy and its standard libraries, for example all that JsonBuilder and XmlSlurper stuff, so we don't need Grails converters. The question is: what are the benefits of using Grails in this case? I'm not trying to start a flame war, I'm just asking to compare - what are the ups and downs of Grails development compared to pure Groovy? For instance, off the top of my head I can name two pluses: automatic DB mapping and custom GSP tags. But when I want to write a modest app which provides a small API for handling some well-defined set of data, I'm totally OK with Groovy's awesome SQL support. As for GSP, we do not use it at all, so we are not interested in custom tags either.

  • Java - learning / migrating fast

    - by Yippie-Kai-Yay
    This is not one of those questions like "How do I learn Java extremely fast, I know nothing about programming, but I heard Java is cool, yo". I have an interview for a Java Software Developer position in a couple of weeks, and the thing is that I think I know C++ really well and I am somewhat good at C# (like, here I can probably answer a lot of questions related to these languages), but I have almost zero experience with Java. I have a lot of projects written in both languages, and I participated in several open-source projects (mostly C++, though). Now, what should I do (in your opinion) to prepare myself for this Java interview? I guess migrating from C# to Java should be kind of fast, especially when you know a lot about programming in general, patterns, modern techniques, and have a lot of practical experience behind you. But still, two weeks is obviously not enough to get Java in depth - so what should I focus on to have the best chances of passing the interview? Thank you.

  • Is it necessary to create a database with as few tables as possible

    - by Shaheer
    Should we create a database structure with a minimum number of tables? Should it be designed in a way that everything stays in one place, or is it okay to have more tables? Will it in any way affect anything? I am asking this question because a friend of mine modified some database structure in MediaWiki. In the end, instead of 20 tables he was using only 8, and it took him 8 months to do that (it was his college assignment). EDIT: I am concluding the answer as: the number of tables does NOT matter, unless the case is exceptional, in which case denormalization may help. Thanks to everyone for the answers.

  • Questions re: Eclipse Jobs API

    - by BenCole
    Similar to http://stackoverflow.com/questions/8738160/eclipse-jobs-api-for-a-stand-alone-swing-project

    This question mentions the Jobs API from the Eclipse IDE:

    ...The disadvantage of the pre-3.0 approach was that the user had to wait until an operation completed before the UI became responsive again. The UI still provided the user the ability to cancel the currently running operation but no other work could be done until the operation completed. Some operations were performed in the background (resource decoration and JDT file indexing are two such examples) but these operations were restricted in the sense that they could not modify the workspace. If a background operation did try to modify the workspace, the UI thread would be blocked if the user explicitly performed an operation that modified the workspace and, even worse, the user would not be able to cancel the operation. A further complication with concurrency was that the interaction between the independent locking mechanisms of different plug-ins often resulted in deadlock situations. Because of the independent nature of the locks, there was no way for Eclipse to recover from the deadlock, which forced users to kill the application...

    ...The functionality provided by the workspace locking mechanism can be broken down into the following three aspects:

      - Resource locking to ensure multiple operations did not concurrently modify the same resource
      - Resource change batching to ensure UI stability during an operation
      - Identification of an appropriate time to perform incremental building

    With the introduction of the Jobs API, these areas have been divided into separate mechanisms and a few additional facilities have been added. The following list summarizes the facilities added:

      - Job class: support for performing operations or other work in the background.
      - ISchedulingRule interface: support for determining which jobs can run concurrently.
      - WorkspaceJob and two IWorkspace#run() methods: support for batching of delta change notifications.
      - Background auto-build: running of incremental build at a time when no other running operations are affecting resources.
      - ILock interface: support for deadlock detection and recovery.
      - Job properties for configuring user feedback for jobs run in the background.

    The rest of this article provides examples of how to use the above-mentioned facilities...

    In regards to above API, is this an implementation of a particular design pattern? Which one?
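
    For context, a minimal sketch of what using the Job class from that API typically looks like (standard org.eclipse.core.runtime.jobs classes; the work inside run() is illustrative):

        import org.eclipse.core.runtime.IProgressMonitor;
        import org.eclipse.core.runtime.IStatus;
        import org.eclipse.core.runtime.Status;
        import org.eclipse.core.runtime.jobs.Job;

        public class IndexingJobExample {
            public static void scheduleIndexing() {
                Job job = new Job("Indexing files") {            // name shown to the user
                    @Override
                    protected IStatus run(IProgressMonitor monitor) {
                        // long-running work goes here, on a background thread
                        if (monitor.isCanceled()) {
                            return Status.CANCEL_STATUS;
                        }
                        return Status.OK_STATUS;
                    }
                };
                job.setPriority(Job.LONG);  // hint: long-running, low-priority background work
                job.schedule();             // queued by the platform job manager
            }
        }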

  • Why is there a "new" keyword in Go?

    - by dystroy
    I'm still puzzled as to why we have new in Go. &Thing{} is as clear and concise as new(Thing) to Go coders, and it uses only constructs you often use elsewhere. It's also more extensible, as you might change it to &Thing{3} or &Thing{Feets:7}. In my opinion, having a supplementary keyword is costly: it makes the language more complex and adds to what you must know. And it might mask from newcomers what's behind instantiating a struct. It also makes one more reserved word. So what's the reasoning behind new? Is it something useful? Should we use it?

  • How to learn to draw UML sequence diagrams

    - by PeterT
    How can I learn to draw UML sequence diagrams? Even though I don't use UML much, I find that type of diagram quite expressive and want to learn how to draw them. I don't plan to use them to visualise large chunks of code, hence I would like to avoid using tools and learn how to draw them with just pen and paper. Muscle memory is good. I guess I would need to learn some basics of the notation first, and then just practice, like in "take a piece of code, draw a seq. diagram visualising the code, then generate the diagram using some tool/website, then compare what I'd drawn to the tool's result. Find the differences, correct them, repeat." Where do I start? Can you recommend a book or a web site or something else?

  • Need recommendation for transferring ASP.NET MVC skills to PHP

    - by Tuck
    I am looking to translate my skills in .NET to PHP - specifically in regards to ASP.NET MVC. At work I am currently using .NET MVC 2.0 on a variety of projects and thoroughly enjoy the platform. Specifically, I enjoy the very minimal configuration required to get a project up and running (just create the project, define routes, and start coding), as well as the ability for controller actions to return different items (i.e. ActionResult, JsonResult). Another piece I really like is the way the view/model interaction can be handled. For example, I like being able to call return View(model) and having a view page (.aspx) load with the full model object available to the view, regardless of the model type. I'm looking for a PHP implementation of MVC that is the most similar to what I am already familiar with. I don't need anything apart from the MVC functionality. I've looked at Zend, Symfony, CodeIgniter, etc. and, while they look like they'll be fun to play with in the future, they provide much more functionality than I need. I'd prefer to write my own DAL, form helpers, delegate handlers, authentication/ACL pieces, etc. In short, I just need something to handle the routing and view interactions, and I will worry about the model implementation myself. Can someone please point me to some lightweight code that accomplishes or comes close to accomplishing my objectives above? Or, can someone identify just the portions of a larger framework that do the same (again, I'm not currently interested in implementing something on a big framework, just the MVC portion, and I want to implement the model portion myself as much as possible)? Thanks in advance.

  • Efficient way to sort large set of numbers

    - by 7Aces
    I have to sort a set of 100000 integers as part of a programming question. The time limit is pretty restrictive, so I have to use the most time-efficient approach possible. My current code -

        #include<cstdio>
        #include<algorithm>
        using namespace std;
        int main()
        {
            int n,d[100000],i;
            for(i=0;i<n;++i)
            {
                scanf("%d",&d[i]);
            }
            sort(d,d+n);
            ....
        }

    Would this approach be more efficient?

        int main()
        {
            int n,d[100000],i;
            for(i=0;i<n;++i)
            {
                scanf("%d",&d[i]);
                sort(d,d+i+1);
            }
            ....
        }

    What is the most efficient way to sort a large dataset? Note - Not homework...
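
    As a point of comparison, the read-everything-then-sort-once approach is O(n log n), whereas calling sort inside the input loop multiplies that by roughly another factor of n. A minimal Java sketch of the first approach (the input format, a count followed by the values, is an assumption):

        import java.util.Arrays;
        import java.util.Scanner;

        public class SortLargeInput {
            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                int n = in.nextInt();          // assumed: the count comes first
                int[] d = new int[n];
                for (int i = 0; i < n; i++) {
                    d[i] = in.nextInt();       // read everything first
                }
                Arrays.sort(d);                // then sort once: O(n log n)
                System.out.println(Arrays.toString(d));
            }
        }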

  • Where does the "mm" come from in GTKmm, glibmm, etc

    - by Cole Johnson
    I understand that the "mm" suffix [in various GTK-associated C++ binding libraries] means "minus minus," but where exactly does it come from? I understand that there is a programming language called "C--," but if there were bindings (and I'm pretty sure I've seen some), they would be suffixed "--". TL;DR: Is there some page on gnu.org that explains the "mm" suffix in various C++ bindings or is it just a de facto standard adopted by the open source community with no reasoning behind it?

  • Writing cross-platform Types, Interfaces and Classes/Methods in C++

    - by user827992
    I'm looking for the best solution for writing cross-platform software, i.e. code that I write and that I have to interface with different libraries and platforms each time. What I consider the easiest part, correct me if I'm wrong, is the definition of new types: all I have to do is write an hpp file with a list of typedefs. I can keep the same names for each new type across the different platforms, so my codebase can be shared without any problem. typedefs also help me redefine a better scope for my types in my code. I will probably end up having something like this:

        include
        |-windows
        | |-types.hpp
        |-linux
        | |-types.hpp
        |-mac
          |-types.hpp

    For the interfaces I'm thinking about the same solution used for the types, a series of hpp files. Probably I will write all the interfaces only once, since they rely on the types, and all "cross-platform portability" is ensured by the work done on the types.

        include
        |
        |-interfaces.hpp
        |
        |-windows
        | |-types.hpp
        |-linux
        | |-types.hpp
        |-mac
        | |-types.hpp

    For classes and methods I do not have a real answer; I would like to avoid 2 things:

      - the explicit use of pointers
      - the use of templates

    I want to avoid the use of pointers because they can make the code less readable for someone, and I want to avoid templates just because if I write them, I can't separate the interface from the definition. What is the best option to hide the use of the pointers? I would also like some words about macros and how to implement some OS-specific calls and definitions.

  • Integrating application ad-support - best practice

    - by Jarede
    Considering the review that came out in March: researchers from Purdue University in collaboration with Microsoft claim that third-party advertising in free smartphone apps can be responsible for as much as 65 percent to 75 percent of an app's energy consumption. Is there a best practice for integrating advert support into mobile applications, so as to not drain the user's battery too much? When you fire up Angry Birds on your Android phone, the researchers found that the core gaming component only consumes about 18 percent of total app energy. The biggest battery suck comes from the software powering third-party ads and analytics, accounting for 45 percent of total app energy, according to the study. Has anyone found better ways of keeping away from the "3G Tail", as the report puts it? Is it better/possible to download a large set of adverts that are cached for a few hours, and use them to populate your ad space, to avoid constant use of the wifi/3G radios? Are there any best practices for the inclusion of adverts in mobile apps?
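
    A minimal Java sketch of the batching idea floated above: fetch a handful of ad payloads once, keep them with a timestamp, and only hit the network again after the cache expires (the fetch call and payload type are hypothetical placeholders, not a real ad SDK):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.TimeUnit;

        // Hypothetical cached-ad holder; a real payload would come from one batched network call.
        class AdCache {
            private static final long TTL_MILLIS = TimeUnit.HOURS.toMillis(3);

            private final List<String> cachedAds = new ArrayList<>();
            private long fetchedAt = 0L;

            // Returns an ad to display, refreshing the batch only when the cache is stale.
            String nextAd(int slot) {
                long now = System.currentTimeMillis();
                if (cachedAds.isEmpty() || now - fetchedAt > TTL_MILLIS) {
                    cachedAds.clear();
                    cachedAds.addAll(fetchAdBatchFromNetwork());  // one radio wake-up, many impressions
                    fetchedAt = now;
                }
                return cachedAds.get(slot % cachedAds.size());
            }

            // Placeholder for the single batched network call.
            private List<String> fetchAdBatchFromNetwork() {
                List<String> batch = new ArrayList<>();
                batch.add("ad-payload-1");
                batch.add("ad-payload-2");
                return batch;
            }
        }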

  • Organizing Business and Presentation entities

    - by simoneL
    Background

    I am developing a WPF project. This is the basic structure:

      - User Interface (WPF project);
      - Interfaces (class library, contains all the interfaces and the entities used by the application);
      - Modules (every module contains the logic of a specific argument, e.g. File Management, and can eventually contain WPF user controls).

    In the WPF controls, to facilitate the binding operations I have created a BaseViewModel class which contains a Raise method that automates the binding mechanism (for further details, I used a technique similar to the one described in this article).

    The problem

    Understanding the best way to separate the presentation form from the business form in the entity classes.

    The case

    In the Interfaces project I have, for instance, the class User:

        public class User
        {
            public virtual string Name { get; set; }
            // Other properties
        }

    In one of the modules I need to use the User class and to bind its properties to the user interface controls. To do so I have to use a custom implementation of the get and set keywords. At first, I thought of creating a class in the module called, for instance, ClientUser and overriding the properties that I need:

        public class ClientUser : User
        {
            private string name;

            public override string Name
            {
                get { return name; }
                set { Raise(out name, value); }
            }

            // Other properties
        }

    The problem is the Raise method, which is declared in the BaseViewModel class, but due to C#'s single inheritance constraint, I can't inherit from both classes. Which is the right way to implement this architecture?

  • null pointers vs. Null Object Pattern

    - by GlenH7
    Attribution: This grew out of a related P.SE question. My background is in C / C++, but I have worked a fair amount in Java and am currently coding C#. Because of my C background, checking passed and returned pointers is second nature, but I acknowledge it biases my point of view. I recently saw mention of the Null Object Pattern, where the idea is that an object is always returned: the normal case returns the expected, populated object, and the error case returns an empty object instead of a null pointer. The premise is that the calling function will always have some sort of object to access and can therefore avoid null-access memory violations. So what are the pros / cons of a null check versus using the Null Object Pattern? I can see cleaner calling code with the NOP, but I can also see where it would create hidden failures that don't otherwise get raised. I would rather have my application fail hard (aka an exception) while I'm developing it than have a silent mistake escape into the wild. Can't the Null Object Pattern have similar problems as not performing a null check? Many of the objects I have worked with hold objects or containers of their own. It seems like I would have to have a special case to guarantee all of the main object's containers had empty objects of their own. Seems like this could get ugly with multiple layers of nesting.
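
    For reference, a minimal Java sketch of the pattern as described: the lookup never returns null, and the "null" variant is an inert stand-in (the Customer type and names are made up for illustration):

        interface Customer {
            String name();
            boolean isNull();
        }

        class RealCustomer implements Customer {
            private final String name;
            RealCustomer(String name) { this.name = name; }
            public String name() { return name; }
            public boolean isNull() { return false; }
        }

        // The "empty" object: safe to call, does nothing surprising.
        class NullCustomer implements Customer {
            public String name() { return "(no customer)"; }
            public boolean isNull() { return true; }
        }

        class CustomerRepository {
            Customer findByName(String name) {
                // Lookup elided; on a miss, return the null object instead of null.
                return new NullCustomer();
            }
        }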

  • Good PHP books for starters, any recommendations?

    - by Goma
    I started reading some PHP books. Most of them say in their introduction that the book, unlike other books, follows good habits and practices. Now, I do not know which book tells the truth and which writer is the most experienced in PHP. These are the books whose first chapter I had a quick look at:

      - PHP and MySQL Web Development (Developer's Library) by Luke Welling and Laura Thomson.
      - Build Your Own Database Driven Web Site Using PHP & MySQL by Kevin Yank.
      - PHP and MySQL for Dummies by Janet Valade.

    Now it's your turn to advise me and tell me about the excellent one that follows best practices; please give advice from your experience. (It could be any other book!) Regards,

  • Why a static main method in Java and C#, rather than a constructor?

    - by Konrad Rudolph
    Why did (notably) Java and C# decide to have a static method as their entry point, rather than representing an application instance by an instance of an Application class, with the entry point being an appropriate constructor, which, at least to me, seems more natural? I'm interested in a definitive answer from a primary or secondary source, not mere speculations. This has been asked before. Unfortunately, the existing answers are merely begging the question. In particular, the following answers don't satisfy me, as I deem them incorrect:

      - There would be ambiguity if the constructor were overloaded. - In fact, C# (as well as C and C++) allows different signatures for Main, so the same potential ambiguity exists, and is dealt with.
      - A static method means no objects can be instantiated before, so the order of initialisation is clear. - This is just factually wrong; some objects are instantiated before (e.g. in a static constructor).
      - So they can be invoked by the runtime without having to instantiate a parent object. - This is no answer at all.

    Just to justify further why I think this is a valid and interesting question:

      - Many frameworks do use classes to represent applications, and constructors as entry points. For instance, the VB.NET application framework uses a dedicated main dialog (and its constructor) as the entry point [1].
      - Neither Java nor C# technically needs a main method. Well, C# needs one to compile, but Java not even that. And in neither case is it needed for execution. So this doesn't appear to be a technical restriction.
      - And, as I mentioned in the first paragraph, for a mere convention it seems oddly unfitting with the general design principle of Java and C#.

    To be clear, there isn't a specific disadvantage to having a static main method, it's just distinctly odd, which made me wonder if there was some technical rationale behind it. I'm interested in a definitive answer from a primary or secondary source, not mere speculations.

    [1] Although there is a callback (Startup) which may intercept this.
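
    To make the two shapes being compared concrete, here is a minimal Java sketch of the conventional static entry point next to the constructor-as-entry-point style the question has in mind (the Application class is hypothetical):

        // Conventional form: the runtime calls a static method on a class.
        public class MainEntryPoint {
            public static void main(String[] args) {
                // common pattern: immediately hand off to an instance
                new Application(args).run();
            }
        }

        // The alternative the question envisions: the runtime would construct
        // an application object directly, making the constructor the entry point.
        class Application {
            private final String[] args;

            Application(String[] args) {
                this.args = args;
            }

            void run() {
                System.out.println("started with " + args.length + " argument(s)");
            }
        }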

  • Where can I find out the following info on python (coming from Ruby)

    - by Michael Durrant
    I'm coming from Ruby and Ruby on Rails to Python. Where can I find resources about:

      - the command prompt - what is Python's version of 'irb'
      - Django - what is a good resource for installing, using, etc.
      - pythoncasts... is there anything like railscasts, i.e. good video tutorials
      - web sites with the API
      - info about which version has what and which to use
      - info and recommendations on editors, plugins and IDEs
      - common gotchas for newbies and good things to know at the outset
      - scaling issues, common reasons
      - what is the equivalent of 'gems', i.e. components I can plug in
      - what are popular plugins for Django authentication and forms, similar to devise and simple_form
      - testing - what's available, anything similar to rspec?
      - database adapters - any preferences?
      - framework info - is Django MVC like Rails?
      - OO'yness - is everything an object that gets sent messages? Different paradigm?
      - syntax - anything like jslint for checking for well-formed code?

  • Java Cryptography Extension

    - by Adam Tannon
    I was told that in order to support AES256 encryption inside my Java app I would need the JCE with Unlimited Strength Jurisdiction Policy Files. I downloaded this from Oracle and unzipped it, and I'm only seeing 2 JARs: local_policy.jar and US_export_policy.jar. I just want to confirm I'm not missing anything here! My understanding (after reading the README.txt) is that I just drop these two into my <JAVA_HOME>/lib/security/ directory and they should be installed. By the names of these JARs I have to assume that it's not the Java crypto API that cannot handle AES256, but that it's in fact a legal issue, perhaps? And that these two JARs basically tell the JRE "yes, it's legally acceptable to run this level of crypto (AES256)." Am I correct or off-base?
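
    A quick way to verify the policy files took effect, assuming a standard JRE, is to ask the JCE for the maximum permitted AES key length; a minimal sketch:

        import javax.crypto.Cipher;

        public class CheckUnlimitedStrength {
            public static void main(String[] args) throws Exception {
                int maxKeyLen = Cipher.getMaxAllowedKeyLength("AES");
                // With only the default (limited) policy this typically reports 128;
                // with the unlimited-strength policy files installed it reports a much larger value.
                System.out.println("Max allowed AES key length: " + maxKeyLen);
                System.out.println("AES256 available: " + (maxKeyLen >= 256));
            }
        }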

  • Is there any way to test how the site will perform under load

    - by Pankaj Upadhyay
    I have made an ASP.NET MVC website and hosted it on a shared hosting provider. Since my website is built around a very generic idea, it might have a number of concurrent users sometime in the future. So, I was thinking of a way to test my website's performance under load. Like how the site will perform when 100 or 1000 users are online at the same time and surfing the website. This will also help me understand whether my LINQ queries are well written or not.
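
    A crude way to get a first impression, if no dedicated load-testing tool is at hand, is simply to fire many concurrent requests and time them; a minimal Java sketch (the URL and user count are placeholders):

        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicLong;

        public class SimpleLoadTest {
            public static void main(String[] args) throws Exception {
                final String target = "http://example.com/";   // placeholder URL
                final int concurrentUsers = 100;
                final AtomicLong totalMillis = new AtomicLong();

                ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
                for (int i = 0; i < concurrentUsers; i++) {
                    pool.submit(() -> {
                        long start = System.currentTimeMillis();
                        try {
                            HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
                            conn.getResponseCode();             // waits for the response
                            conn.disconnect();
                        } catch (Exception e) {
                            System.err.println("request failed: " + e.getMessage());
                        }
                        totalMillis.addAndGet(System.currentTimeMillis() - start);
                    });
                }
                pool.shutdown();
                pool.awaitTermination(5, TimeUnit.MINUTES);
                System.out.println("average response time (ms): " + totalMillis.get() / concurrentUsers);
            }
        }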

  • AJAX event prevents other page actions

    - by cobaltduck
    Here's a fairly average scenario, using JSF as an example, but I have observed this same concept in ASP.NET, Apache Wicket, and other frameworks with ajax capabilities.

        <h:inputText id="text1" value="#{myBacker.myBean.myStringVar}" styleClass="goodCSS">
          <f:ajax event="change" listener="#{myBacker.text1ChangeEventMethod}" update="someOtherField" />
        </h:inputText>
        <h:selectBooleanCheckbox id="check1" value="#{myBacker.myBean.myBoolVar}" />

    Let's suppose that the 'text1ChangeEventListener' is essential to 'someOtherField' and perhaps toggles its disabled attribute, or changes its available options, based on the value of 'myStringVar.' The particulars aren't important; let's just accept that for some reason we need an ajax call when the 'text1' value is changed.

    So Jane User is working her way down the form. She arrives at the 'text1' field and types some value. The cursor focus is still in the text field as she moves her mouse to the 'check1' box and clicks. It appears to her that nothing has happened. She clicks again, and this time the checkbox highlights and the icon indicating a selection appears in the box. Jane has to do several entries in the form today, sees this happen every time, and it becomes very frustrating for her.

    Likewise, Jeff Admin is also perusing this form, and begins to type in 'text1.' He then realizes he doesn't really want to enter this data, and so moves his mouse to the "cancel" button elsewhere on the page, and clicks. Nothing seems to happen. Jeff clicks again, and after confirming he really does want to cancel, is returned to the home page. Jeff scratches his head.

    The problem is simply that the first thing the system does after 'text1' loses focus is run the listener and perform the ajax operation. It may only take a fraction of a second, but still: you can click other buttons all you want, and until that ajax has finished, everything else is ignored.

    I've spent the morning searching and reading, and it seems no one else has even noticed this. I could find not one article, blog, past question here or at SO, or anything that addresses this obvious and glaring deficiency in ajax. So first of all, am I truly alone in thinking this is a big problem? Second, does anyone have a solution?

  • Website development from scratch vs. web framework [duplicate]

    - by Ali
    This question already has an answer here: What should every programmer know about web development? (1 answer) Do people develop websites from scratch when there are no particular requirements, or do they just pick up an existing web framework like Drupal, Joomla, WordPress, etc.? The requirements are almost similar in most cases: if personal, it will be a blog or image gallery; if corporate, it will be information pages that can be updated dynamically, along with a news section. And similarly, there are other requirements which can be fulfilled by WordPress, Joomla or Drupal. So, is it advisable to develop a website from scratch, and why? Update: to explain more, as I got a comment from @Raynos (thanks for the comment and for helping me clarify the question), the question is about:

      - Should web sites be developed and designed fully from scratch?
      - Should they be done using a framework like Spring, Zend, CakePHP?
      - Should they be done using a CMS like Joomla, WordPress, Drupal (people in the east are using these as frameworks)?

  • Does Open Source lead to bad coding?

    - by David Conde
    I have a thought that I tried asking about at SO, but it didn't seem like the appropriate place. I think that source sites like Google Code, GitHub, SourceForge... have played a major role in the history of programming. However, I found that there is another, bad side to these kinds of sites, and that is that you may just "copy" code from almost anyone, not knowing if it is good (tested) source or not. This line of thought has led me to believe that source code websites tend to lead many developers (most likely inexperienced ones) to copy/paste massive amounts of code, which I find just wrong. I really don't know how to focus the question well, but the basic thought would be: Is this ok? Is Open Source contributing to that, or am I just seeing ghosts... Hope people get interested, because I think this is an important theme.
