Search Results

Search found 8219 results on 329 pages for 'less'.

Page 153 of 329

  • Should I expect my peers to read or practice on a regular basis? [closed]

    - by Joshua Smith
    I've been debating asking this question for some time. Based on several of the comments I read in this question, I decided I had to ask. This feels like I'm stating the obvious, but I believe that regular reading (of books, blogs, StackOverflow, whatever) and/or practice are required just to stay current (let alone excel) in whichever stack you use to pay the bills, not to mention playing with things outside your comfort zone to learn new ways of doing things. Yet, I virtually never see this from many of my peers. Even when I go out of my way to point out useful (and almost always free) learning material, I quite often get a sense of total apathy from those I'm speaking to. I'd even go so far as to say that if someone doesn't try to improve (or at least stay current), they'll atrophy as technology advances and actually become less useful to the company. I don't expect people to spend hours a day studying or practicing. I have two young kids and hours of practice simply aren't feasible. Still, I find some time; perhaps on the train, at lunch, in bed for a few minutes, whatever. I'm willing to believe this is arrogance or naivete on my part, but I'd like to hear what the community has to say. So here's my question: Should I expect (and encourage) the same from my peers, or just keep my mouth shut and do my own thing?

    Read the article

  • Move unity launcher to bottom of the screen

    - by argvar
    I have the Ubuntu 13.04 DESKTOP version, and for some odd reason I'm told that the Unity launcher cannot be moved to the bottom of the screen for several reasons:
    1. Canonical wants it there so it fits with their overall design goals, namely when it comes to touchscreen devices and netbooks. This in my mind totally ignores the fact that most Ubuntu users are DESKTOP users. No matter what Canonical's long-term goal is, it surely mustn't come at the expense of the needs of their core user base.
    2. Most monitors are widescreen, so the launcher is more compact where it is. This not only takes away the user's choice, but is also a wrong assessment. Widescreen monitors can sometimes be rotated on a pivot, giving them a portrait aspect. By displaying the Unity launcher on the left side it takes up a lot of space. Many desktop users have multiple monitors, and having the launcher on the left side of each monitor is very awkward. Also, many websites are designed to fit in half of a 1920-pixel display, so you can have two browser windows open side by side with all content visible. The placement of the Unity launcher takes away horizontal space, meaning there's less room for each browser window, and you'll see the right side of the web pages being occluded.
    Any suggestions to simply hide the Unity launcher, or that "Canonical knows best", or to "get used to it", are unwelcome and totally ignore the above points. Linux is about choice. Canonical's stubbornness about the Unity launcher placement is inconsistent with what Linux is about.

    Read the article

  • Create a system image in Windows 8

    - by Greg Low
    One of the things that I've just come to accept is that the designers of Windows 8 and I think very differently. It'll take a long time to convince me that shutting down the computer is a "setting". Even after using Windows 8 for quite a while now, I still find that I struggle nearly every day, just trying to do things that I previously knew how to do. That's just not a good thing. Today I decided to create a system image as I hadn't made one lately. I started in Control Panel looking for backup options. That yielded nothing except programs that wanted to "Save backup copies of my files with file history". I thought "oh well, let's just try the new search options". I hit the Windows key and typed "Backup". No, nothing came up there either. I searched again all over the Control Panel options to no avail. So it was time to hit Google again. Once again, clearly lots of people used to know how to do this and have been trying to work out where this option went. The first trick is that there are a bunch of Control Panel options that don't appear in the Control Panel. In the address bar at the top, if you click on Control Panel, you'll find there is an option that says "All Control Panel Options". That is curious given that's where I thought I was when I opened Control Panel. No hint is given on that screen that there are a bunch of hidden options. Nonetheless, I then checked out "all" the options. The option that you need to create a system image in Windows 8 turns out to be the "Windows 7 File Recovery" option that appears in this extended list. Why does it say "Windows 7" when it's for "Windows 8" as well and I'm running "Windows 8"? Why do I have to choose an option that says "File Recovery" to create a system image backup? <sigh> But at least I've recorded it here for the next time I forget where to find it.

    Read the article

  • Importing an existing project into Git

    - by Andy
    Background: During the course of developing our site (ASP.NET), we discovered that our existing source control (SourceGear Vault) wasn't working for us. So, we decided to migrate to Git. The transition has been less than smooth though. Our site is broken up into three environments: DEV, QA, and PROD. For the most part, DEV and the source control repo have been in sync with each other. There is one branch in the repo; if a page was going to be moved up to QA then the file was moved manually, same thing with stuff that was ready for PROD. So, our current QA and PROD environments do not correspond to any particular commit in the master branch. Clarification: the QA and PROD branches are not currently, nor have they ever been, in source control. The question: How do I move QA and PROD into Git? Should I forget about the history we've maintained up to this point and start over with a new repo? I could start with everything on PROD, then make a branch and pull in everything from QA, and then make another branch off of that with DEV. That way not only will the branches reflect the differences in the environments, they'll be in the right order chronologically, with the newest commits in the DEV branch. What I've tried so far: I thought about creating a QA branch off of the current master and using robocopy to make the working folder look like the current QA environment. This doesn't work because the new commit from QA will remove new files from DEV, and that will remove them when we merge up. I suspect there will be similar problems if I started QA at an earlier (though not exact) commit from DEV.

    Read the article

  • Project Jigsaw: On the next train

    - by Mark Reinhold
    I recently proposed to defer Project Jigsaw from Java 8 to Java 9. Feedback on the proposal was about evenly divided as to whether Java 8 should be delayed for Jigsaw, Jigsaw should be deferred to Java 9, or some other, usually less-realistic, option should be taken. The ultimate decision rested, of course, with the Java SE 8 (JSR 337) Expert Group. After due consideration, a strong majority of the EG agreed to my proposal. In light of this decision we can still make progress in Java 8 toward the convergence of the higher-end Java ME Platforms with Java SE. I previously suggested that we consider defining a small number of Profiles which would allow compact configurations of the SE Platform to be built and deployed. JEP 161 lays out a specific initial proposal for such Profiles. There is also much useful work to be done in Java 8 toward the fully-modular platform in Java 9. Alan Bateman has submitted JEP 162, which proposes some changes in Java 8 to smooth the eventual transition to modules, to provide new tools to help developers prepare for modularity, and to deprecate and then, in Java 9, actually remove certain API elements that are a significant impediment to modularization. Thanks to everyone who responded to the proposal with comments and questions. As I wrote initially, deferring Jigsaw to a Java 9 release in 2015 is by no means a pleasant decision. It does, however, still appear to be the best available option, and it is now the plan of record.

    Read the article

  • I am not the most logically-organized person. Do I have any chance at being a good 'low-level' programmer?

    - by user217902
    Background: I am entering college next year. I really enjoy making stuff and solving logical problems, so I'm thinking of majoring in compsci and working in software development. I hope to have the kind of job where I can work with implementing / improving algorithms and data structures on a regular basis.. as opposed to, say, a job that's purely concerned with mashing different libraries together, or 'finding the right APIs for the job'. (Hence the word 'low-level' in the title. No, I don't wish to write assembly all day.) Thing is, I've never been the most logically sharp person. Thus far I have only worked on hobby projects, but I find that I make the silliest of errors every so often, and it can take me ages to find them. Like anywhere from three hours to a day to locate a simple segfault, off-by-one error, or other logical mistake. (Of course, I do other things in the meantime, like browsing SO, reddit, and the like..) It's not like I'm 'new' to programming either; I first tried C++ maybe five years ago. My question is: is this normal? Should a programmer with any talent solve it in less time? Having read Spolsky's Smart and Gets Things Done, where he talks about the large variance in programming speed, am I near the bottom of the curve, and therefore destined to work at companies that cannot afford to hire quality programmers? I'd like to think that conceptually I'm okay -- I can grasp algorithms and concepts pretty well, and I do fine in math and science, although I probably drop signs in my equations more often than the next guy. Still, grokking concepts makes me happy, and is the reason why I want to work with algorithms. I'm hoping to hear from those of you with real-world programming experience. TL;DR: I make many careless mistakes; should I not consider programming as a career?

    Read the article

  • Spring Cleaning

    - by Tim Dexter
    I recently got a shiny new laptop; moving my shiz from old to new was not the nightmare it used to be. I have gotten into the habit of using a second hard drive in the media bay where the CDROM normally sits. That drive contains my life's work with BIP. I can pull it out and plug it into another machine very easily. I have been sorting through some old directories and files, archiving some, sharing others with colleagues. For instance, a little dated, but if you were looking for a list of Publisher reports available in EBS R12.1, here it is. I'm trying to track down a more recent R12 instance and will re-post the document. I also found another gem; it's a little out there in terms of usefulness, but I'm sharing it nonetheless. You can embed, or locally or remotely reference, SVG graphics (in XML format) and bring the images into BIP outputs. Template and sample data here. A nice set of templates showing page number control and page suppression - they will need some explanation, so I'll save them for another post. The list goes on but I'll save them for later. Back to the clean up!

    Read the article

  • FP for simulation and modelling

    - by heaptobesquare
    I'm about to start a simulation/modelling project. I already know that OOP is used for this kind of project. However, studying Haskell made me consider using the FP paradigm for modelling a system of components. Let me elaborate: let's say I have a component of type A, characterised by a set of data (a parameter like temperature or pressure, a PDE and some boundary conditions, etc.) and a component of type B, characterised by a different set of data (different or same parameter, different PDE and boundary conditions). Let's also assume that the functions/methods that are going to be applied on each component are the same (a Galerkin method, for example). If I were to use an OOP approach, I would create two objects that would encapsulate each type's data, the methods for solving the PDE (inheritance would be used here for code reuse) and the solution to the PDE. On the other hand, if I were to use an FP approach, each component would be broken down into data parts and the functions that would act upon the data in order to get the solution for the PDE. This approach seems simpler to me, assuming that linear operations on data would be trivial and that the parameters are constant. What if the parameters are not constant (for example, temperature increases suddenly and therefore cannot be immutable)? In OOP, the object's (mutable) state can be used. I know that Haskell has monads for that. To conclude, would implementing the FP approach actually be simpler, less time-consuming and easier to manage (adding a different type of component or a new method to solve the PDE) compared to the OOP one? I come from a C++/Fortran background, plus I'm not a professional programmer, so correct me on anything that I've got wrong.
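
    To make the comparison concrete, here is a minimal, purely illustrative C# sketch of the FP-style decomposition described above: immutable component data plus pure functions over it. The Component record, the Solvers class and their members are invented names for the sake of the sketch, not anything from a real framework:

        using System;
        using System.Linq;

        // Immutable component data: "changing" the temperature means building a new value.
        public sealed record Component(string Name, double Parameter, double[] BoundaryValues);

        public static class Solvers
        {
            // A pure solver: the same inputs always produce the same output, so components
            // of type A and type B can share it as long as their data has the same shape.
            public static double[] Solve(Component c, Func<double, double> pde)
                => c.BoundaryValues.Select(v => c.Parameter * pde(v)).ToArray();

            // Non-constant parameters are handled by producing updated values
            // rather than mutating state in place.
            public static Component WithParameter(Component c, double newValue)
                => c with { Parameter = newValue };
        }

    Whether this stays simpler than the OOP version depends mostly on how much mutable state the simulation genuinely needs to thread through the computation.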

    Read the article

  • Are specific types still necessary?

    - by MKO
    One thing that occurred to me the other day: are specific types still necessary, or are they a legacy that is holding us back? What I mean is: do we really need short, int, long, bigint, etc.? I understand the reasoning: variables/objects are kept in memory, memory needs to be allocated, and therefore we need to know how big a variable can be. But really, shouldn't a modern programming language be able to handle "adaptive types"? I.e., if something is only ever allocated in the shortint range it uses fewer bytes, and if something is suddenly allocated a very big number the memory is allocated accordingly for that particular instance. Floats, reals and doubles are a bit trickier since the type depends on what precision you need. Strings, however, should be able to take up less memory in many instances (in .NET) where mostly ASCII is used, but strings always take up double the memory because of Unicode encoding. One argument for specific types might be that they're part of the specification; i.e. a variable should not be able to be bigger than a certain value, so we set it to shortint. But why not have type constraints instead? It would be much more flexible and powerful to be able to set permissible ranges and values on variables (and properties). I realize the immense problem in revamping the type architecture, since it's so tightly integrated with underlying hardware, and things like serialization might become tricky indeed. But from a programming perspective it should be great, no?
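
    For what it's worth, arbitrary-precision integers already behave like the "adaptive type" described above: storage grows with the value. A minimal C# demonstration using System.Numerics.BigInteger (a real type; the demo itself just illustrates the idea, and every operation pays for the flexibility):

        using System;
        using System.Numerics;

        class AdaptiveIntDemo
        {
            static void Main()
            {
                BigInteger n = 1;
                for (int i = 0; i < 100; i++)
                    n *= 10;                                // now far beyond any fixed-width integer

                Console.WriteLine(n);                       // 1 followed by 100 zeros
                Console.WriteLine(n.ToByteArray().Length);  // byte count grows with the value
            }
        }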

    Read the article

  • How to avoid "DO YOU HAZ TEH CODEZ" situations?

    - by volothamp
    I have a strange situation at work, where a colleague of mine often asks me and other co-workers for working code. I would like to help him, but this constant request for trivial snippets interrupts my thoughts and sometimes makes it hard to concentrate. Plus, I have the impression (...) that these requests are generated by lack of competence more than by laziness. In fact, he often asks things pretending to know the answer, since when I solve the problem he usually says things like "Sure", "Yes, that's what I thought", giving me the impression that my answer isn't worth much. How can I solve this embarrassing situation? Should I point out his lack of knowledge more explicitly in front of other colleagues (by saying things like: "do it yourself if you can, please") or continue giving him what he wants? I think that he should aggregate all his requests into one, so that I can give him a portion of my time and he can work all by himself on his things. There is no hierarchy in the team; I must say we both have similar seniority, five years more or less. For the same reason I believe I cannot report to management, since trivial questions are often ignored. I discussed it with two other members and they agree with me: in fact he often asks things cycling through colleagues.

    Read the article

  • Deploying InfoPath forms – idiosyncrasies

    - by PointsToShare
    Well, I have written a sophisticated PowerShell script to expedite the deployment of InfoPath forms - .XSN files. Along the way, by way of trial and error (mostly error and error), I discovered a few little things. Here they are.
    • Regardless of how the install command is run – PowerShell or the GUI in Central Admin – SharePoint wraps the XSN inside a solution – a WSP – then installs and deploys the solution.
    • The solution is named by concatenating "form-" with the first 16 characters of the file name (or fewer if the file name is shorter than 16), with the required .wsp at the end. So if the form name was MyInfopathForm.xsn the solution name will be form-MyInfopathForm.wsp, but for WithdrawalOfRequestsForRefund.xsn it will be named form-WithdrawalOfRequ.wsp.
    • It only gets worse! Had there already been a solution file with the same name, Microsoft appends a three-digit number to the name, like MyInfopathForm-123.wsp. Remember a digit is a finger, I suspect a middle finger, so when you deploy the same form – many versions of it, or as it was in my case, testing a script time and again – you'll end up with many such digit (middle finger) appended solutions, all un-deployed except the last one. This is not a bug. It's a feature!
    Well, there are ways around it. When deploying by hand, remove the solution from the solution store before deploying the form again. In the script I do the same thing. And finally, an important caveat: make sure that all your form names are unique in the first 16 characters. If you also have a form with the name WithdrawalOfRequestForRelief.xsn, you're in trouble! That's all folks!

    Read the article

  • Advantages of Scala vs. Groovy with Java EE 6 applications

    - by JAVA EE Wannabe
    Please let me first emphasize that I am not looking for flame wars. I just want advice from people who have real experience. I started learning Java EE 6 as a real newbie and am having a hard time choosing which tools to use. The first problem is: what is the advantage of using Scala or Groovy with Java EE 6 apps over plain Java? I've seen on some blogs people mentioning that you're going to write less code, but as a newbie I don't know what the other advantages and disadvantages are. The second problem is NetBeans 6.9 or Eclipse Helios 3.6.1? I realized that with Eclipse I can easily mix EE 6 applications with Groovy or Scala without any problems (I only tried this by displaying a String message from Scala and Groovy classes). With NetBeans the only thing I can think of is having separate Java project libraries and using the jars in my web app. But I also realize, to the extent of my little knowledge, that NetBeans has better support for Java EE 6. I need your expert advice. Thanks.

    Read the article

  • Is knowing what happens 'behind the scenes' (in the compiler, external DLLs, etc.) important?

    - by I_Question_Things_Deeply
    I have been a computer fanatic for almost a decade now. I've always loved and wondered how computers work, from the purest, lowest hardware level to the very smallest pixel on the screen, and all the software around that. That seems to be my problem though... as I try to write code (I'm pretty fluent in C++) I always sit for enormous amounts of time in front of a text editor, wondering how every line, statement, datum, function, etc. will correspond to every Assembly and machine instruction performed to do absolutely everything necessary for the kernel to allocate memory to run my compiled program, and all of the other hardware being used as well. For example... I would write cout << "Before memory changed" << endl; and run the debugger to get the Assembly for this, and then try to disassemble the Assembly back to machine code based on my ISA, and then research every .dll, library file, linked library, linking process, the linker source code, the make file, the steps my kernel takes to process this compilation, and the hardware's part aside from the processor (e.g. video card, sound card, chipset, cache latency, byte-sized registers, calling conventions, DDR3 RAM and disk drive, filesystem functioning and so many other things). Am I going about programming wrong? I mean, I feel I should know everything that goes on underneath the English syntax of a computer program. But the problem is that the more I research every little thing, the less I actually accomplish. I can never finish anything because of this mentality, yet I feel compelled to know everything... what should I do?

    Read the article

  • Is there a product planning tool that has these specific features? [closed]

    - by acjohnson55
    I am working on a web startup in the early stages, and we are struggling a bit to manage the scope and scheduling of our product. We have loads of high-level features in the pipeline, but we need a good way of scheduling them for release iterations and breaking them into actual tasks that can be scheduled (that could be a separate tool, but integration would be preferred). I would say that our product can be pretty cleanly divided into "aspects", and we want to be able to separate features by the aspect to which they apply. Perhaps most importantly, it should be really simple to create and move features between target release points. We don't have physical space for a war room type setup, so whatever we settle upon should ideally have a cloud-type web interface. Right now, we're using Excel to make a grid of product aspects vs. target releases, and we store features at the intersections. But this is not providing a good way of indexing tasks to those features or being able to move them around. I would much rather have something that automates the grid overview. I'm less interested in something that helps with low-level scheduling than I am in something that is good at organizing the product plan at the long-term, high-level view. Is there a product planning tool out there that matches these specifications?

    Read the article

  • Best way to calculate unit deaths in browser game combat?

    - by MikeCruz13
    My browser game's combat system is written and mechanically functioning well. It's written in PHP and uses a SQL database. I'm happy with the unit balance in relation to one another. I am, however, a little worried about how I'm calculating unit deaths when one player attacks another, because the deaths seem to pile up a little fast for my taste. In this system, a battle doesn't just trigger, calculate a winner, and end. Instead, it is allowed to go on for several rounds (say one round every 15 minutes) until one side passes a threshold of being too strong for the other; players can send reinforcements between rounds. Each round, units pair up and attack each other. Essentially what I do is calculate the damage (AP = Attack Points, HP = Hit Points): damage = unit AP * quantity * random factors * other factors (such as attrition). I take that and divide by the defending unit's HP to find the number of casualties among the defending units. So, for example (simplified to take out some factors), if I have: 500 attackers with 50 AP vs. 1000 defenders with 100 HP = 250 deaths. I wonder if that last step could be handled better to stop the deaths piling up. Some ideas: Do I just give all the units more HP? Do I cap each attacking unit's AP at the defender's HP, so each attacker kills at most one unit? (Is that fair if I have fewer huge units vs. many small units?) Do I spread the damage around more by involving the defending unit's quantity, i.e. in that scenario some are dead and some are at 50% damage? (How would I track this every round?) Are there other, better mathematical approaches?
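
    One way to make the third idea concrete is to persist a per-stack damage remainder between rounds, so partial damage is neither lost nor rounded up into an extra death. A rough C# sketch of the arithmetic only (variable names invented, random and attrition factors omitted; the game itself is PHP, but the math carries over directly):

        using System;

        class CombatRound
        {
            static void Main()
            {
                double attackerCount = 500, attackerAP = 50;     // attacking stack
                int defenderCount = 1000; double defenderHP = 100;
                double carriedDamage = 0;                        // persisted per defending stack between rounds

                double damage = attackerCount * attackerAP + carriedDamage;
                int deaths = (int)Math.Min(defenderCount, Math.Floor(damage / defenderHP));
                carriedDamage = damage - deaths * defenderHP;    // the remainder survives into the next round
                defenderCount -= deaths;

                Console.WriteLine($"deaths: {deaths}, carried damage: {carriedDamage}");
            }
        }

    With the example numbers this still yields 250 deaths, but once AP is reduced or HP raised to slow the kill rate, the carried remainder keeps successive rounds consistent instead of discarding fractional damage.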

    Read the article

  • Oracle MDM Panel at OOW 12: Best practices, Lessons Learned and More...

    - by Mala Narasimharajan
    By Narayana Machiraju. We are less than two weeks out from the start of Oracle Open World 2012. The MDM team has built up a solid line-up of product and customer sessions for you to attend this year, in addition to the hands-on labs and numerous demonstration pods in Moscone West. This year we will be hosting a customer panel session dedicated to Oracle Customer Hub at Oracle Open World. An esteemed panel of Oracle Customer Hub customers in different industries - Credit Suisse, Allianz and Elsevier - will provide insight into the journey of Customer MDM, right from building a business case and MDM vision, to establishing and sustaining governance, implementation strategies and realizing the benefits. You will also hear about implementation challenges, phasing strategies and lessons learned from real-life experiences. If you are already implementing Customer MDM, or evaluating the benefits of MDM, and you would like to hear directly from our customers, then I highly recommend you attend this session: Customer MDM Panel: Discussion and Q&A on Implementation Best Practices, Data Quality, Data Governance and ROI. Wednesday October 3rd, 5:00PM - 6:00PM, Westin Market Street Hotel - Metropolitan 1. The MDM track at Oracle Open World covers a variety of topics related to MDM. In addition to the product management team presenting product updates and roadmap, we have several customer panels, conference sessions and customer round table sessions featuring a lot of marquee customers. You can see an overview of MDM sessions here. We hope to see you at Open World, and stay in touch via our future blogs.

    Read the article

  • I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?

    - by user2567
    I'm trying to understand the benefits of distributed version control systems (DVCS). I found Subversion Re-education and this article by Martin Fowler very useful. Mercurial and other DVCSs promote a new way of working on code, with changesets and local commits. They prevent merge hell and other collaboration issues. We are not affected by this, as I practice continuous integration, and working alone in a private branch is not an option unless we are experimenting. We use a branch for every major version, in which we fix bugs merged from the trunk. Mercurial allows you to have lieutenants. I understand this can be useful for very large projects like Linux, but I don't see the value in small and highly collaborative teams (5 to 7 people). Mercurial is faster and takes less disk space, and a full local copy allows faster log & diff operations. I'm not concerned by this either, as I haven't noticed speed or space problems with SVN, even with the very large projects I'm working on. I'm seeking your personal experiences and/or opinions from former SVN geeks, especially regarding the changesets concept and the overall performance boost you measured. UPDATE (12th Jan): I'm now convinced that it's worth a try. UPDATE (12th Jun): I kissed Mercurial and I liked it. The taste of his cherry local commits. I kissed Mercurial just to try it. I hope my SVN Server don't mind it. It felt so wrong. It felt so right. Don't mean I'm in love tonight. FINAL UPDATE (29th Jul): I had the privilege of reviewing Eric Sink's next book, called Version Control by Example. He finished convincing me. I'll go for Mercurial.

    Read the article

  • Dealing with numerous, simultaneous sounds in Unity

    - by luxchar
    I've written a custom class that creates a fixed number of audio sources. When a new sound is played, it goes through the class, which creates a queue of sounds that will be played during that frame. The sounds that are closer to the camera are given preference. If new sounds arrive in the next frame, I have a complex set of rules that determines how to replace the old ones. Ideally, "big" or "important" sounds should not be replaced by small ones. Sound replacement is necessary since the game can be fast-paced at times, and the system should try to play new sounds by replacing old ones. Otherwise, there can be "silent" moments when an old sound is about to stop playing and isn't replaced right away by a new sound. The drawback of replacing old sounds right away is that there is a harsh transition from the old sound clip to the new one. But I wonder if I could just remove that management logic altogether and create audio sources on the fly for new sounds. I could give "important" sounds more priority (closer to 0 in the corresponding property) as opposed to less important ones, and let Unity take care of culling out sound effects that exceed the channel limit. The only drawback is that it requires many heap allocations. I wonder what strategies people use here.
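
    For reference, the "let Unity cull" alternative maps onto AudioSource.priority, which runs from 0 (most important) to 256 (least important); Unity steals voices from low-priority sources when the channel limit is hit. A rough, illustrative helper (the class, method and importance mapping are invented for this sketch):

        using UnityEngine;

        public static class OneShotAudio
        {
            // importance01: 1 = most important sound, 0 = least important.
            public static void Play(AudioClip clip, Vector3 position, float importance01)
            {
                var go = new GameObject("OneShotAudio");    // the per-call heap allocation in question
                go.transform.position = position;

                var source = go.AddComponent<AudioSource>();
                source.clip = clip;
                source.priority = Mathf.RoundToInt(Mathf.Lerp(256f, 0f, importance01));
                source.spatialBlend = 1f;                   // fully 3D, so distance attenuates it
                source.Play();

                Object.Destroy(go, clip.length);            // tear down once the clip finishes
            }
        }

    A small pool of pre-created sources would avoid the per-call allocations while keeping the same priority-driven culling.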

    Read the article

  • Layers - Logical separation vs physical

    - by P.Brian.Mackey
    Some programmers recommend logical separation of layers over physical. For example, given a DL, this means we create a DL namespace, not a DL assembly. Benefits include: faster compilation time, simpler deployment, faster startup time for your program, and fewer assemblies to reference. I'm on a small team of 5 devs. We have over 50 assemblies to maintain. IMO this ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000... then 1 assembly must be easier than 100. Given technical limits, we should strive for < 5 assemblies. New assemblies are created out of technical need, not layer requirements. Developers are worried for a few reasons. A. People like to work in their own environment so they don't step on each other's toes. B. Microsoft tends to create new assemblies. E.g. ASP.NET has its own DLL; so does WinForms. C. Devs view this drive for a common assembly as a threat. Some team members have a tendency to change the common layer without regard for how it will impact dependencies. My personal view: I view A as silos, aka cowboy programming, and suggest we implement branching to create isolation. On C: first, that is a human problem and we shouldn't create technical workarounds for human behavior. Second, my goal is not to put everything in common. Rather, I want partitions to be made in namespaces, not assemblies. Having a shared assembly doesn't make everything common. I want the community to chime in and tell me if I've gone off my rocker. Is the drive for a single assembly, or my viewpoint generally, illogical or otherwise a bad idea?

    Read the article

  • Static / Shared Helper Functions vs Built-In Methods

    - by Nathan
    This is a simple question, but a design consideration that I often run across in my day-to-day development work. Let's say that you have a class that represents some kind of collection:
    Public Class ModifiedCustomerOrders
        Public Property Orders as List(Of ModifiedOrders)
    End Class
    Within this class you do all kinds of important work, such as combining many different information sources and, eventually, building the Modified Customer Orders. Now, you have different processes that consume this class, each of which needs a slightly different slice of the ModifiedCustomerOrders items. To enable this, you want to add filtering functionality. How do you go about this? Do you:
    1. Add filtering calls to the ModifiedCustomerOrders class so that you can say: MyOrdersClass.RemoveCanceledOrders()
    2. Create a Static / Shared "tooling" class that allows you to call: OrdersFilters.RemoveCanceledOrders(MyOrders)
    3. Create an extension method to accomplish the same feat as #2 but with less typing: MyOrders.RemoveCanceledOrders()
    4. Create a "Service" method that handles the getting of Orders as appropriate to the calling function, while using one of the previous approaches "under the hood": OrdersService.GetOrdersForProcessA()
    5. Others?
    I tend to prefer the tooling / extension method approaches as they make testing a little bit simpler. Although I dependency-inject all my sourcing data into the ModifiedCustomerOrders, having the filtering as part of the class makes it a little bit more complicated to test. Typically, I choose extension methods where I am doing parameterless transformations / filters. As they get more complex, I move them into a static class instead. Thoughts on this approach? How would you approach it?
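
    For illustration, option 3 above might look like this in C# (the snippet in the question is VB.NET; Order and IsCanceled are invented stand-ins for the real types):

        using System.Collections.Generic;
        using System.Linq;

        public class Order
        {
            public bool IsCanceled { get; set; }
        }

        public static class OrderFilters
        {
            // Parameterless transformations/filters read naturally as extension methods,
            // and the pure input-to-output shape keeps them easy to unit test in isolation.
            public static IEnumerable<Order> RemoveCanceledOrders(this IEnumerable<Order> orders)
                => orders.Where(o => !o.IsCanceled);
        }

    A call site then reads as MyOrders.Orders.RemoveCanceledOrders(), without the collection class having to know about every consumer's filter.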

    Read the article

  • How much should a programmer read in order to keep himself updated? [closed]

    - by anything
    There are lots of technical books available. Below are a few links which list some good books: If you could only have one programming related book on your bookshelf what would it be and why? What non-programming books should a programmer read to help develop programming/thinking skills? Best books on the theory and practice of software architecture? http://stackoverflow.com/questions/1711/what-is-the-single-most-influential-book-every-programmer-should-read ... and the list can go on and on and on. It would be really difficult to read all of the above-mentioned books. I am not sure if it's even possible for anyone to do that. Even if you filter the list based on one's area of interest or work, it is still very large... and the technology keeps on changing (even more books :-( ). So, my question is: how much should a programmer read, let's say per year? How many hours should one put into such activities to keep oneself up to date? How do we find the time required? PS: The average programmer reads less than one book per year (Code Complete). What about the good programmers?

    Read the article

  • Career advice on whether to stick with coding or move on to tech. lead\management [closed]

    - by flk
    I'm at a point in my career where I need to decide what to do next. I've mainly done C# desktop development (with a little Python and Silverlight) for 5 or 6 years, and I'm trying to decide whether to start learning JavaScript\HTML or to move into a role where I do less coding and take on more of a tech. lead\management role. With all the talk around HTML5\JavaScript, the rise of mobile and the changes with Windows 8 (Metro at least), I wonder if I should stick with coding to get some experience in these areas before moving on. But at the same time, if I decide to stick with coding for a 'couple more years' I will probably be faced with the same situation with some other new\interesting technology that I feel I should learn before moving on. I feel if I stick just with coding I'm limiting my career options, but if I move to tech. lead\management I will lose my coding skills. Is going one direction or the other going to limit my career options in the future? I know that there is no real answer to this question, so I'm really just looking for some thoughts from others and perhaps experiences from people who have faced similar situations. Thanks

    Read the article

  • Rendering design. How can I effectively deal with forward, deferred and transparent rendering?

    - by user1423893
    I have many objects in my game world that all derive from one base class. Each object will have different materials and will therefore need to be drawn using various rendering techniques. I currently use the following order for rendering my objects:
    1. Deferred
    2. Forward
    3. Transparent (order-independent)
    Each object has a rendering flag that denotes which one of the above methods should be used. The list of base objects in the scene is then iterated through, and each object is added to a separate list of deferred, forward or transparent objects based on its rendering flag value. The individual lists are then iterated through and drawn using the order above. Each list is cleared at the end of the frame. This method works fairly well, but it requires a different draw method for each material type. For example, each object will require the following methods in order to be compatible with the possible flag settings: object.DrawDeferred(), object.DrawForward(), object.DrawTransparent(). It is also hard to see where methods outside of materials, such as rendering shadow maps (object.DrawShadow()), would fit using this "flag & method" design. I was hoping that someone might have some suggestions for improving this rendering process, possibly making it more generic and less verbose.
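
    One generic direction (a sketch under invented names, not a drop-in solution): key render queues by a pass enum and give each object a single Draw(pass) entry point, so that adding a pass such as shadow maps means adding an enum value rather than another method on every object.

        using System;
        using System.Collections.Generic;

        public enum RenderPass { Shadow, Deferred, Forward, Transparent }

        public interface IRenderable
        {
            IEnumerable<RenderPass> Passes { get; }   // which passes this object joins, driven by its material
            void Draw(RenderPass pass);               // one entry point instead of DrawDeferred/DrawForward/...
        }

        public sealed class RenderQueue
        {
            private readonly Dictionary<RenderPass, List<IRenderable>> queues = new();

            public void Submit(IRenderable obj)
            {
                foreach (var pass in obj.Passes)
                {
                    if (!queues.TryGetValue(pass, out var list))
                        queues[pass] = list = new List<IRenderable>();
                    list.Add(obj);
                }
            }

            public void DrawAll()
            {
                // Enum declaration order doubles as the fixed pass order.
                foreach (RenderPass pass in Enum.GetValues<RenderPass>())
                    if (queues.TryGetValue(pass, out var list))
                        foreach (var obj in list)
                            obj.Draw(pass);
                queues.Clear();                       // queues reset at the end of the frame
            }
        }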

    Read the article

  • Downclock CPU reaching critical temperature

    - by Draga
    I have an HP tx1250 laptop. It has always had serious overheating problems, and although it usually runs fine, I'm now running a continuous test for my dissertation. This brings the CPU temperature close to the critical limit, and from time to time the computer shuts down for reaching it (I checked the log). I used to have the same problem on Win XP, but I noticed that Win Vista and 7 downclock the CPU when necessary to cool it down, so I was wondering if the same is possible on Ubuntu 12. The only program I've found that may do the job is computer temp ( http://computertemp.berlios.de/ ), but it doesn't seem to work under Ubuntu 12. The inside of the laptop is fairly clean, the thermal paste is quite recent, I'm keeping it lifted off the desk, and judging by the sound of the fan, that's running fine as well. The PC is now running between 78 and 91 degrees C, but about once a day it shuts down for reaching 95. I need the results of the test it's running pretty soon, so it's important that it runs non-stop. I've thought of setting the maximum clock of the CPU to slightly less than the maximum, but then these tests would take much more time. Thanks! PS: first and last HP laptop for me.

    Read the article

  • Binding in the view or the controller?

    - by da_b0uncer
    I've seen 2 different approaches with MVC on the web. One, like in ExtJS, is to bind the callbacks to the view via the controller: find every element in the view and add the functionality there. The other, like in angular.js and in the lift-framework server-side too, is to bind in the views and just write the functionality in the controller. Which is better and cleaner? The ExtJS approach has dumb views and all the logic in the controller, which seems clean to me. But I had problems with global IDs for GUI elements, or with relative navigation to GUI elements, in this approach. When I changed the view, the controller couldn't find the buttons anymore, or I had multiple instances of one button with the same ID in a single application because of the global ID. I solved this with IDs that are global only within a view, so they can appear in the application multiple times. That way I could mess with the (dumb) views' layout and design and the functionality wouldn't break. The angular.js approach, with the bindings in the view, doesn't have the problem with global IDs. Also, the person who changes something in the view layout has to know the IDs anyway, so the controller can put the data at the right spot. So if I write <a ng-click="doThis()" /> for angular.js and implement doThis(), or <a lid="buttonwhichdoesthis" /> for ExtJS and find the element with the local ID and add doThis() as a handler on the controller side, it doesn't seem so different. The only thing is, the second one has one more layer of indirection, which seems cleaner, while the first one seems to cost less effort.

    Read the article
