Search Results

Search found 2082 results on 84 pages for 'lessons learned'.

Page 43/84 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • Can I assume interface oriented programming as a good object oriented programming?

    - by david
    I have been programming for decades, but I had not been used to object-oriented programming. In recent years, though, I had a great opportunity to learn OOP, its principles, and a lot of patterns that are great. Since I've learned OOP, I have tried to apply it to a couple of projects and found those projects successful. Unfortunately I didn't follow extreme programming, which suggests writing tests first, mainly because the time frames were tight. What I did for those projects was: identify all necessary classes and create them with proper properties and methods; whenever there is a dependency between classes, write an interface between them; and see if there are any patterns for certain relationships between classes to replace. By successful, I meant that the development effort was quick, the classes can be reused better, and the design is flexible enough that another programmer does not have to change something else to fix another part. But I wonder if this is a good practice. Of course, I know I need to put writing unit tests first in my work process. But other than that, is there any problem with this approach - creating lots of interfaces - in the long term?
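    A minimal sketch (not from the original post) of the practice described above - inserting an interface at the point where one class depends on another - might look like this in C#; the names IOrderRepository, SqlOrderRepository, and OrderService are hypothetical:

        public interface IOrderRepository
        {
            void Save(Order order);
        }

        public class SqlOrderRepository : IOrderRepository
        {
            public void Save(Order order) { /* persist to a database */ }
        }

        public class OrderService
        {
            private readonly IOrderRepository _repository;

            // OrderService depends only on the interface, never on a concrete class.
            public OrderService(IOrderRepository repository)
            {
                _repository = repository;
            }

            public void Place(Order order)
            {
                // ... business rules would go here ...
                _repository.Save(order);
            }
        }

        public class Order { }

    The point of the sketch is only that OrderService never references the concrete repository, so either side can change or be replaced without touching the other.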

    Read the article

  • Fixed-Function vs Shaders: Which for beginner?

    - by Rob Hays
    I'm currently going to college for computer science. Although I do plan on utilizing an existing engine at some point to create a small game, my aim right now is to learn the fundamentals: namely, 3D programming. I've already done some research regarding the choice between DirectX and OpenGL, and the general sentiment that came out of that was that whether you choose OpenGL or DirectX as your training-wheels platform, a lot of the knowledge is transferable to the other platform. Therefore, since OpenGL is supported by more systems (probably a silly reason to choose what to learn), I decided that I'm going to learn OpenGL first. After I made this decision to learn OpenGL, I did some more research and found out about a dichotomy that I was somehow unaware of all this time: fixed-function OpenGL vs. modern programmable shader-based OpenGL. At first, I thought it was an obvious choice that I should learn shader-based OpenGL, since that's what's most commonly used in the industry today. However, I then stumbled upon the very popular Learning Modern 3D Graphics Programming by Jason L. McKesson, located here: http://www.arcsynthesis.org/gltut/ I read through the introductory bits, and in the "About This Book" section the author states: "First, much of what is learned with this approach must be inevitably abandoned when the user encounters a graphics problem that must be solved with programmability. Programmability wipes out almost all of the fixed function pipeline, so the knowledge does not easily transfer." Yet at the same time he also makes the case that fixed functionality provides an easier, more immediate learning curve for beginners by stating: "It is generally considered easiest to teach neophyte graphics programmers using the fixed function pipeline." Naturally, you can see why I might be conflicted about which paradigm to learn: do I spend a lot of time learning (and then later unlearning) the ways of fixed functionality, or do I choose to start out with shaders? My primary concern is that modern programmable shaders somehow require the programmer to already understand the fixed-function pipeline, but I doubt that's the case. TL;DR: As an aspiring game graphics programmer, is it in my best interest to learn 3D programming through fixed functionality or modern shader-based programming?

    Read the article

  • Looking for suggestions: becoming a hireable, young programmer [closed]

    - by Dan
    I am a 17-year-old Java programmer who has filled the last year with learning all of the ins and outs of Java. Using Eclipse, and the help of a friend of the family (a Java programming architect for some company), I have learned everything from serializing objects, basic networking, generics, reflection, multi-threading, code optimization and efficiency, and some concurrency safety; I built my own proxy class, and nowadays I answer questions on Project Euler. I am seeking some suggestions, though, on where I go next, or where I go from here to get a job in programming. I dedicate at least an hour every day to coding, sometimes literally the entire day, and I really have come to love the process. I just started reading Effective Java (v2) and learning Scala (which, as I often see suggested, is possibly the Java replacement). I will be going to college for Computer Science next year and taking AP Computer Science this year (however, I took a practice exam and got an 87, and only need a 60-70 to pass, so no need to study for it too much). I was wondering if getting the SE 7 OCA and OCP would help me in trying to get a programming job. I looked around, and most people have said online that the OCA/OCP are practically useless; but, at my age, do they make me any more credible? More or less, what would you recommend to get a job in programming these days, or to distinguish yourself from the crowd? I have enough time and dedication to learn another language, or anything really. Thank you very much.

    Read the article

  • Is there any research out there on geographic differences in work environments (e.g., respect) for programmers?

    - by Ethel Evans
    One thing I've learned from this website is that software developers are not treated everywhere the same way I've seen them treated in the companies I've worked at, and some of the differences seem to be related to the culture or other factors of the geographical location where the programmer works. In some areas, it seems like programmers can expect many perks and a great deal of professional respect, but in others it sounds like programmers are seen as laborers who are told what to do and then should go do it without question. Even in just the USA, there seem to be major differences in "the norm" between the various regions of the country. I'm wondering how much of this is just my perception, and how much reflects real differences in how programmers are perceived in different locations. Is there any research out there discussing major differences in programmer work environments, or in attitudes about how to treat or respect programmers, by geography? I'd be interested in multiple articles tackling different ways of looking at this. Edit: Research, specifically, doesn't seem to be available, so I'm making the question broader. Is there any good, thoughtful writing on the topic, of any kind, available?

    Read the article

  • How do you properly organize a commercial game?

    - by Reactorcore
    For the past months I've been studying programming and I've finally learned how to code, but one thing that is confusing me is how to properly organize the design of a game project, code-wise. The game I'm building is a pretty standard commercial game. It has the basic components of a normal game: a world, characters, and items interacting with each other, and all of this is run by a game manager. Basically you play as a hero in a world and do stuff: fight, explore, and interact. Think of your standard adventure game that starts off with an intro, goes to the menu system, then gets into the game and back to the menu - pretty much like 99% of commercial games or otherwise serious game projects. That's what I'm aiming for. The problem is: how do you properly code a commercial game architecture? How do you organize it? How do you keep it from becoming unmaintainable spaghetti code? What specific things should I keep in mind when building this, code-wise? How you can help me: a) Please tell me how you code your own game projects. What is your thought process when designing the architecture? b) Recommend books, blogs, tutorials, videos, or anything else on how to organize a commercial video game. c) Give hints and tips on do's/don'ts when building a game, code-wise. Please help!
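    As a purely illustrative sketch (not from the original question or its answers), the top-level structure being described - a game manager driving a world of characters and items through intro, menu, and play states - could start out roughly like this in C#; every name here (GameManager, GameState, World) is hypothetical:

        public enum GameState { Intro, Menu, Playing }

        public class World
        {
            public void Update(double dt) { /* move characters, resolve item interactions */ }
        }

        public class GameManager
        {
            private GameState _state = GameState.Intro;
            private readonly World _world = new World();

            // One central loop: the manager owns state transitions,
            // the world owns gameplay rules; rendering and input live elsewhere.
            public void Update(double dt)
            {
                switch (_state)
                {
                    case GameState.Intro:
                        // play the intro, then:
                        _state = GameState.Menu;
                        break;
                    case GameState.Menu:
                        // wait for "start game" input, then:
                        _state = GameState.Playing;
                        break;
                    case GameState.Playing:
                        _world.Update(dt);
                        break;
                }
            }
        }

    The sketch only shows the ownership: keeping state transitions, gameplay rules, rendering, and input behind separate seams is what stops any single class from growing into spaghetti.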

    Read the article

  • ArchBeat Link-o-Rama for December 14, 2012

    - by Bob Rhubart
    JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes | John-Brown Evans
    John-Brown Evans' post continues the series of JMS articles that demonstrate how to use JMS queues in a SOA context. "This example leads you through the creation of an Oracle database Advanced Queue and the related WebLogic server objects in order to use AQ JMS in connection with a SOA composite," John explains. And if you missed the first 5 steps, don't worry – the post includes links.
    Cloud Deployment Models | B. R. Clouse
    Looking out for the cloud newbies... "As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective," says B. R. Clouse. "This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes."
    Understanding the JSF Lifecycle and ADF Optimized Lifecycle | Steven Davelaar
    Would you call that a surprise ending? Oracle WebCenter & ADF Architecture Team (A-Team) member Steven Davelaar learned a lot more than he expected while creating a UKOUG presentation entitled "What you need to know about JSF to be successful with ADF."
    Using Oracle Enterprise Manager Cloud Control 12c with Filer Snapshotting | Porus Homi Havewala
    This concise technical article includes a script for database backup using snapshots and cataloging in RMAN.
    Thought for the Day
    "A program which perfectly meets a lousy specification is a lousy program." — Cem Kaner
    Source: SoftwareQuotes.com

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev.” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, and the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date, so I had the time, and having recently been binge watching CSI: New York, I also had the inclination. An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid: that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion. Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and the wait statistics DMVs, I had a lead in the shape of a query that joined three tables containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test, I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers, who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes, the same query with the same databases ran in under 1 second on all systems and the logical reads dropped to fewer than 300. The analysis also revealed that on Dev the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance. As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one.
    It can seem a daunting prospect to return to the fundamentals of the code so close to roll-out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.

    Read the article

  • "From Russia with Love" - My Oracle Russian Experience

    - by cwarticki
    Two weeks ago, I traveled to Moscow, Russia. I had the pleasure of meeting with many of our Oracle Partners and Customers in the region, and I also worked with our Oracle Russia team throughout the week, building many new friendships. The showcase for the week was an Oracle Support Strategy event for our Oracle Partners and Customers, held at the Kateria-City Hotel, Moscow. The Oracle Marketing team did an amazing job registering 100+ for the event, and nearly 100 were in attendance. During the event, I spoke about many different topics. Part of it was a hands-on workshop on personalizing your MOS Dashboard and configuring Hot Topics email alerts. Customers learned how to subscribe to newsletters and other Oracle information, and the session covered a multitude of Support Best Practices. Additionally, I presented Platinum Services to the audience, and my colleague Kristophe Hermans, from Oracle Belgium, spoke on Proactive Support. In addition, I had the distinct privilege to meet one-on-one with our customers representing OJSC VimpelCom, MTC-Rus and Sberbank. Pictured with me is Valery Yourinsky, Director of Technology Consulting Dept, FORS Distribution (Oracle Platinum Partner). Finally, I spent 2.5 days with my Oracle colleagues from Oracle Russia. They are super, hard-working, dedicated, customer-service professionals. All of them! I owe them all a debt of gratitude. Next time, we meet in Florida - ok? I am very appreciative of all our Oracle partners, customers and colleagues. Thanks for hosting me and showing me a wonderful time in your country. I look forward to my return. Sincerely, Chris Warticki, Global Customer Management

    Read the article

  • Sucking Less Every Year?

    - by AdityaGameProgrammer
    "Sucking Less Every Year" - Jeff Atwood. I had come across this insightful article. Quoting directly from the post: "I've often thought that sucking less every year is how humble programmers improve. You should be unhappy with code you wrote a year ago. If you aren't, that means either A) you haven't learned anything in a year, B) your code can't be improved, or C) you never revisit old code. All of these are the kiss of death for software developers." How often does this happen, or not happen, to you? How long before you see an actual improvement in your coding - a month, a year? Do you ever revisit your old code? How often does your old code plague you? Or, how often do you have to deal with your technical debt? It is definitely very painful to fix old bugs and dirty code that we may have written to quickly meet a deadline, and in some cases those quick fixes mean we may have to rewrite most of the application/code. No arguments about that. Some of the developers I have come across argued that they were already at the evolved stage where their coding doesn't need improvement or can't be improved anymore. Does this happen? If so, how many years into coding in a particular language does one expect this to happen? Related: Ever look back at some of your old code and grimace in pain? Star Wars moment in code: "Luke! I am your code!" "No! Impossible! It can't be!"

    Read the article

  • Python in Finance by Yuxing Yan, Packt Publishing Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/04/python-in-finance-by-yuxing-yan-packt-publishing-book-review.aspx I picked Python in Finance from Packt Publishing to review expecting to bore myself with complex algorithms and senseless formulas while seeing little actual Python in action; indeed, at 400-plus pages it may seem so. But it turned out to be quite the opposite. I learned a lot about practical implementations of various Python modules such as SciPy, NumPy and several more, and I think they empower a developer a lot. No wonder Python is on track to become the de facto language of choice for scientists! But I am not going to compromise the truth: the book does discuss numerous financial terms, many of them, and this is where the enormous power of this book comes from: it is like standing on the shoulders of a giant. Python is that giant - flexible and powerful, yet very approachable. The TOC is very detailed thanks to Packt, so anyone can see which financial algorithms are covered. I am only going to name a few which I had the most fun with (though all of them are covered in enough detail): Fama*, Fat Tail, ARCH, Monte Carlo and of course the volatility smile! I am under the impression this book is best suited for students in Finance, especially those who are about to join the workforce, but I suspect the material is also very well suited for mature finance professionals, an investor who has some programming skills and wants to benefit from them, or even a programmer or a mathematician who already knows Python or any other language but wants to have fun in Quantitative Finance and earn a few bucks! Pure fun, real results, tons of practical insight - from reading data from a file to downloading trade data from Yahoo! Lastly, I need to compliment Yuxing: he is a talented teacher, and this book could not be what it is otherwise. It is a 5 out of 5 product. Disclaimer: I received a free copy of this book for review purposes from the publisher.

    Read the article

  • C++ Numerical Recipes &ndash; A New Adventure!

    - by JoshReuben
    I am about to embark on a great journey – over the next 6 weeks I plan to read through C++ Numerical Recipes, 3rd edition: http://amzn.to/YtdpkS I'll be reading this with an eye to C++ AMP, thinking about implementing the suitable subset (non-recursive, additive, commutative) to run on the GPU. APIs supporting HPC, GPGPU or MapReduce are all useful – providing you have the ability to choose the correct algorithm to leverage on them. I really think this is the most fascinating area of programming – a lot more exciting than LOB CRUD!!! When you think about it, everything is a function – we categorize & we extrapolate. As abstractions get higher & less leaky, sooner or later information systems programming will become a non-programmer task – you will be using WYSIWYG designers to build GUIs, MVVM, service mapping & virtualization, workflows, ORM, and entity relations in the data source. SharePoint / LightSwitch are not there yet, but every iteration gets closer. For information workers, managed code is a race to the bottom. As MS futures are a bit shaky right now, the provider-agnostic nature & higher barriers of entry of both C++ & Numerical Analysis seem like a rational choice to me. It's also fascinating – stepping outside the box. This is not the first time I've delved into numerical analysis. 6 months ago I read Numerical Methods with Applications, which can be found for free online: http://nm.mathforcollege.com/ 2 years ago I learned the .NET Extreme Optimization library www.extremeoptimization.com – not bad. 2.5 years ago I read Schaum's Numerical Analysis book http://amzn.to/V5yuLI – not an easy read, as topics jump back & forth across chapters. 3 years ago I read Practical Numerical Methods with C# http://amzn.to/V5yCL9 (which is a toy learning language for this kind of stuff). I also read through AI: A Modern Approach, 3rd edition, end to end http://amzn.to/V5yQSp – this took me a few years but was the most rewarding experience. I'll post progress updates – see you on the other side!

    Read the article

  • 2011 Tech Goal Review

    - by kerry
    A year ago I wrote a post listing my professional goals for 2011. I thought I would review them and see how I did. Release an Android app to the marketplace – Didn't do it. In fact, I haven't really touched Android much since I wrote that. I still have some ideas but am not sure if I will get around to it. Contribute free software to the community – I did do this. I have been collaborating with others via github more lately. Regularly attend user group meetings outside of Java – Did not do this. Family life being what it is makes this not that much of a priority right now. Obtain the Oracle Certified Web Developer Certification – Did not do this. This is not much of a priority to me any more. Learn Scala – I am about 50/50 on this one. I read a few Scala books but did not write an actual application. Write an app using JSF – Did not do this. Still interested. Present at a user group meeting – I did a Maven presentation at the Java user group. Use git more, and more effectively – Definitely did this. Using it on a daily basis now. Overall, I got about halfway on my goals. It's not too bad, since I also did a few things that weren't on my list: learned to develop applications using GWT and deploy them to Google App Engine; converted one of my sites from PHP to Ruby / Sinatra (learning to use it in the process); and studied up on the HTML5 features and did a lot of Javascript development.

    Read the article

  • @CodeStock 2012 Review: Leon Gersing ( @Rubybuddha ) - "You"

    "YOU"Speaker: Leon GersingTwitter: @Rubybuddha Site: http://about.me/leongersing I honestly had no idea what I was getting in to when I sat down in to this session. I basically saw the picture of the speaker and knew that it would be a good session. I was completely wrong; it was the BEST SESSION of CodeStock 2012.  In fact it was so good, I texted another coworker attending the conference to get over and listen to Leon. Leon took on the concept of growth in the software development community. He specifically referred David Hansson in his ability to stick to his beliefs when the development community thought that he was crazy for creating Ruby on Rails. If you do not know this story Ruby on Rails is one of the fastest growing web languages today. In addition, he also touched on the flip side of this argument in that we must be open to others ideas and not discard them so quickly because we all come from differing perspectives and can add value to a project/team/community. This session left me with two very profound concepts/quotes: “In order to learn you must do it badly in front of a crowed and fail.” - @Rubybuddha I can look back on my career so far and say that he is correct; I think I have learned the most after failing, especially when I achieved this failure in front of other. “Experts must be able to fail.” - @Rubybuddha I think we can all learn from our own mistakes but we can also learn from others. When respected experts fail it is a great learning opportunity for the entire team as well as the person who failed. When expert admit mistakes and how they worked through them can be great learning tools for other developers so that they know how to avoid specific scenarios and if they do become stuck in the same issue they will know how to properly work their way out of them.

    Read the article

  • Introducing a (new) test method to a team

    - by Jon List
    A couple of months ago I was hired in a new job. (I'm fresh out of my Masters in software engineering.) The company mainly consists of ERP consultants, but I was hired into their fairly small web department (6 developers); our main task is ERP/ecom integration (ERP-integrated web shops). The department is growing, and recently my manager asked me to start thinking about introducing tests to the team. I love a challenge, but frankly I'm a bit scared (I'm the least experienced member of the team). Currently the method of testing is clicking around in the web shop and asking the customer if the products are there, if they look okay, and if orders are posted correctly to the ERP. We are getting a lot of support cases on previous projects, where a customer or a customer's customer has run into errors, which - I suppose - is why my manager wants more structured testing. Off the top of my head, I thought of some (obvious?) improvements, like looking at the requirement specification, having an issue tracker, enabling team members to register their time on a "tests" line on the budget, and circulating tasks amongst members of the team. But as I see it we have three main challenges: general website testing (javascript, C#, ASP.NET and CMS integration tests); (live) ERP integration testing (customers rarely want to pay for test environments); and adopting a method in the team. I like the responsibility, but I am afraid that I'm in a little bit over my head. I expect that my manager expects me to set up some kind of workshop for the team where I present some techniques and ideas and where we (the team) can find some solutions together. What I learned in school was mostly unit testing and program verification, not so much testing across multiple systems and applications. What I'm looking for here is references/advice/pointers/anecdotes; anything that might help me to get smarter and to improve the current method of my team. Thanks!! (TL;DR: read the bold parts)
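    For contrast with the click-through approach, a first automated test could be as small as the sketch below. It is only illustrative: the OrderTotalCalculator class and the numbers are hypothetical, and it uses xUnit-style attributes rather than anything from the original post.

        using Xunit;

        public class OrderTotalCalculatorTests
        {
            [Fact]
            public void Total_includes_line_items_and_vat()
            {
                // Hypothetical class under test; a real suite would exercise the
                // web shop's order logic and the ERP posting boundary instead.
                var calculator = new OrderTotalCalculator(vatRate: 0.25m);

                decimal total = calculator.Total(new[] { 100m, 50m });

                Assert.Equal(187.5m, total);
            }
        }

        public class OrderTotalCalculator
        {
            private readonly decimal _vatRate;

            public OrderTotalCalculator(decimal vatRate) { _vatRate = vatRate; }

            public decimal Total(decimal[] lineItems)
            {
                decimal net = 0m;
                foreach (var item in lineItems) net += item;
                return net * (1 + _vatRate);
            }
        }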

    Read the article

  • How often is your "Go-To" language the same as your favorite??

    - by K-RAN
    I know that there's already a question asking for your favorite programming language here. I'm curious, though: what's your go-to language? The two can be very different. For example, I love Haskell. I learned it this past semester and I fell in love with its very concise solutions and awesome syntax (I love theoretical math, so something like fib = 1 : 1 : [ f | f <- zipWith (+) fib (tail fib)] makes my inner mathematician and computer scientist jump with joy!). However, the majority of my projects for classes and jobs have been in C/C++ & Java. As a result, most of the time when I'm testing something like an algorithm or data structure I go straight to C++. What about you? What languages do you love and why? What about your go-to language? What language do you use most often to get things done for work or personal projects, and why? How often does a language fall into both categories?

    Read the article

  • Working with a company as a Junior Developer [closed]

    - by user1601973
    We all have started our careers in some way or other. Well, I am a college student based in North America, and I am doing my second internship with the same company with which I did my first internship. I came back here because people here were always helpful and supportive. But something just happened today, and I wanted to share it on SO. Since I started, I have been doing documentation and that kind of stuff only, as compared to my first internship, in which I actually worked along with the developers and learned so many things. Well, I was in a conversation with my team lead, and he asked me if I had completed a particular piece of work or not. That particular work had slipped from my mind. He was indeed kind of pissed, and said, "You don't have to worry about it, I will figure it out". Well, I felt so bad I was about to literally cry. I stopped my lunch and then went on to complete that work. I always ask for work in the office, and I always try to be an asset to whoever I am working for, but this was the first time that this happened. What are your thoughts on this, and should I apologise or not? I think I should.

    Read the article

  • What is the safest way to remove a swap partition?

    - by user212062
    I am running Ubuntu 12.04 on a 64-bit HP laptop with a 16 GB flash drive. I do not have a working hard drive right now. When I installed Ubuntu, I created a 2 GB swap partition on sdb1. I have since learned that swap partitions are generally a bad idea on flash drives, so I would like to use my swap space for my other partitions. You can see my partition scheme in the link below. I have read that I just have to comment sdb1 out of the fstab file, boot from a GParted live CD, select swapoff for sdb1, delete/merge it with another partition, and everything's good. But I've also read that messing with sdb1 can change the UUID of sdb2 or sdb3 and cause problems. Is this true? Does initramfs use swap at all? Also, when I get Ubuntu running on my laptop with an internal hard drive, does the swap partition help that much? I have 6 GB of DDR3. Does the rule of 1.5x actual RAM still apply? It seems like quite a bit to me. Thanks for the help! UPDATE: I have removed swap. The process I followed was: right-clicked the swap partition in GParted and selected swapoff; used # to comment the swap partition out of fstab; tried to boot from a live GParted CD, but kept getting an error, so I ran GParted in Ubuntu instead; deleted the swap partition in GParted; unmounted /windows; expanded /windows to take the remaining space; and mounted /windows again. The / and /windows partitions each kept their own names and UUIDs, and everything is running fine. I have never seen any swap space being used before, and I don't intend to use the hibernate function, so I think removing swap was a good idea.

    Read the article

  • How to build the mainline kernel source package?

    - by Maxime R.
    The Ubuntu kernel PPA only provides linux-headers*.deb and linux-image*.deb packages. How can I build the corresponding linux-source*.deb package? Context: I'm currently running Ubuntu 11.10 with the mainline kernel (3.2-rc6 now) to get better support for my Sandy Bridge IGP (Dell E6420 laptop with Intel i5-2520M CPU). It appears I'd like to install this touchpad driver, ALPS touchpads being badly supported (see the bug report in the previous link), while waiting for upstream support in kernel version 3.3. Problem is, DKMS keeps complaining about not finding the full kernel source: "Module build for the currently running kernel was skipped since the kernel source for this kernel does not seem to be installed." It appears I may not need the full source, but I'd still like to try having it installed to see if it solves my problem. What I tried: uncompressing the kernel.org source archive in /usr/src/ - DKMS still complaining. Manually updating the kernel source package with uupdate and the mainline source package like explained here - did not succeed. Manually building the linux-source package following @roadmr and @elmicha's instructions - I eventually succeeded in building it, but DKMS still complained about the missing source. At last I noticed an error I had not caught in the first place while reinstalling the kernel headers. It appears the .deb I got may have been corrupted; downloading it again did the trick :) Alas, while DKMS agreed to compile the module, I ran into the following error, which appears to have already been reported. This issue isn't solved yet, but I won't try to solve it, because in the end I decided to test the precise kernel version 3.2-rc6 through the xorg-edgers ppa, which appears to be correctly patched: it works. Nevertheless, it might still be of some interest to know how to build the mainline linux-source package, as the Ubuntu Kernel Team doesn't provide it. Not to mention that I learned a lot in the process ^^

    Read the article

  • Should I use an interface when methods are only similar?

    - by Joshua Harris
    I was posed with the idea of creating an object that checks if a point will collide with a line:

        public class PointAndLineSegmentCollisionDetector {
            public void Collides(Point p, LineSegment s) {
                // ...
            }
        }

    This made me think that if I decided to create a Box object, then I would need a PointAndBoxCollisionDetector and a LineSegmentAndBoxCollisionDetector. I might even realize that I should have a BoxAndBoxCollisionDetector and a LineSegmentAndLineSegmentCollisionDetector. And when I add new objects that can collide, I would need to add even more of these. But they all have a Collides method, so everything I learned about abstraction is telling me, "Make an interface."

        public interface CollisionDetector {
            void Collides(Spatial s1, Spatial s2);
        }

    But now I have a function that only accepts some abstract class or interface that is shared by Point, LineSegment, Box, etc. So if I did this, then each implementation would have to do a type check to make sure that the arguments are the appropriate types, because the collision algorithm is different for each type match-up. Another solution could be this:

        public class CollisionDetector {
            public void Collides(Point p, LineSegment s) { ... }
            public void Collides(LineSegment s, Box b) { ... }
            public void Collides(Point p, Box b) { ... }
            // ...
        }

    But this could end up being a huge class that seems unwieldy, although it would have simplicity in that it is only a bunch of Collides methods. This is similar to C#'s Convert class, which is nice because although it is large, it is simple to understand how it works. This seems to be the better solution, but I thought I should open it for discussion as a wiki to get other opinions.

    Read the article

  • Is dependency injection by hand a better alternative to composition and polymorphism?

    - by Drake Clarris
    First, I'm an entry-level programmer; in fact, I'm finishing an A.S. degree with a final capstone project over the summer. In my new job, when there isn't some project for me to do (they're waiting to fill the team with more new hires), I've been given books to read and learn from while I wait - some textbooks, others not so much (like Code Complete). After going through these books, I've turned to the internet to learn as much as possible, and started learning about SOLID and DI (we talked some about Liskov's substitution principle, but not much about the other SOLID ideas). So, to learn it better, I sat down and began writing some code to utilize DI by hand (there are no DI frameworks on the development computers). The thing is, as I do it, I notice it feels familiar... it seems very much like work I've done in the past using composition of abstract classes and polymorphism. Am I missing a bigger picture here? Is there something about DI (at least by hand) that goes beyond that? I understand that some DI frameworks' ability to keep configuration out of code can bring great benefits as far as changing things without having to recompile, but when doing it by hand, I'm not sure if it's any different than stated above... Some insight into this would be very helpful!
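    For reference, a minimal sketch of what "DI by hand" usually amounts to is below; the types (ILogger, ConsoleLogger, ReportService) are hypothetical and not from the original post. The wiring really is just passing an abstraction through a constructor, which is why it can feel so close to ordinary composition with abstract classes and polymorphism - the distinctive part is that all the concrete choices are made in one composition root.

        public interface ILogger
        {
            void Write(string message);
        }

        public class ConsoleLogger : ILogger
        {
            public void Write(string message) => System.Console.WriteLine(message);
        }

        public class ReportService
        {
            private readonly ILogger _logger;

            // "By hand" injection: the dependency arrives through the constructor...
            public ReportService(ILogger logger)
            {
                _logger = logger;
            }

            public void Run() => _logger.Write("report generated");
        }

        public static class Program
        {
            public static void Main()
            {
                // ...and the composition root wires concrete types together explicitly,
                // which is the part a DI container would otherwise automate.
                var service = new ReportService(new ConsoleLogger());
                service.Run();
            }
        }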

    Read the article

  • Non use of persisted data – Part deux

    - by Dave Ballantyne
    In my last blog I showed how persisted data may not be used if you have used the base data in an INCLUDE on an index. That wasn't the only problem I've had that showed the same symptom. Using the same code as before, I was executing something similar to the below:

        select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
        from sales.salesorderheader SOH
        join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, due to a distribution error in statistics, I found it necessary to use a join hint. In this case, I wanted to force a loop join:

        select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
        from sales.salesorderheader SOH
        inner loop join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, being the diligent TSQL developer that I am, looking at the execution plan I noticed that the 'compute scalar' operator was again calling the function. Again, Profiler is a more graphic way to view this... All very odd: just because I've forced a join, which has NOTHING to do with my persisted data, something is causing the data to be re-evaluated. I'm not sure if there is any easy fix you can do to the TSQL here, but again it's a lesson learned (or rather reinforced): examine the execution plan of every query you write to ensure that it is operating as you thought it would.

    Read the article

  • Is there an excuse for excessively short variable names?

    - by KChaloux
    This has become a large frustration with the codebase I'm currently working in; many of our variable names are short and undescriptive. I'm the only developer left on the project, and there isn't documentation as to what most of them do, so I have to spend extra time tracking down what they represent. For example, I was reading over some code that updates the definition of an optical surface. The variables set at the start were as follows:

        double dR, dCV, dK, dDin, dDout, dRin, dRout;
        dR = Convert.ToDouble(_tblAsphere.Rows[0].ItemArray.GetValue(1));
        dCV = Convert.ToDouble(_tblAsphere.Rows[1].ItemArray.GetValue(1));
        ... and so on

    Maybe it's just me, but it told me essentially nothing about what they represented, which made understanding the code further down difficult. All I knew was that each was a variable parsed out of a specific row in a specific table, somewhere. After some searching, I found out what they meant:

        dR    = radius
        dCV   = curvature
        dK    = conic constant
        dDin  = inner aperture
        dDout = outer aperture
        dRin  = inner radius
        dRout = outer radius

    I renamed them to essentially what I have up there. It lengthens some lines, but I feel like that's a fair trade-off. This kind of naming scheme is used throughout a lot of the code, however. I'm not sure if it's an artifact from developers who learned by working with older systems, or if there's a deeper reason behind it. Is there a good reason to name variables this way, or am I justified in updating them to more descriptive names as I come across them?
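    As a sketch only (nothing here beyond what the post itself shows; the surrounding class and table are the author's), the renamed declarations being described would read roughly like this:

        // Same reads as before, but the names now carry the meaning the
        // single-letter prefixed variables were hiding.
        double radius    = Convert.ToDouble(_tblAsphere.Rows[0].ItemArray.GetValue(1));
        double curvature = Convert.ToDouble(_tblAsphere.Rows[1].ItemArray.GetValue(1));
        // ... and so on for the conic constant, apertures, and inner/outer radii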

    Read the article

  • Search ranking for important keywords has gone down drastically [duplicate]

    - by Vaivhav
    This question already has an answer here: "How to diagnose a search engine ranking drop?" (5 answers). Firstly, we are a small entrepreneurial team of 3 persons, and I am more like an amateur webmaster of the company's website, as we cannot really afford a technical guy/department right now. A few weeks ago, our website traffic and rankings for most keywords decreased overnight. I have done a lot of reading since then and learned about Penguin 2.1, which people said is the reason for the drop. Something like this had never happened before. Now, I have gone through the entire Google webmaster help section. It says there that if a manual penalty is taken against us, we would see a message on the Manual Actions page. So far, we haven't received any notice from Google for web spam. Some SEO guys I contacted said they found spam links in our backlink profile. I do believe I mistakenly purchased a cheap link/SEO scheme when I was still very new to SEO. This was more than a year back, but since then we have been legitimate. Moreover, how do I find out which is a spam link and which is not? Our content is all original, refreshing, and the best you will find in our niche. We also have a blog, but on a different domain (wordpress.com), from which we send out anchored links to our business website. Is this a good thing to do? Now, how should we proceed and recover our traffic/rankings? I tried searching in the webmaster tools for a way to reach Google and ask them why the traffic has decreased suddenly, but I couldn't find a contact form or anything. Can someone please go through our website and help in making things clearer regarding the reason for the drop, along with a solution? I will really appreciate this, as I can't figure this out and it's taking a lot of time. Vaivhav

    Read the article

< Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >