Search Results

Search found 28486 results on 1140 pages for 'think floyd'.

Page 76/1140 | < Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >

  • 25 years old and considering a career change...possible? practical?

    - by mq330
    Hi all, I'm new to this site and new to programming as well. I've spent some time going through an intro CS book that uses Python as the language of choice. I find the exercises interesting and engaging, and I've generally had a favorable experience with programming so far. I've gone through some of the basics with Python, like writing simple programs, basic GUIs, manipulating strings and lists, defining functions, etc. And I've always loved technology. Although I've never done any hardcore programming yet, I was drawn to building websites from a very young age, but I never really developed those skills. Now, the thing is, I'm 25; I have my bachelor's in environmental studies and two master's degrees, in urban planning and landscape architecture respectively. I know it would be quite a departure to pursue a career in programming at this point. Currently, I'm working as a geographic information systems intern. I've taken some GIS classes and have a lot of experience with making maps, doing spatial analysis, etc. So what I'm thinking is that maybe I can learn some solid programming skills and apply them in the field of GIS. From what I've seen, .NET languages are the norm in this arena. Could you perhaps provide some guidance in terms of what languages I should focus on or courses I should take at this point? What about building web mapping applications? Also, I was thinking about getting a certificate in programming from a university extension program. Do you think it would be worth it? And furthermore, do you think potential employers would be interested in hiring someone like me (once I have a couple of languages down pretty well) as an intern or in an entry-level position? I'll be living in the Bay Area, so I feel there should be decent opportunities even though I don't have a B.S. in CS.

    Read the article

  • OpenGL 3D point cloud render from x,y,z 2D arrays with texture

    - by user1733628
    I need some direction on 3D point cloud display using OpenGL in C++ (VS2008). I am brand new to OpenGL and trying to do a 3D point cloud display with a texture. I have three 2D arrays (each the same size, 1024x512) representing the x, y, z of each point. I think I am on the right track with:

        glBegin(GL_POLYGON);
        for(int i = 0; i < 1024; i++) {
            for(int j = 0; j < 512; j++) {
                glVertex3f(x[i][j], y[i][j], z[i][j]);
            }
        }
        glEnd();

    Now this submits all the vertices (I think), but from here I am not sure how to proceed. Or I am completely wrong here. Then I have another 2D array (same size) that contains color data (values from 0-255) that I want to use as a texture on the 3D point cloud and display. I understand that this may be a very basic OpenGL task for some, but for me this is a huge learning curve. So any pointers, nudge, or kick in the right direction will be appreciated.
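
    A minimal sketch of one way to proceed, assuming the intensity array is named c and that legacy immediate-mode OpenGL is acceptable: draw the samples as GL_POINTS (a single polygon with ~500k vertices will not render as a cloud) and supply a per-vertex grayscale color in place of a texture.

        // Hypothetical sketch: render the point cloud with per-point grayscale color.
        // Assumes x, y, z are the 1024x512 float arrays and c holds the 0-255 intensities.
        // Requires the platform's OpenGL header (e.g. <GL/gl.h>).
        void drawPointCloud(float x[1024][512], float y[1024][512], float z[1024][512],
                            unsigned char c[1024][512])
        {
            glBegin(GL_POINTS);                     // one point per sample, not GL_POLYGON
            for (int i = 0; i < 1024; ++i) {
                for (int j = 0; j < 512; ++j) {
                    float v = c[i][j] / 255.0f;     // map 0-255 intensity to 0.0-1.0
                    glColor3f(v, v, v);             // per-vertex color stands in for a texture
                    glVertex3f(x[i][j], y[i][j], z[i][j]);
                }
            }
            glEnd();
        }

    For clouds this size a vertex buffer object would be the natural next step, but the immediate-mode version above is the shortest path to something visible on screen.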

    Read the article

  • How to implement isValid correctly?

    - by Songo
    I'm trying to provide a mechanism for validating my object, like this:

        class SomeObject {
            private $_inputString;
            private $_errors = array();

            public function __construct($inputString) {
                $this->_inputString = $inputString;
            }

            public function getErrors() {
                return $this->_errors;
            }

            public function isValid() {
                $isValid = preg_match("/Some regular expression here/", $this->_inputString);
                if ($isValid == 0) {
                    $this->_errors[] = 'Error was found in the input';
                }
                return $isValid == 1;
            }
        }

    Then when I'm testing my code I'm doing it like this:

        $obj = new SomeObject('an INVALID input string');
        $isValid = $obj->isValid();
        $errors = $obj->getErrors();
        $this->assertFalse($isValid);
        $this->assertNotEmpty($errors);

    Now the test passes correctly, but I noticed a design problem here. What if the user calls $obj->getErrors() before calling $obj->isValid()? The test will fail, because the user has to validate the object before checking the errors resulting from validation. I think this means the user depends on a sequence of actions to work properly, which I think is a bad thing because it exposes the internal behaviour of the class. How do I solve this problem? Should I tell the user explicitly to validate first? Where do I mention that? Should I change the way I validate? Is there a better solution for this? UPDATE: I'm still developing the class, so changes are easy and renaming functions and refactoring them is possible.
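
    One way to remove the ordering dependency, sketched below under the assumption that validation is cheap enough to run on demand: let both methods trigger validation lazily, so isValid() and getErrors() give consistent answers in any call order.

        class SomeObject {
            private $_inputString;
            private $_errors = null;   // null means "not validated yet"

            public function __construct($inputString) {
                $this->_inputString = $inputString;
            }

            private function validate() {
                if ($this->_errors !== null) {
                    return;            // already validated once
                }
                $this->_errors = array();
                if (!preg_match("/Some regular expression here/", $this->_inputString)) {
                    $this->_errors[] = 'Error was found in the input';
                }
            }

            public function isValid() {
                $this->validate();
                return count($this->_errors) === 0;
            }

            public function getErrors() {
                $this->validate();
                return $this->_errors;
            }
        }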

    Read the article

  • Web game engine: how does it work?

    - by TWCrap
    Hi all, first of all, please don't yell that I shouldn't start with this; I just want to know how it works. The thing is, how does the engine of a web game work? A game like Tribal Wars, Grepolis, or Forge of Empires. How does the "keeping alive" part work? I mean, a user starts constructing a building and quits the browser. The building is still completed even after the user's session has expired, and the user's points are updated when the building is finished. So how does that work? What do you think: do they have some kind of cron job that fires every second, walks through the database, searches for finished buildings, and updates everything? Or do you think they do it differently? I hope I was clear. NOTE: I don't need any code; I'm just interested in the process behind the game. Greetings, Marc
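
    To make the two approaches concrete in outline form (table and column names are invented for illustration): the cron-style approach sweeps the whole table periodically, while the lazy approach resolves finished builds only when the affected player's data is next loaded, so nothing has to tick while the browser is closed.

        -- Cron-style: run periodically, resolve every finished build for every player.
        UPDATE buildings
        SET    status = 'finished'
        WHERE  status = 'in_progress'
          AND  finish_time <= NOW();

        -- Lazy style: run only when a player's village page is rendered; the stored
        -- finish_time decides the outcome, regardless of when the player comes back.
        UPDATE buildings
        SET    status = 'finished'
        WHERE  player_id = ?              -- the player whose page is being loaded
          AND  status = 'in_progress'
          AND  finish_time <= NOW();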

    Read the article

  • Should I always encapsulate an internal data structure entirely?

    - by Prog
    Please consider this class:

        class ClassA {
            private Thing[] things; // stores data
            // stuff omitted
            public Thing[] getThings() {
                return things;
            }
        }

    This class exposes the array it uses to store data to any interested client code. I did this in an app I'm working on. I had a ChordProgression class that stores a sequence of Chords (and does some other things). It had a Chord[] getChords() method that returned the array of chords. When the data structure had to change (from an array to an ArrayList), all client code broke. This made me think that maybe the following approach is better:

        class ClassA {
            private Thing[] things; // stores data
            // stuff omitted
            public Thing getThing(int index) {
                return things[index];
            }
            public int getDataSize() {
                return things.length;
            }
            public void setThing(int index, Thing thing) {
                things[index] = thing;
            }
        }

    Instead of exposing the data structure itself, all of the operations offered by the data structure are now offered directly by the enclosing class, using public methods that delegate to the data structure. When the data structure changes, only these methods have to change; after they do, all client code still works. Note that collections more complex than arrays might require the enclosing class to implement even more than three methods just to access the internal data structure. Is this approach common? What do you think of it? What other downsides does it have? Is it reasonable to have the enclosing class implement at least three public methods just to delegate to the inner data structure?
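
    A common middle ground, sketched here as a suggestion rather than a prescription and reusing the question's Thing type: keep a single accessor but hand out a read-only view (or a defensive copy), so the internal representation can change without being exposed for mutation.

        import java.util.Arrays;
        import java.util.Collections;
        import java.util.List;

        class ClassA {
            private Thing[] things; // internal representation may change later

            // Clients get a read-only view; switching the field to an ArrayList later
            // only changes this method's body, not its signature, so callers keep working.
            public List<Thing> getThings() {
                return Collections.unmodifiableList(Arrays.asList(things));
            }
        }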

    Read the article

  • How do you tell if advice from a senior developer is bad?

    - by learnjourney
    Recently, I started my first job as a junior developer, and a more senior developer is in charge of mentoring me in this small company. However, there are several times when he gives me advice on things that I just can't agree with (it goes against what I learned from several good books on the topic written by experts, and questions I've asked on some Q&A sites also agree with me), and given our busy schedule, we probably have no time for long debates. So far, I have been trying to avoid the issue by listening to him and raising a counterpoint based on what I've learned as current good practice. He raises his original point again (most of the time he will say "best practice" or "more maintainable" but doesn't go any further), I take a note (since he didn't raise a new point to counter my counterpoint), think about it and research at home, but don't make any changes (I'm still not convinced). But recently he approached me yet again, saw my code, and asked me why I haven't changed it to his suggestion. This is the third time in two to three weeks. As a junior developer, I know that I should respect him, but at the same time I just can't agree with some of his advice, yet I'm being pressured to make changes that I think will make the project worse. Of course, as an inexperienced developer, I could be wrong and his way might be better; it may be one of those exceptional cases. My question is: what can I do to better judge whether a senior developer's advice is good, bad, or good but outdated in today's context? And if it is bad or outdated, what tactics can I use to avoid implementing it his way, despite his pressure, while making clear that I still respect him as a senior?

    Read the article

  • How do I configure an Intel HD Graphics 4000?

    - by derabbink
    First off, please note that last night I already posted this question to a Launchpad mailing list, so this could be considered a cross post. However, I think this is a better place to ask the same question. The question: how can I configure my Ubuntu 12.04 install, with an upgraded kernel (3.6), to use the Intel HD Graphics 4000 adapter? (Intel HD 4000 is the standard graphics adapter of 3rd-generation Intel Core i7 (Ivy Bridge) CPUs.) Some output:

        $ glxinfo
        name of display: :0
        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  154 (GLX)
          Minor opcode of failed request:  19 (X_GLXQueryServerString)
          Serial number of failed request:  12
          Current serial number in output stream:  12

        $ cat /etc/X11/xorg.conf

    This is probably the farthest from what it should be:

        Section "Screen"
            Identifier "Default Screen"
            DefaultDepth 24
        EndSection

        Section "Module"
            Load "glx"
        EndSection

        $ lspci

    I only listed the lines I think are relevant. If you want more info in order to help me, please comment :)

        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        16:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Whistler XT [AMD Radeon HD 6700M Series]
        16:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Turks HDMI Audio [Radeon HD 6000 Series]
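
    A hedged sketch of the kind of Device section that usually points X at the Intel adapter, assuming the xserver-xorg-video-intel driver is installed; with hybrid Intel/AMD graphics like this laptop's, the BIOS switchable-graphics setting also has to select the integrated adapter, so treat this as a starting point rather than a fix:

        Section "Device"
            Identifier "Intel HD Graphics 4000"
            Driver     "intel"
            BusID      "PCI:0:2:0"      # matches the 00:02.0 controller from lspci
        EndSection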

    Read the article

  • The Freemium-Premium Puzzle

    The more time I spend thinking about the value of information, the more I find that digitalizing information has significantly changed the 'information markets', potentially in an irreversible manner. The graph at the bottom outlines my current view. The existing business models tend to be the same in the digital and analogue information worlds, i.e. revenue is derived from a combination of consumers' payments and advertisement. Even monetizing 'meta-information' such as search engines isn't new. Just think of the once popular 'Who's Who'. What really changed is the price-value ratio. The curve is pushed down, closer to the axis. You pay less for the same, or often even get more for less. If you recall the capabilities I described in 'relevance of information', you will see that there are many additional features available for digital content compared to analogue content. I think this is a good 'blue ocean strategy': combining existing capabilities in a new way (Kim, W.C. & Mauborgne, R. (2005) Blue Ocean Strategy. Boston: Harvard Business School Publishing.). In addition, the different channels of digital information distribution significantly change the value of information. I will touch on this in one of my next blogs. Right now, many information providers have started to offer 'freemium' content through digital channels, hoping to get a premium for the 'full' content. Offering no freemium content seems to take them out of business, because they are apparently no longer visible in today's most relevant channels of information consumption. But the more freemium content is provided, the lower the premium gets; a truly puzzling situation. To make it worse, channel providers increasingly regard information as a value-adding and differentiating activity. Maybe new types of exclusive, strategic alliances will solve the puzzle, introducing new types of 'gate-keepers', which - to me - somehow does not match the spirit of the WWW and generation Y's perception of information consumption and exchange.

    Read the article

  • Unable to start Ubuntu 12.04. The system is running in low-graphics

    - by kaleidoscpicsoul
    I am a newbie to Ubuntu. I installed Ubuntu 12.04 using a USB stick, and it was running fine for a few weeks until this error popped up. According to one of my friends, the best fix was to re-install Ubuntu. Coming from a non-Unix background, I thought the same, and after the second install it happened again, only this time much more quickly, in 3 days. I don't want to re-install Ubuntu every time this happens. I am a complete newbie to Linux, which means I am really bad at using the terminal. I know there are other people who fixed this issue using this very same forum, but unfortunately the answers provided are too complex for me to understand. Please let me know how to do this. Things I want to let you know: I would need step-by-step help, if that is alright with you. After I get the error, I get the options screen and I click "exit to console login". I then get the following message on a black screen (which I think is a command-line sort of thing):

        * Stopping save kernel messages                                  [OK]
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName  [OK]
        * Starting web server apache2

    and a blinking cursor. So basically it looks like a dead end to my non-Unixy eye. And one final thing: before this issue happened, I had tried configuring Python to work with Apache2. For that I had uninstalled and installed the LAMP server several times and edited the configuration files too. I don't know if this really is a concern. I have a USB stick with Ubuntu 12.04 on it, so I can install it any time (but I want to know what the issue is rather than running away). I migrated to Ubuntu from Windows and I have no plans to go back. I think that's it from my side. Please let me know if there are any questions.

    Read the article

  • Help with design structure choice: Using classes or library of functions

    - by roverred
    So I have a GUI class that calls another class called ImageProcessor, which contains a bunch of functions that perform image processing algorithms like edge detection, Gaussian blur, contour finding, contour map generation, etc. The GUI passes an image to ImageProcessor, which performs one of those algorithms on it and returns the image to the GUI to display. So essentially ImageProcessor is a library of independent image processing functions right now. It is called in the GUI like so:

        Image image = ImageProcessor.EdgeDetection(oldImage);

    Some of the algorithms require many functions, and some can be done in a single function or even one line. All these functions jam-packed into ImageProcessor can get pretty messy, and ImageProcessor doesn't sound like it should be a library. So I was thinking about making every algorithm a class with a shared interface, say IAlgorithm, and then passing an IAlgorithm from the GUI to the ImageProcessor:

        public interface IAlgorithm {
            public Image Process();
        }

        public class ImageProcessor {
            public Image Process(IAlgorithm theAlgorithm) {
                return theAlgorithm.Process();
            }
        }

    Calling it in the GUI like so:

        Image image = ImageProcessor.Process(new EdgeDetection(oldImage));

    I think it makes sense from an object-oriented point of view, but the problem is I'll end up with some classes that are just one function. What do you think is a better design? Or are they both crap and you have a much better idea? Thanks!
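
    One way to keep the strategy idea without writing a class per one-line algorithm, sketched under the assumption that Java 8+ is available and that Image is the GUI's existing image type: make IAlgorithm a functional interface that takes the image as a parameter, so trivial algorithms can be lambdas while complex ones remain full classes. GaussianBlur and invertPixels below are hypothetical names used only for illustration.

        import java.awt.Image;   // or whatever image type the GUI already uses

        @FunctionalInterface
        interface IAlgorithm {
            Image process(Image input);
        }

        class ImageProcessor {
            public Image process(Image input, IAlgorithm algorithm) {
                return algorithm.process(input);
            }
        }

        // Usage sketch: complex algorithms get their own class, trivial ones stay inline.
        // Image blurred  = new ImageProcessor().process(oldImage, new GaussianBlur(radius));
        // Image inverted = new ImageProcessor().process(oldImage, img -> invertPixels(img));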

    Read the article

  • Can't install Ubuntu in Windows 8

    - by user171635
    I've been trying to install Ubuntu 13.04 64-bit edition on an ASUS (K53Z) laptop. I have Windows 8 64-bit installed in non-UEFI mode (I think, since it starts up with the Windows logo and I don't have the UEFI settings). This laptop originally had Windows 7 installed, and when I upgraded it I didn't know about the UEFI advantages. I have tried several times to install Ubuntu from a USB device: it loads the logo and then I can't go any further in the installation. I thought it was the version of Ubuntu, so I tried to install Fedora (even though I personally prefer Ubuntu). I had the same problem: Fedora's logo appears and it gets stuck. I also tried to boot from different USB devices, and that didn't work either. My BIOS has EFI boot options, but they were not enabled, so I tried enabling them to boot the USB in UEFI mode. A menu shows up with the options to install Ubuntu or try Ubuntu. If I click the install or try option, I get a black screen and I can't go further with the install (which I think is normal, since I don't have Windows 8 in EFI mode). My hypothesis is that the BIOS isn't letting Ubuntu write to or read from my SSD, because the activity LED on the USB memory is on while it's loading the installation files, but once the files are ready and the Ubuntu logo is loaded I don't see LED activity on either the SSD or the USB. Thanks. If I missed any data you can ask me.

    Read the article

  • SSD and HDD have Windows 7 recovery partitions. Can I delete one to make room for Ubuntu?

    - by Brian Ecker
    I'm trying to install Ubuntu right now, and I've run into a problem. I have Windows 7 installed on my SSD, and I want to install Ubuntu on my HDD, but I already have three partitions on my HDD. The partitions are two recovery partitions and one data partition. What I don't understand is why my data drive (the HDD) has recovery partitions for Windows 7. The same recovery partitions (or at least I think they are the same: same sizes, same names, same order) are on the SSD with the Windows 7 install. Can I safely delete the recovery partitions on the HDD? My other option, I think, is to put the boot partition for Ubuntu on the SSD, where I only have three partitions. Then I can put the other three logical partitions for Ubuntu in an extended partition on the HDD. Can I do that: put the boot partition on one drive and the other partitions on another? Here is a picture of the partitions; I have circled the one I would like to delete to make room: http://imgur.com/XOpJQ

    Read the article

  • What's the best way to use requestAnimationFrame and fixed frame rates

    - by m90
    I recently got into using the HTML5 requestAnimationFrame API a lot on animation-heavy websites, especially after seeing the Jank Busters talk. This seems to work pretty well and really improves performance in many cases. Yet one question still persists for me: when you want to use an animation that is NOT entirely calculated (think spritesheets, for example), you have to aim for a fixed frame rate. Of course one could go back to using setInterval again, but maybe there are other ways to tackle this. The two ways I could think of using requestAnimationFrame with a fixed frame rate are:

        var fps = 25; //frames per second
        function animate(){
            //actual drawing goes here
            setTimeout(function(){
                requestAnimationFrame(animate);
            }, 1000 / fps);
        }
        animate();

    or

        var fps = 25; //frames per second
        var lastExecution = new Date().getTime();
        function animate(){
            var now = new Date().getTime();
            if ((now - lastExecution) > (1000 / fps)){
                //do actual drawing
                lastExecution = new Date().getTime();
            }
            requestAnimationFrame(animate);
        }
        animate();

    Personally, I'd opt for the second option (the first one feels like cheating), yet it seems to be buggier in certain situations. Is this approach really worth it (especially at low frame rates like 12.5)? Are there things to be improved? Is there another way to tackle this?
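
    A third variant that often comes up in this discussion, sketched here as a suggestion rather than the definitive answer: keep requestAnimationFrame driving the loop, but accumulate elapsed time and only advance the sprite when a full frame interval has passed, carrying the remainder forward so the effective rate stays close to the target even when individual callbacks arrive late.

        var fps = 25;                       // target frames per second
        var frameDuration = 1000 / fps;     // length of one sprite frame in ms
        var accumulator = 0;
        var lastTime = null;

        function animate(timestamp) {       // requestAnimationFrame passes a timestamp
            if (lastTime !== null) {
                accumulator += timestamp - lastTime;
                while (accumulator >= frameDuration) {
                    accumulator -= frameDuration;   // carry the remainder forward
                    // advance to the next spritesheet frame / do actual drawing here
                }
            }
            lastTime = timestamp;
            requestAnimationFrame(animate);
        }
        requestAnimationFrame(animate);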

    Read the article

  • Reliance on Outlook (been a looong time, I know)

    - by AndyScott
    Do you feel that your development group is too reliant on Outlook? Have you reached the point where you have to search your email for pertinent information when asked? What are you using? I realized things had gotten out of hand a couple of weeks ago, over a weekend. I was at my in-laws' house (in the country: no PC or laptop, no internet connection), and I got an email on my phone that I needed to reply to, but I couldn't send the reply without deleting items from my inbox/sent items/etc. Now mind you, I have rules set up to move things into folders, and items more than a month old are automatically moved to the PST, but I generally don't manually move items to a PST until I have had a chance to 'work' the item. Please don't bother mocking my process; it's just the way I work. That being said, it was a frustrating exercise of 'I need all this information, what can I afford to lose?'. I work on an international project (think lots of customers), and conversations in 9 or 10 different directions about 10-20 different things are not abnormal for a given day. I have found myself looking data up in Outlook because that's where it is. I think I have reached the point now where I don't feel that Outlook is up to the task of organizing the data that it contains. When you have that many emails (200 or so a day), information seems to get lost at times, and I find that Outlook's search capabilities are lacking. Additionally, I find that any sort of organizational 'system' for sorting emails that can cover multiple topics is a lost cause. But at the same time, the old process of taking the information from emails and moving it into another 'notes' type of program has proved too time consuming. Does anyone out there have a better type of system? (Comments about the capacity of my brain and its ability to recall information are not needed.)

    Read the article

  • How to build an API on top of an existing Rails app with NodeJs and what architecture to use?

    - by javiayala
    The explanation: I was recently hired by a company that has an old RoR 2.3 application with more than 100k users, a strong SEO strategy with more than 170k indexed URLs, native Android and iOS applications, and other custom-made mobile and web applications that rely on a not-so-good API from the same RoR app. They recently merged with a company from another country as a strategy to grow the business and the profit. They have almost the same stats, a similar strategy, and mobile apps. We have just decided that we need to merge the data from both companies and to start a new app from scratch, since the RoR app is too old and heavily patched and the app from the other company was built with a custom PHP framework without any documentation. The only good news is that both databases are in MySQL and have a similar structure.

    The challenge: I need to build a new version that can handle a lot of traffic, preserves the SEO strategies of both companies, serves 2 different domains, and has a strong API that can support the legacy mobile apps from both companies and be ready for a new set of native apps. I want to use RoR 3.2 for the main web apps and Node.js with a RESTful API. I know that I need to be very careful with the mobile apps and handle multiple versions of the API. I also think I need to create a service that can handle a lot of IO requests, since the app is heavily used to create orders for restaurants at a certain time of the day.

    The questions: With all this in mind: What type of architecture do you recommend I follow? What gems or Node packages do you think will work best? How do I build a new Rails app and keep using the same database structure? Should I use Node.js to build an API, or just build a new service with Ruby? I know that I'm asking a lot from you guys, but please help me by answering any topic that you can or by pointing me in the right direction. All your comments and feedback will be extremely appreciated! Thanks!

    Read the article

  • Aggressive Auto-Updating?

    - by MattiasK
    What do you guys think is best practice regarding auto-updating? Google Chrome, for instance, seems to auto-update itself as soon as it gets a chance, without asking, and I'm fine with that. I think most "normal" users benefit from updates being a transparent process. Then again, some more technical users might be miffed if you update their app without permission. As I see it there are 3 options: 1) have a checkbox when installing that says "allow automatic updates", or 2) just have a preference somewhere that allows you to "disable automatic updates" so that you have to check for updates manually. I'm leaning towards 2), because 1) feels like it might alienate non-technical users and I'd rather avoid installation queries if possible. Also, I'm thinking about making it easy to downgrade if an upgrade (heaven forbid) causes trouble; what are your thoughts? Another question: even if updates are applied automatically, perhaps they should be announced, if there are new features for example, since otherwise you might not realize they exist and use them. One thing that kind of scares me, though, is the security implications: someone could theoretically hack my server and push out spyware/zombieware to all my customers. It seems that using digital signatures to prevent man-in-the-middle attacks is the least you could do, since otherwise you might be hooked up to a network that spoofs the address of the update server.

    Read the article

  • Should I worry about a company's reputation before joining it?

    - by Shekhar_Pro
    My question is particularly in the context of a question asked recently on this site. I am a fresh graduate and will soon be applying for jobs, but the above-mentioned question has raised concerns for me. Should I care about the reputation of the company I am going to join? Especially since, for me, Yahoo is a very reputable corporation and I duly respect it; in fact, I would be proud to work for it if given the chance. And the OP of that question was a Senior Software Engineer; I don't think something could have gone so wrong that he feels ashamed to have worked there. Well, I wouldn't like to have this situation in my career. Until now, my criteria for approaching a company were things like: it should have a good, knowledgeable, and experienced team, and it should provide me with tasks that will always challenge me to think and learn more, etc. After all, a task is a task and you always learn something from it. But in view of the mentioned question, should I also consider other things? Please give your answers and, most importantly, an opinion from your own experience.

    Read the article

  • Value my C++ knowledge

    - by PirateOwh
    I have only followed antiRTFM's tutorials and read 2 books. So I'll list the things I know best:

        - basic input/output and all the variable types: integers (signed and unsigned), float, double, char
        - arrays
        - if, for, while, switch
        - functions, passing variables to functions, and return types
        - classes and the concept of OOP, with separating declaration and definition between the header and the source
        - pointers

    This, and a little more, I think is all I know of C++. But I need some exercises to test my knowledge, because I want to move on to the SDL library and I don't know whether I should feel ready to move on to something totally different. I feel I should at least know the basics well. So the questions are: How can I evaluate my C++ knowledge? Are there any online tests? Is there any free GDD (Game Design Document) I could use to see whether I can manage to implement it, so "I'd pass"? (I mention a GDD since I'll move on to SDL and try to make my own game.) When should I move to SDL? What are ALL the things I should "master" (master is a big word, but you understand what I mean) before moving on? I'm really in need of expert advice. I think my question is detailed, so I hope you understand what I mean and can give me a good reply. Thanks for the help!

    Read the article

  • Is this bad practice?

    - by gekod
    I just wanted to ask for your opinion on a situation that occurs sometimes and for which I don't know the most elegant solution. Here it goes: we have module A, which reads an entry from a database and sends a request to module B containing ONLY the information from the entry that module B needs to accomplish its job (to keep things modular I give it only the information it needs; module B has nothing to do with the rest of the information from the DB entry). Now, after finishing its job, module B has to report to a module C whether it succeeded or failed. To do this, module B replies with the information it got from module A plus a variable meaning success or failure. Now here comes the problem: module C needs to find that entry again, BUT the information it got from module B is not enough to uniquely identify the exact same entry. I don't think that module A giving module B extra information it doesn't need for its job, just so it can pass it back to module C, would be good practice, because this would mean giving a module information it doesn't really need. What do you think?

    Read the article

  • Which are the best ways to organize view hierarchies in GUI interfaces?

    - by none
    I'm currently trying to figure out the best techniques for organizing GUI view hierarchies, that is, dividing a window into several panels which are in turn divided into other components. I've looked at the Composite design pattern, but I don't know whether there are better alternatives, so I'd appreciate knowing whether using Composite is a good idea or whether it would be better to look for other techniques. I'm currently developing in Java Swing, but I don't think the framework or the language has a great impact on this. Any help will be appreciated.

    EDIT: I am currently developing a frame containing three labels, one button, and a text field. When the button is pressed, the content of the text field is searched and the results are written into the three labels. One of my typical structures would be the following:

        MainWindow
        |  Main panel
           |  Panel with text field and labels
           |  Panel with search button

    Now, as the title explains, I was looking for a suitable way of organizing both the main panel and the other two panels. But here the problems came, since I'm not sure whether to organize them as attributes or to store them in some data structure (i.e. a LinkedList or something like that). Anyway, I don't really think either of my solutions is very good, so I'm wondering whether there are better approaches to this kind of problem. Hope this helps.
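
    A minimal Swing sketch of the hierarchy described above, with class and method names invented for illustration: each sub-panel is built by its own small method (or its own class for bigger panels) and the main panel simply composes them, so no extra data structure is needed for a fixed layout like this.

        import javax.swing.*;
        import java.awt.BorderLayout;

        class MainWindow extends JFrame {
            private final JTextField searchField = new JTextField(20);
            private final JLabel[] resultLabels = { new JLabel(" "), new JLabel(" "), new JLabel(" ") };

            MainWindow() {
                super("Search");
                JPanel mainPanel = new JPanel(new BorderLayout());
                mainPanel.add(buildFieldAndLabelsPanel(), BorderLayout.CENTER);
                mainPanel.add(buildButtonPanel(), BorderLayout.SOUTH);
                setContentPane(mainPanel);
                pack();
            }

            private JPanel buildFieldAndLabelsPanel() {
                JPanel panel = new JPanel();
                panel.add(searchField);
                for (JLabel label : resultLabels) panel.add(label);
                return panel;
            }

            private JPanel buildButtonPanel() {
                JButton search = new JButton("Search");
                search.addActionListener(e -> {
                    // hypothetical: run the search and write results into the labels
                    resultLabels[0].setText("result for " + searchField.getText());
                });
                JPanel panel = new JPanel();
                panel.add(search);
                return panel;
            }
        }

    For a fixed, known set of panels, plain fields or builder methods like these are usually clearer than a generic list; a list or a full Composite only starts paying off when panels are added and removed dynamically.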

    Read the article

  • How do I convert mouse co-ordinates in Slick2D (Java)?

    - by Trycon
    I'm really new to Java and I really want to know how to convert mouse clicks to in-game co-ordinates. My game moves its images so that the camera stays with the character. I followed thenewboston's tutorials and have been modifying the code for smoother gameplay. I have been searching the web for tutorials; this is one of the snippets I found:

        PosGameX = MouseX + 0;
        PosGameY = MouseY + 0;

    I have not tried this code, but I really think it would not work. The website I visited is, I think, not good for coding. My gameplay is that when the mouse clicks on a position, the game should take the mouse co-ordinates and convert them to game co-ordinates. So I really want to know: how do I convert my mouse clicks to game co-ordinates? FOR MORE INFO, I searched for: "How do I translate game co-ordinates?" and "How do I translate mouse to game co-ordinates?". AND PLEASE do not give me heavy algebra; I have really forgotten it.
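
    A sketch of the usual conversion, with the camera variable names invented for illustration: if the game draws everything shifted by a camera offset so the character stays centred, then a mouse/screen position is turned back into a world position by adding that same offset back.

        // Hypothetical helper: whatever camera offset you subtract when drawing,
        // add it back here to turn a mouse/screen position into a world position.
        class Camera {
            float cameraX, cameraY;   // how far the view has scrolled

            float worldX(int mouseX) { return mouseX + cameraX; }
            float worldY(int mouseY) { return mouseY + cameraY; }
        }

        // In Slick2D the mouse position typically comes from the Input object,
        // e.g. input.getMouseX() / input.getMouseY() inside update(), so:
        // float targetX = camera.worldX(input.getMouseX());
        // float targetY = camera.worldY(input.getMouseY());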

    Read the article

  • How come there is still so much programming work?

    - by jd_505
    First, I'd like to say that I am not sure this question meets the guidelines. I think it can go under the "Freelancing and business concerns" bullet, but I am not sure. Anyway, I will give it a shot. I wonder why programming jobs haven't yet "dried up" as software has evolved. For example, I am a developer myself, which means that I do care about software (I mean I am not the type of person who needs a computer mainly just to browse the Internet), and still I wouldn't mind if I never received any more updates on my Ubuntu machine. I find that it provides everything I need, and while the updates provide various bug fixes and improvements, I wouldn't mind using it in its current state for the rest of my life; in 2 years of Ubuntu usage I have never bumped into a serious bug or problem. Another example is Windows: almost half of its users still use XP, which is practically ancient, yet they find it satisfies all their needs (and I agree with them). I could give many more examples, but by now you understand my point and my question. While new "trends" appear all the time (like a new mobile OS) which run on new platforms and require some fresh development work, the majority of the software effort still goes into what I consider "completed projects", or at least projects in a state that is enough to be considered complete. Do you have an explanation? I can't think of the right tags for this question; please edit it the way you find most appropriate. Thanks.

    Read the article

  • Windows Phone 7 v. Windows 8 Metro "Same but Different"

    - by ryanabr
    I have been doing development on both Windows Phone 7 and Windows 8 Metro-style applications over the past month and have really been enjoying both. What is great is that Silverlight is used on both development platforms. What is frustrating is the "Same but Different" nature of the two platforms: many similar services and ways of doing things are available on both, but the objects, namespaces, and ways of handling certain cases are different. I almost had a heart attack when I thought that XmlDocument had been removed from the new WinRT. I was relieved (but a little annoyed) when I found out that it had shifted from the "System.Xml" namespace to the "Windows.Data.Xml.Dom" namespace. In my opinion this is worse than deprecating and reintroducing it, since there isn't the lead time to know that the change is coming, make changes, and adjust. I also think this breaks the compatibility that is advertised between WinRT and the .NET Framework from a programming perspective, as the code base will have to be physically different when compiled for one platform versus the other. Which brings up another issue: the need for separate DLLs for the different platforms that contain the same C# code behind them, which seems like the beginning of a code maintenance headache. Historically, I have kept source files "co-located" with the projects they are compiled into. After doing some research, I think I will end up keeping "common" files that need to be compiled into DLLs for the different platforms in a separate location in TFS, not directly included in any one Visual Studio project, but added as links in the projects that get compiled for Windows Phone 7 or Windows 8. This will work fine except for the cases where dependencies don't line up for each platform, as described above, but it will work fine for base classes that do the raw work at the most basic programming level.
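
    One common way to keep a single linked source file compiling on both platforms, sketched here as an assumption about this kind of setup rather than something the post prescribes, and assuming both target frameworks expose an XmlDocument type as the post implies: bridge the namespace difference with conditional compilation, so the shared file stays identical while each project supplies its own symbol (NETFX_CORE is defined by Windows 8 / WinRT C# projects).

        // Shared file added "as link" to both projects.
        #if NETFX_CORE
        using Windows.Data.Xml.Dom;       // WinRT home of XmlDocument
        #else
        using System.Xml;                 // full .NET Framework home of XmlDocument
        #endif

        namespace Shared
        {
            public static class XmlHelper
            {
                public static XmlDocument NewDocument()
                {
                    // The same source text compiles against either XmlDocument type;
                    // only members that exist on both platforms may be used here.
                    return new XmlDocument();
                }
            }
        }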

    Read the article

  • How many tasks to plan beforehand [closed]

    - by no__seriously
    As for my daily routine: every morning when I come to work, I look at the items in my todo-list inbox (noted down the previous day). For each task I think about which day I should start it on, and then I group them accordingly. Once that's finished, I get started with my actual schedule for the day. Now, this pre-planning for each task (which could concern anything from user interface work to compiler programming) is mostly pretty sketchy. Serious thought about design and implementation comes when the task is about to be tackled. This approach works for me and I can't really complain. But I'm wondering: since I'm personally most productive during the morning, would it make sense to go into a deeper level of planning for each task right away? Or is that unproductive and more likely to confuse than clarify? I think the latter. How do you handle your task management for each task or project, and how far do you go with planning before even getting started on an item?

    Read the article
