Search Results


  • How can we unify business goals and technical goals?

    - by BAM
    Some background: I work at a small startup: 4 devs, 1 designer, and 2 non-technical co-founders, one who provides funding and the other who handles day-to-day management and sales. Our company produces mobile apps for target industries, and we've gotten a lot of lucky breaks lately. The outlook is good, and we're confident we can make this thing work.

    One reason is our product development team. Everyone on the team is passionate, driven, and has a great sense of what makes an awesome product. As a result, we've built some beautiful applications that we're all proud of. The other reason is the co-founders. Both have a brilliant business sense (one actually founded a multi-million dollar company already), and they have close ties in many of the industries we're trying to penetrate. Consequently, they've brought in some great business and continue to keep jobs in the pipeline.

    The problem: what we can't seem to shake is how to bring these two awesome advantages together. On the business side, there is huge pressure to deliver as fast as possible and as much as possible, whereas on the development side there is pressure to take your time, come up with the right solution, and pay attention to all the details. Lately these two sides have been butting heads a lot: developers are demanding quality while managers are demanding quantity. How can we handle this? Both sides are correct. We can't survive as a company if we build terrible applications, but we also can't survive if we don't sell enough. So how should we go about making compromises?

    Things we've done with little or no success:

    - Work more (well, it did result in better quality and faster delivery, but the dev team has never been more stressed out).
    - Charge more (as a startup, we don't yet have the credibility to justify higher prices, so no one is willing to pay).
    - Extend deadlines (if we charge the same but take longer, we'll end up losing money).

    Things we've done with some success:

    - Sacrifice pay to cut costs (everyone, from devs to management, is paid less than they could be making elsewhere; in return, we all have creative input and more flexibility and freedom - a typical startup trade-off).
    - Standardize project management (we recently started adhering to agile/scrum principles so we can base deadlines on actual velocity, not just arbitrary guesses).
    - Hire more people (we used to have 2 developers and no designers, which really limited our bandwidth; however, as a startup we can only afford to hire a few extra people).

    Is there anything we're missing or doing wrong? How is this handled at successful companies? Thanks in advance for any feedback :)

  • ATI Radeon HD7000 Series (Laptop) - Switch Mode Between ATI & Intel Integrated GPU. Stuck on Boot Screen On Intel GPU Selection Mode

    - by Monkey Drone
    Laptop specs: HP Pavilion G6-2020SE.
    GPUs: (1) ATI HD7000 series; (2) Intel integrated.
    OS installed: Ubuntu 12.04 (64-bit), with the ATI graphics drivers installed from the AMD website.

    Note: the graphics drivers work fine in 3D mode. The machine runs a little hot, as a working discrete GPU should.

    Observation: AMD Catalyst Control Centre lets me choose whether to run the system on the high-end ATI GPU or on the integrated Intel GPU (better battery life). While the high-end GPU is selected, Ubuntu works fine.

    Problem: when I switch to Intel mode in the AMD CCC and reboot the machine, Ubuntu goes into 'low graphics mode'. The problem is not that it goes into low graphics mode - that is completely expected, since I am no longer using the ATI GPU but the integrated Intel one. The problem starts with the selection of the options: during that screen I have no mouse pointer (I even tried plugging in an external USB mouse) and no keyboard functionality. I am thus completely unable to choose any option and load into Ubuntu. The only thing I can do is switch to a terminal and re-enable the ATI GPU through the command line, after which Ubuntu works fine again.

    Is it a bug that no mouse/keyboard is available during the startup of Ubuntu when it launches in low-graphics mode? Any suggestions on how to get past that? My palms are sweating as I write this because the ATI GPU is really heating up my laptop. I don't want to boot into Windows or keep it around any longer than necessary. Please advise with help and directions. Sincerely, MonkeyD

    Edit 1: The answer by Celso has helped me switch to Intel, giving me sufficient battery life - kudos to Celso. Now I can at least use my laptop without it burning the hair off my skin. I am still looking for an answer to my original question: why is lightdm not working properly when I switch to the Intel GPU using the official ATI HD7000 series drivers provided by AMD?
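
    For reference, the command-line switching I do looks roughly like this (aticonfig's PowerXpress flags may differ across fglrx driver versions, so treat this as a sketch rather than gospel):

        sudo aticonfig --px-dgpu      # select the discrete (high-end) ATI GPU
        sudo service lightdm restart  # restart the display manager to apply it

    with `sudo aticonfig --px-igpu` presumably being the integrated-Intel counterpart.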

  • Play Framework Plugin for NetBeans IDE (Part 2)

    - by Geertjan
    After I published part 1 of this series, the first external contribution (i.e., not by me) to the NetBeans plugin for Play Framework 2 was committed today: Yann D'Isanto added support for creating new Play projects.

    That completely solves a problem I had been working on in a different way altogether. I had been creating a new wizard that would call "play new" on the command line and pass in the entered name and application type (1 for Java and 2 for Scala). However, Yann's solution is better, at least in the sense that it works, as opposed to mine, which didn't, because of problems I continually had with the command line: one needs to press Enter multiple times on the Play command line when creating new projects, which I wasn't able to simulate in my wizard. Yann's approach is simply to follow the approach taken in the Project Type Module Tutorial, which explains how to register a project sample in the IDE.

    I was inspired by Yann's contribution, especially when he mentioned that one needs to build Play projects on the command line. So I added a new menu item on the right-click of a project for building Play projects, which simply passes "play compile" to the command line for the current project. Via the IDE's main menu bar, you can also Build and Run the application, though the code for the Clean function still needs to be added, which would be a cool thing for anyone out there to contribute, by using all the existing code and then passing "play clean compile" to the command line.

    Something else Yann added is an Options window extension, thanks to the Options Window Module Tutorial, for registering the Play installation, which is a step forward from my hard-coded solution. I changed things slightly so that, when Build or Run is selected without a Play installation being defined, the Options window opens, displaying the tab that Yann created. Notice that there's no Browse button yet, which would be a simple next step for anyone else to contribute; a small tip is to use the FileChooserBuilder from the NetBeans IDE APIs when working on it.

    Looking forward to more contributions to the Play Framework 2 plugin for NetBeans IDE. Just leave a message here with your ideas and your java.net name, and then I'll add you to the project on java.net, where I very much look forward to your contributions: http://java.net/projects/nbplay/sources/nbplay
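
    By the way, here is a minimal sketch of what that menu action boils down to - shelling out to the Play script with a standard java.lang.ProcessBuilder. This is not the plugin's actual code; projectDir is a stand-in for the NetBeans project folder:

        import java.io.BufferedReader;
        import java.io.File;
        import java.io.InputStreamReader;

        public class PlayCompile {
            public static void main(String[] args) throws Exception {
                File projectDir = new File(args[0]);     // stand-in for the project folder
                ProcessBuilder pb = new ProcessBuilder("play", "compile");
                pb.directory(projectDir);                // run inside the Play project
                pb.redirectErrorStream(true);            // merge stderr into stdout
                Process p = pb.start();
                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        System.out.println(line);        // the plugin would route this to the Output window
                    }
                }
                System.exit(p.waitFor());
            }
        }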

  • Can't access some websites using Ubuntu 13.10

    - by Adame Doe
    Something's wrong with Ubuntu. Since I've upgraded to 13.10, I can't access some websites for no apparent reason. I've tried everything imaginable to solve this problem:

    - Made sure that MTUs are the same,
    - Disabled IPv6 in both the network manager and the browsers I use,
    - Deactivated my network keys,
    - DMZed my computer,
    - Used other DNS servers like Google and OpenDNS,
    - Checked that no firewall was running on my computer...

    And it's the same result. I even tried to reinstall Ubuntu a couple of times, but no luck. The most annoying thing about it is I can't access wordpress.org! So there's no way it could be an ISP restriction of some kind. When I use a VPN, I can access pretty much anything. I'm really frustrated because I have to use wordpress.org very often. Any clue?

    ifconfig:

        eth0   Link encap:Ethernet  HWaddr 00:26:18:3d:b0:7c
               inet addr:10.42.0.1  Bcast:10.42.0.255  Mask:255.255.255.0
               inet6 addr: fe80::226:18ff:fe3d:b07c/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:8024 errors:0 dropped:0 overruns:0 frame:0
               TX packets:7966 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:684480 (684.4 KB)  TX bytes:616608 (616.6 KB)

        lo     Link encap:Local Loopback
               inet addr:127.0.0.1  Mask:255.0.0.0
               inet6 addr: ::1/128 Scope:Host
               UP LOOPBACK RUNNING  MTU:65536  Metric:1
               RX packets:8222 errors:0 dropped:0 overruns:0 frame:0
               TX packets:8222 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:0
               RX bytes:568269 (568.2 KB)  TX bytes:568269 (568.2 KB)

        wlan0  Link encap:Ethernet  HWaddr 00:19:70:40:85:eb
               inet addr:192.168.2.3  Bcast:192.168.2.255  Mask:255.255.255.0
               inet6 addr: fe80::219:70ff:fe40:85eb/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1464  Metric:1
               RX packets:123705 errors:0 dropped:0 overruns:0 frame:0
               TX packets:98141 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:94963545 (94.9 MB)  TX bytes:10387470 (10.3 MB)

    /etc/hosts:

        127.0.0.1  localhost
        127.0.1.1  adame-ws
        ::1        ip6-localhost ip6-loopback
        fe00::0    ip6-localnet
        ff00::0    ip6-mcastprefix
        ff02::1    ip6-allnodes
        ff02::2    ip6-allrouters

    tracepath wordpress.org:

        1: adame-ws.local  0.092ms pmtu 1500
        1: 192.168.2.1     1.300ms asymm 2
        1: 192.168.2.1     1.060ms asymm 2
        2: no reply
        3: no reply
        4: no reply
        5: no reply
        6: no reply
        7: no reply
        8: no reply
        ... keeps on going like that

    ping wordpress.org:

        PING wordpress.org (66.155.40.250) 56(84) bytes of data.
        --- wordpress.org ping statistics ---
        10 packets transmitted, 0 received, 100% packet loss, time 9071ms
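
    One detail that stands out in the output above: wlan0 is at MTU 1464, while tracepath reports a path MTU of 1500. Whether or not that is the cause, aligning them is a quick, reversible test using standard iproute2 commands:

        ip link show wlan0                   # check the current MTU
        sudo ip link set dev wlan0 mtu 1500  # temporary; reverts on reboot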

  • Silverlight and WCF caching

    - by subodhnpushpak
    There are scenarios where a Silverlight client calls a WCF (or REST) service for data. If the data is not cached at the WCF layer, those calls can consume considerable resources on the server. Keeping that in mind, along with the fact that caching is a cross-cutting aspect, it should be as easy as possible to add a cache wherever one is required.

    The good thing about the solution below is that it caches based on the inputs. The input can be a basic type or any complex type. If the input changes, the data is fetched and then cached for further use. If the same input is provided again, the data is fetched from the cache. The cache logic itself is implemented as a PostSharp aspect, and it is as easy as putting an attribute on a service operation to switch the cache on. Notice how clean the code is:

        [OperationContract]
        [CacheOnArgs(typeof(int))] // cache based on the actual argument value
        public string DoWork(int value)
        {
            return string.Format("You entered: {0} @ cached time {1}", value, System.DateTime.Now);
        }

    The cache is implemented as a PostSharp aspect as below:

        public override void OnInvocation(MethodInvocationEventArgs eventArgs)
        {
            try
            {
                object[] args = eventArgs.GetArgumentArray();
                if (args != null && args.Length > 0)
                {
                    // Compute the cache key from the method name plus the serialized first argument.
                    string key = string.Format("{0}_{1}", eventArgs.Method.Name,
                        XMLUtility<object>.GetDataContractXml(args[0], null));

                    object value = GetFromCache(key);
                    if (value == null)
                    {
                        eventArgs.Proceed();
                        value = eventArgs.ReturnValue;
                        AddToCache(key, value);
                        return;
                    }

                    Log(string.Format("Data returned from Cache {0}", value));
                    eventArgs.ReturnValue = value;
                }
            }
            catch (Exception ex)
            {
                //ApplicationLogger.LogException(ex.Message, Source.UtilityService);
            }
        }

        private object GetFromCache(string inputKey)
        {
            if (ServerConfig.CachingEnabled)
            {
                return WCFCache.Current[inputKey];
            }
            return null;
        }

        private void AddToCache(string inputKey, object outputValue)
        {
            if (ServerConfig.CachingEnabled
                && WCFCache.Current.CachedItemsNumber < ServerConfig.NumberOfCachedItems)
            {
                if (ServerConfig.SlidingExpirationTime <= 0 || ServerConfig.SlidingExpirationTime == int.MaxValue)
                {
                    WCFCache.Current[inputKey] = outputValue;
                }
                else
                {
                    WCFCache.Current.Insert(inputKey, outputValue,
                        new TimeSpan(0, 0, ServerConfig.SlidingExpirationTime), true);
                }
            }
        }

    The cache class can be extended to support Velocity / memcached / NCache, and the attribute can be used on REST services as well. Hope the above helps. Here is the code base for the same.

    Please do provide your inputs / comments.

  • High-Powered Sites for Low Cost

    - by HighAltitudeCoder
    Ahh, I am experiencing the intimidation of my very first post - visible to the whole world. OK, here goes.

    This first post is nothing exceptional. It is simply a recommendation based (fittingly, I suppose) upon the job search you may be gearing up for. I find myself in this very situation right now, and I will take my own recommendation after posting this entry.

    Job-seekers: to the left you will notice two links under "Recommended Learning". I have found these links to be invaluable when it comes to re-tooling, re-familiarizing, or otherwise re-sharpening my skills when looking for that next job.

    Often you will find job postings with the text, usually placed after a laborious list of qualifications indicating the company's desire to hire candidates who know what they are doing: "...looking for a candidate who can hit the ground running...". The interesting thing to me is that I've encountered many individuals who, after speaking and working with them for some time, I've realized are perfectly capable of hitting the ground running - and FAST. But what if they speed off in the wrong direction?

    The next time you spearhead a major task in your job, ask yourself: am I headed in the wrong direction? There are many ways to go wrong. In fact, I've found in this new field that there are more tempting ways to steer your project in the wrong direction than there are good ones. I don't want to suggest that every one of my posts will fall into the "right direction" category; however, I do think a healthy dose of introspection on the pros and cons will always be beneficial before you set off.

    That said, allow me to expound on the previously mentioned links. These web sites are invaluable. They demonstrate the capabilities of existing as well as new and upcoming tools available in several IDEs. I've viewed many tutorials on LearnVisualStudio.NET, and only one or two so far on TrainingSpot, but I've been delighted by their simplicity and straightforward approach to proper usage of the particular tool or concept being discussed. They have not (so far in my experience) demonstrated ways of using the tools that become cumbersome, impractical, or error-prone.

    Each website has step-by-step videos that can be paused and replayed, and, most importantly, they are done in real time. As the author is typing, the viewer gets to experience the coding session from a first-person perspective, including syntax errors, unexpected behaviors, IDE setup idiosyncrasies, everything. A subtle value I've gained from these videos is that a certain degree of confusion and introspection is normal when working with new tools and exploring new paths. They (as well as your own experience) are not to be feared, but enjoyed. I highly recommend them. Good work, guys!

  • PHP - Internal APIs/Libraries - What makes sense?

    - by Mark Locker
    I've been having a discussion lately with some colleagues about the best way to approach a new project, and thought it'd be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up.

    Requirements: The platform will need to support multiple internal websites, as well as external (non-PHP) projects, which at the moment consist of a mobile app and a toolbar. We have no plans/need in the foreseeable future to open up an API externally (for use in products other than our own).

    My opinion: We should have a library of well-documented native model classes which can be shared between projects. These models will represent everything in our database and can take advantage of object-oriented features such as inheritance, traits, magic methods, etc., as well as employing an ORM. We can then add an API layer on top of these models which can basically accept requests and route them to the appropriate methods, translating the response so that it can be used platform-independently. This routing for each method can be set up as and when it's required.

    Their opinion: We should have a single HTTP API which is used by all projects (internal PHP ones or otherwise).

    My thoughts: To me, there are a number of issues with using the sole HTTP API approach:

    - It will be very expensive performance-wise. One page request will result in several additional HTTP requests (which, although local, are still ones that Apache will need to handle).
    - You'll lose all of the best features PHP has for OO development, from simple inheritance to employing the likes of an ORM, which can save you writing a lot of code.
    - For internal projects, the actual process makes me cringe. To get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON-encodes the output, and feeds that back. That would then need to be JSON-decoded and presented as an array ready to use. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework.

    Their thoughts (and my responses):

    - Having one way of doing things keeps it simple. (You'd only do things differently if you were using a different language anyway.)
    - It will become robust. (Seeing as the API will run off the library of models, I think my option would be just as robust.)

    What do you think? I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience.
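
    To make the first option concrete, here is a minimal sketch of what one of those shared model classes might look like (the class, table, and column names are made up for illustration):

        <?php
        // Shared native model: internal PHP projects use it directly,
        // and a thin API layer can expose the same methods to non-PHP clients.
        class User
        {
            private $db;

            public function __construct(PDO $db)
            {
                $this->db = $db;
            }

            // Fetch a user's name straight from the database - no HTTP hop,
            // no JSON encode/decode round trip.
            public function getName($id)
            {
                $stmt = $this->db->prepare('SELECT name FROM users WHERE id = ?');
                $stmt->execute(array($id));
                return $stmt->fetchColumn();
            }
        }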

  • Best practice for organizing/storing character/monster data in an RPG?

    - by eclecto
    Synopsis: I'm attempting to build a cross-platform RPG app in Adobe Flash Builder and am trying to figure out the best class hierarchy and the best way to store the static data used to build each of the individual "hero" and "monster" types. My programming experience, particularly in AS3, is embarrassingly small.

    My ultra-alpha method is to include a "_class" object in the constructor for each instance. The _class, in turn, is a static Object pulled from a class created specifically for that purpose, so things look something like this:

        // Character.as
        package {
            import flash.display.Sprite;

            public class Character extends Sprite {
                public var _strength:int;
                // etc.

                public function Character(_class:Object) {
                    _strength = _class._strength;
                    // etc.
                }
            }
        }

        // MonsterClasses.as
        package {
            public final class MonsterClasses extends Object {
                public static const Monster1:Object = {
                    _strength: 50
                    // etc.
                };
                // etc.
            }
        }

        // In some other class in which characters/monsters are created:
        var myMonster = new Character(MonsterClasses.Monster1);

    Another option I've toyed with is the idea of making each character class/monster type its own subclass of Character, but I'm not sure it would be efficient, or even make sense, considering that these classes would only be used to store variables and would add no new methods. On the other hand, it would make creating instances as simple as:

        var myMonster = new Monster1();

    and potentially cut down on the overhead of having to read a class containing the data for, at a conservative preliminary estimate, over 150 monsters just to fish out the one monster I want (assuming, and I really have no idea, that such a thing might cause any kind of slowdown in execution).

    Long story short, I want a system that's both efficient at compile time and easy to work with during coding. Should I stick with what I've got or try a different method?

    As a subquestion, I'm also assuming here that the best way to store data that will be bundled with the final game and not read externally is simply to declare everything in AS3. It seems to me that if I used, say, XML or JSON, I'd have to use the associated AS3 classes and methods to pull in the data, parse it, and convert it to AS3 object(s) anyway, so it would be inefficient. Right?

  • Correct permissions for /var/www and wordpress

    - by dpbklyn
    Hello and thank you in advance! I am relatively new to Ubuntu, so please excuse the newbie-ness of this question...

    I have set up a LAMP server (Ubuntu Server 11.10), and I have access via SSH; the "It works" page is reachable from a web browser inside my network (via IP address) and from outside using DynDNS. I have a couple of projects in development with some outside developers, and I want to use this server as a development server for testing and for client approvals.

    We have some WordPress projects that sit in subdirectories in /var/www/wordpress1, /var/www/wordpress2, etc. I cannot access these subdirectories from a browser in order to set up WP - or (I assume) to see the content in a browser. I get a 403 Forbidden error. I assume that this is a permissions problem. Can you please tell me the proper permission settings to:

    1) Allow the developers and me to read/write.
    2) Allow WP to set up and do its thing.
    3) Allow visitors to access the site(s) via the web.

    I should also mention that the subfolders are actually symlinks to folders on another internal HDD - I don't think this will make a difference, but I thought I should disclose it. Since I am a newbie to Ubuntu, step-by-step directions are greatly appreciated! Thank you for taking the time! dp

    ls -l /var/www:

        total 12
        drwxr-xr-x  2 root root 4096 2012-07-12 10:55 .
        drwxr-xr-x 13 root root 4096 2012-07-11 20:02 ..
        lrwxrwxrwx  1 root root   43 2012-07-11 20:45 admin_media -> /root/django_src/django/contrib/admin/media
        -rw-r--r--  1 root root  177 2012-07-11 17:50 index.html
        lrwxrwxrwx  1 root root   14 2012-07-11 20:42 media -> /hdd/web/media
        lrwxrwxrwx  1 root root   18 2012-07-12 10:55 wordpress -> /hdd/web/wordpress

    Here is the result of using chown -R www-data:www-data /var/www:

        total 12
        drwxr-xr-x  2 www-data www-data 4096 2012-07-12 10:55 .
        drwxr-xr-x 13 root     root     4096 2012-07-11 20:02 ..
        lrwxrwxrwx  1 www-data www-data   43 2012-07-11 20:45 admin_media -> /root/django_src/django/contrib/admin/media
        -rw-r--r--  1 www-data www-data  177 2012-07-11 17:50 index.html
        lrwxrwxrwx  1 www-data www-data   14 2012-07-11 20:42 media -> /hdd/web/media
        lrwxrwxrwx  1 www-data www-data   18 2012-07-12 10:55 wordpress -> /hdd/web/wordpress

    I am still unable to access via browser...
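
    For what it's worth, a typical arrangement for this situation looks like the sketch below. Adjust the paths and user name ('dp' here is a placeholder), and note that chown -R on /var/www changed only the symlinks themselves - the directories on /hdd they point to still need their own ownership fix:

        # give Apache's user/group ownership of the real directories on the second drive
        sudo chown -R www-data:www-data /hdd/web/wordpress
        # directories rwxrwsr-x: the setgid bit keeps new files in the www-data group
        sudo find /hdd/web/wordpress -type d -exec chmod 2775 {} \;
        # files rw-rw-r--
        sudo find /hdd/web/wordpress -type f -exec chmod 664 {} \;
        # let your own account write as a member of the group (log out/in afterwards)
        sudo usermod -a -G www-data dp

    Apache must also be allowed to follow the symlinks (Options FollowSymLinks in the relevant vhost/directory block) for the 403 to go away.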

  • Practices for domain models in Javascript (with frameworks)

    - by AndyBursh
    This is a question I've to-and-fro'd with for a while, and searched for and found nothing on: what are the accepted practices surrounding duplicating domain models in Javascript for a web application, when using a framework like Backbone or Knockout?

    Given a web application of non-trivial size with a set of domain models on the server side, should we duplicate these models in the web application (see the example at the bottom)? Or should we use Javascript's dynamic nature to load these models from the server?

    To my mind, the arguments for duplicating the models are in easing validation of fields, ensuring that fields that are expected to be present are in fact present, etc. My approach is to treat the client-side code like an almost separate application, doing trivial things itself and relying on the server only for data and complex operations (which require data the client side doesn't have). I think treating the client-side code like this is akin to the separation between entities from an ORM and the models used with the view in the UI layer: they may have the same fields and relate to the same domain concept, but they're distinct things.

    On the other hand, it seems to me that this duplication is a clear violation of DRY and likely to lead to differing results on the client and server side (where one piece gets updated but the other doesn't). To avoid this violation of DRY, we can simply use Javascript's dynamism to get the field names and data from the server as and when they're needed.

    So: are there any accepted guidelines around when (and when not) to repeat yourself in these situations? Or is this a purely subjective thing, based on the project and developer(s)?

    Example server-side model:

        class M
        {
            int A;
            DateTime B;
            int C;
            int D = (A * C);
            double SomeComplexCalculation = ServiceLayer.Call();
        }

    Client-side model:

        function M() {
            this.A = ko.observable();
            this.B = ko.observable();
            this.C = ko.observable();
            this.D = function () {
                return this.A() * this.C();
            };
            this.SomeComplexCalculation = ko.observable();
            return this;
        }

        M.prototype.GetComplexValue = function () {
            this.SomeComplexCalculation(Ajax.CallBackToServer());
        };

    I realise this question is quite similar to this one, but I think this is about almost wholly untying the web application from the server, where that question is about doing so only in the case of a complex calculation.
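
    For the dynamic approach, here is a minimal sketch using the knockout.mapping plugin (the URL is made up; ko.mapping.fromJS builds the observables from whatever JSON the server returns):

        // fetch the model's fields from the server and build observables on the fly
        $.getJSON('/api/m/42', function (data) {
            var vm = ko.mapping.fromJS(data);   // knockout.mapping plugin
            vm.D = ko.computed(function () {    // the client still owns trivial derivations
                return vm.A() * vm.C();
            });
            ko.applyBindings(vm);
        });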

  • Damaged external NTFS hard disk

    - by Thanos
    A few days ago, I used my external hard disk (a 2TB Seagate) to transfer some files on Windows Vista. During that, I noticed some malfunctions in my system (it was running too slow, and Windows Explorer crashed). When Explorer crashed, the file transfer stopped. I was worried, but I tried to access my files and it seemed to be working. Then I tried to open a movie (from the external disk), but it couldn't load. I thought of restarting, but this took sooo long... So I unplugged the hard disk, and at that point the machine managed to shut down.

    I logged on to Windows Vista again, but the hard disk couldn't be mounted. I plugged it in, but nothing happened. When I unplugged it, I heard the specific sound that notifies that something has been unplugged. I thought of logging into Ubuntu 10.04 to see what I could do. I plugged in the hard disk, but I couldn't see it. I opened GParted, but I couldn't see it either. I tried Disk Utility, and there it was! I tried to mount it, but I got an error message stating that an error occurred with Windows - there is a file (0,0) that has a problem, or something like that. It suggested logging into Windows, running chkdsk /f, and rebooting twice.

    The thing is that I am somewhat afraid to do so, because I don't really know the impact of that. Plus, I don't quite trust doing even a check on Vista... I finally risked it and typed chkdsk /f in a cmd window. I could not actually run it, however, because I don't have admin privileges. So from search I found chkdsk, right-clicked it, and selected "run as administrator". It ran, but I got a message like "NTFS file system: it should be checked at the coming restart". At that point I was mistaken: I had thought the f meant drive F, but that is not the case here...

    Does anyone have any suggestions or advice?
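
    For reference, the two standard repair paths look like this (the drive letter and device name are placeholders for whatever the disk actually shows up as):

        rem On Windows, from an elevated Command Prompt:
        chkdsk /f X:

        # Or from Ubuntu: ntfsfix (part of the ntfs-3g tools) resets the NTFS
        # journal and schedules a consistency check for the next Windows boot.
        sudo ntfsfix /dev/sdX1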

  • The type of programmer I want to be [closed]

    - by Aventinus_
    I'm an undergraduate Software Engineering student, and I've decided that pure programming is what I want to do for the rest of my life. The thing is that programming is a vast field, and although most of its aspects are extremely interesting, sooner or later I'll have to choose one (?) to focus on. I have several ideas for small projects I'd like to develop this summer, keeping in mind that this will gain me some experience and, in the best scenario, some cash. But the most important reason I'd like to develop something close to "professional" is to give myself direction on what I want to do as a programmer.

    One path is that of the web programmer. I enjoy PHP and MySQL, as well as HTML and CSS, although I don't really like ASP.NET. I can see myself writing web apps using the above technologies, as well as XML and Javascript. I also have a neat idea for a Facebook app.

    The other path is that of the desktop programmer. This is a little more complicated, because I really, really enjoy high-level languages such as Java and Python, but not low-level ones such as C. I have used both Linux and Windows for the last 6 years, and I like their latest DEs (meaning GNOME Shell and Metro). I can see myself writing desktop applications for both OSs as long as it means high-level programming. Ideally, I'd like to be able to help the development of GNOME.

    The last path that interests me is that of the smartphone programmer. I have created some sample applications on Android, and thanks to Java I found it quite an interesting experience. I can also see myself as an independent smartphone developer.

    These 3 paths seem equally interesting at the moment, due to the shallowness of my experience, I guess. I know that I should spend time with all of them and then choose the right one for me, but I'd like to know the pros and cons of each of these paths in terms of learning curve, fun, job finding, and of course financial rewards. I have a fair or basic understanding of the languages/technologies I described earlier, and this question will help me choose where to focus, at least for now.

  • How to have an Arduino wait until it receives data over serial?

    - by SonicDH
    So I've wired up a little robot with a sound shield and some sensors, and I'm trying to write a sketch that will let me check the sensors. What I'd like for it to do is print out a little menu over serial, wait until the user sends a selection, jump to the function that matches their selection, then (once the function is done) jump back and print the menu again. Here's what I've written, but I'm not that good of a coder, so it doesn't work. Where am I going wrong?

        #include <Servo.h>

        Servo steering;
        Servo throttle;
        int pos = 0;
        int val = 0;

        void setup() {
          Serial.begin(9600);
          throttle.write(90);
          steering.write(90);
          pinMode(A0, INPUT);
          pinMode(7, INPUT);
          char ch = 0;
        }

        void loop() {
          Serial.println("Menu");
          Serial.println("--------------------");
          Serial.println("1. Motion Readout");
          Serial.println("2. Distance Readout");
          Serial.println("3. SD Directory Listing");
          Serial.println("4. Sound Test");
          Serial.println("5. Car Test");
          Serial.println("--------------------");
          Serial.println("Type the number and press enter");

          while (char ch = 0) {
            ch = Serial.read();
          }

          char ch;
          switch (ch) {
            case '1':
              motion();
          }
          ch = 0;
        }

        // menu over, lets get to work
        void motion() {
          Serial.println("Haha, it works!");
        }

    I'm pretty sure a while loop is the right thing to do, but I'm probably implementing it wrong. Can anyone shed some light on this?
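
    A likely culprit: `while (char ch = 0)` declares a fresh ch initialized to 0, so the condition is always false and the loop body never runs (and the ch declared after the loop is a different, uninitialized variable). The usual Arduino idiom is to poll Serial.available() until a byte arrives - a minimal sketch:

        // Block until at least one byte has arrived on the serial port.
        while (Serial.available() == 0) {
          ;  // nothing to do yet
        }
        char ch = Serial.read();  // now read the menu selection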

  • Should I continue my self-taught coding practice or learn how to code professionally?

    - by G1i1ch
    Lately I've been getting professional work, hanging out with other programmers, and making friends in the industry. The only thing is I'm 100% self-taught. It's caused my style to deviate extremely from the style of those who are properly trained. It's the techniques and organization of my code that are different. It's a mixture of several things I do:

    - I tend to blend several programming paradigms together, like functional and OO. I lean to the functional side more than OO, but I see the use of OO when something would make more sense as an abstract entity - like a game object.
    - I go the simple route when doing something. In contrast, it sometimes seems like the code I see from professional programmers is complicated for the sake of it! I use lots of closures.
    - Lastly, I'm not the best commenter. I find it easier just to read through my code than to read the comments; in most cases I just end up reading the code even if there are comments. Plus I've been told that, because of how simply I write my code, it's very easy to read.

    I hear professionally trained programmers go on and on about things like unit tests - something I've never used, so I haven't even the faintest idea of what they are or how they work - and lots and lots of underscores "_", which aren't really to my taste. Most of the techniques I use are straight from me, or from a few books I've read. I don't know anything about MVC; I've heard a lot about it with things like backbone.js, and I think it's a way to organize an application, but it just confuses me, because by now I've made my own organizational structures.

    It's a bit of a pain. I can't use template applications at all when learning something new, like with Ubuntu's Quickly. I have trouble understanding code that I can tell is from someone trained. Complete OO programming really leaves a bad taste in my mouth, yet that seems to be what EVERYONE else is strictly using.

    It's left me not that confident in the look of my code, and wondering whether I'll cause sparks when joining a company or maybe contributing to open source projects. In fact, I'm rather scared of the fact that people will eventually be checking out my code. Is this just something normal any programmer goes through, or should I really look to change my techniques?

  • How to be successful at BDD specification workshops?

    - by sigo
    Today we tried to introduce BDD into our software development process by holding a specification workshop. For this workshop we had 2 developers, 1 tester, and 1 business analyst. The workshop lasted 1h30, and by the end of it we had managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios that we could miss, and the difficult ones.

    At the end of the workshop some people were actually unhappy with it. One developer felt he had wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident in our scenario coverage (she had a feeling that we could have missed other important stuff), but, more importantly, felt that this workshop was also a waste of time, as she could have figured out all these scenarios by herself, and in a shorter period of time.

    So my question is how this kind of workshop can actually work. In theory, given you have a new feature to develop, you put the three 'amigos' (dev/tester/BA) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common product/end-goal/done vision.

    But in practice, we still think it is more cost-effective to first have the BA work on the examples on her own, and only then have the scenarios reviewed/reworked by the three 'amigos'. By having the BA work on her own, we actually feel more confident that we are less likely to miss stuff, and we still get to review the scenarios afterward to double-check. We don't think that simple brainstorming/deliberate discovery is actually enough to seriously cover all the requirements for a feature. The business analyst is actually the best person for that kind of work. What we do is review what she wrote and see whether we have a common understanding (which could then lead to rewriting some of her scenarios or adding new ones she could have missed).

    This workshop lasted 1h30, and by the end of it we didn't feel confident enough about what we did... sure, we could have spent more time on it, but honestly most people are exhausted after 1h30 of brainstorming. So how can you get this to work effectively in practice?
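
    For context, the scenarios we write in these workshops follow the usual Given/When/Then (Gherkin) style - this is a made-up illustration, not one of our actual scenarios:

        Feature: Account withdrawal
          Scenario: Withdrawal refused when funds are insufficient
            Given an account with a balance of 20
            When the customer requests a withdrawal of 50
            Then the withdrawal is refused
            And the balance is still 20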

  • Do OO, TDD, and Refactoring to Smaller Functions Affect Speed of Code?

    - by Dennis
    In the computer science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is:

    - write smaller, more testable code
    - refactor existing code into smaller and smaller chunks of code until most of your methods/functions are just a few lines long
    - write functions that only do one thing (which makes them smaller again)

    This is a change compared to the "old" or "bad" code practices where you had methods spanning 2500 lines and big classes doing everything.

    My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code, with its variety of small-to-tiny functions, generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead, without adding actual "useful" code. I also imagine that good optimizations can be done to the ASM before it is actually run on the hardware, but that optimization can only do so much, too. Hence, my question - how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce, due to this overhead, compared to having "one big method that contains everything"?

    UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes to the code will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of the ASM MOV instruction - loading CPU registers with the proper variables, not the actual computation being done. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering whether I should be concerned, and how much, if at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices.

    UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular, I've noted that one of the old classes was a ~3000-line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded - they are loaded as needed, and disk caching and memory caching options exist - and yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.
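
    As a small illustration of why that linkage usually disappears in optimized builds, consider the following C sketch. At -O2 a typical compiler inlines the helper, leaving no CALL/PUSH/POP for it at all (whether inlining happens depends on the compiler and flags, so this is illustrative rather than guaranteed):

        /* tiny single-purpose helper, the style the refactoring advice produces */
        static int area(int w, int h) {
            return w * h;
        }

        int total_area(const int *w, const int *h, int n) {
            int sum = 0;
            for (int i = 0; i < n; i++) {
                sum += area(w[i], h[i]); /* after inlining: same code as writing w[i]*h[i] here */
            }
            return sum;
        }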

  • Object Oriented Design of a Small Java Game

    - by user2733436
    This is the problem I am dealing with: I have to make a simple game of NIM. I am learning Java using a book, and so far I have only coded programs that deal with 2 classes. This program will have about 4 classes, I guess, including the main class.

    My problem is that I am having a difficult time designing the classes and how they will interact with each other. I really want to think through and use an object-oriented approach. So the first thing I did was design the Pile class, as it seemed the easiest and made the most sense to me in terms of what methods go in it. Here is what I have for the Pile class so far:

        package Nim;

        import java.util.Random;

        public class Pile {

            private int initialSize;
            private Random rand = new Random();

            public Pile() {
            }

            public void setPile() {
                initialSize = rand.nextInt(100 - 10) + 10;
            }

            public void reducePile(int x) {
                initialSize = initialSize - x;
            }

            public int getPile() {
                return initialSize;
            }

            public boolean hasStick() {
                return initialSize > 0;
            }
        }

    Now I need help designing the Player class. By that I mean I am not asking anyone to write code for me, as that defeats the purpose of learning; I was just wondering how I should design the Player class and what should go in it. My guess is that the Player class would contain a method for choosing the computer's move and also for receiving the move the human user makes. Lastly, I am guessing the turns would be handled in the Game class.

    I am really lost right now, so if someone could help me think through this problem it would be great. Starting with the Player class would be appreciated. I know there are some solutions for this problem online, but I refuse to look at them, because I want to develop my own approach to such problems, and I am confident that if I can get through this problem I can solve other problems. I apologize if this question is a bit poor, but specifically I need help designing the Player class.
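
    For illustration only, here is one possible shape for such a Player hierarchy - every name and the move rule are made up, a sketch rather than the design:

        package Nim;

        import java.util.Random;
        import java.util.Scanner;

        // A Player answers one question: given the pile, how many sticks to take?
        public abstract class Player {
            private final String name;

            public Player(String name) {
                this.name = name;
            }

            public String getName() {
                return name;
            }

            public abstract int chooseMove(Pile pile);
        }

        class ComputerPlayer extends Player {
            private final Random rand = new Random();

            ComputerPlayer() {
                super("Computer");
            }

            @Override
            public int chooseMove(Pile pile) {
                // naive strategy: take 1-3 sticks, never more than remain
                // (assumes a non-empty pile; the game loop checks hasStick() first)
                return 1 + rand.nextInt(Math.min(3, pile.getPile()));
            }
        }

        class HumanPlayer extends Player {
            private final Scanner in = new Scanner(System.in);

            HumanPlayer(String name) {
                super(name);
            }

            @Override
            public int chooseMove(Pile pile) {
                System.out.print(getName() + ", how many sticks? ");
                return in.nextInt();
            }
        }

    A Game class can then just hold two Players and the Pile, alternating calls to chooseMove() and reducePile() until hasStick() returns false.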

  • Why doesn't Ubuntu detect my second hard drive?

    - by user93179
    I am new to Linux and to Ubuntu, and I was wondering: I have two hard drives connected to SATA ports (non-RAID, at least I don't think they are). I installed Ubuntu onto the drives fresh, without any previous versions or Windows at all. However, when I got Ubuntu 12.04 LTS working, all I see is one 120 GB hard drive. Also, not sure if this is important or not, but my hard drives are SSDs.

    My computer specs: Asus P9Z77-V LK, Nvidia GeForce GTX 660 Ti, Intel i5 3570K 3.4.

    /proc/partitions shows:

        major minor   #blocks  name
           8     0  117220824  sda
           8     1  117219328  sda1
           8    16  117220824  sdb
           8    17      96256  sdb1
           8    18  108780544  sdb2
           8    19    8342528  sdb3
          11     0    1048575  sr0

    and ls -l /sys/block/ | grep -v /virtual/:

        lrwxrwxrwx 1 root root 0 Sep 27 17:26 sda -> ../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda
        lrwxrwxrwx 1 root root 0 Sep 27 17:26 sdb -> ../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sdb
        lrwxrwxrwx 1 root root 0 Sep 27 22:26 sdc -> ../devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.1/1-1.1:1.0/host6/target6:0:0/6:0:0:0/block/sdc
        lrwxrwxrwx 1 root root 0 Sep 27 22:04 sr0 -> ../devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0/block/sr0

    sudo file -s /dev/sd*:

        /dev/sda:  x86 boot sector; partition 1: ID=0x7, starthead 32, startsector 2048, 234438656 sectors, code offset 0xc0, OEM-ID " ?", Bytes/sector 190, sectors/cluster 124, reserved sectors 191, FATs 6, root entries 185, sectors 64514 (volumes 32 MB), physical drive 0x7e, dos 32 MB), FAT (32 bit), sectors/FAT 749, reserved3 0x800000, serial number 0x35361a2b, unlabeled
        /dev/sdb2: Linux rev 1.0 ext4 filesystem data, UUID=387761ac-5eba-4d0f-93ba-746a82fb541d (needs journal recovery) (extents) (large files) (huge files)
        /dev/sdb3: data
        /dev/sdc:  x86 boot sector; partition 1: ID=0xc, active, starthead 0, startsector 8064, 30473088 sectors, code offset 0xc0
        /dev/sdc1: x86 boot sector, code offset 0x58, OEM-ID "SYSLINUX", sectors/cluster 64, reserved sectors 944, Media descriptor 0xf8, heads 128, hidden sectors 8064, sectors 30473088 (volumes 32 MB), FAT (32 bit), sectors/FAT 3720, Backup boot sector 8, serial number 0xf90c12e9, label: "KINGSTON "
        /dev/sda1: x86 boot sector, code offset 0x52, OEM-ID "NTFS ", sectors/cluster 8, reserved sectors 0, Media descriptor 0xf8, heads 255, hidden sectors 2048, dos 32 MB), FAT (32 bit), sectors/FAT 749, reserved3 0x800000, serial number 0x35361a2b, unlabeled

    Any help would be greatly appreciated! Thanks.

    Another thing I noticed: when I use GParted to locate my drives, it seems that sda1 is the second drive that I am not detecting when I boot up, and my Ubuntu + FAT boot files are installed in sdb1.
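
    For reference, since file -s identifies sda1 as NTFS, a cautious way to check the drive by hand is a read-only mount (the mount point is made up):

        sudo mkdir -p /mnt/second
        sudo mount -t ntfs-3g -o ro /dev/sda1 /mnt/second
        ls /mnt/second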

  • RPi and Java Embedded GPIO: It all begins with hardware

    - by hinkmond
    So, you want to connect low-level peripherals (like blinky-blinky LEDs) to your Raspberry Pi and use Java Embedded technology to program it, do you? You sick foolish masochist. No, just kidding! That's awesome! You've come to the right place. I'll step you through it. And, as with many embedded projects, it all begins with hardware.

    So, the first thing to do is to get acquainted with the GPIO header on your RPi board. A "header" just means a thingy with a bunch of pins sticking up from it where you can connect wires. See the red box outline in the photo.

    Now, there are many ways to connect to that header outlined by the red box in the photo (which the RPi folks call the P1 header). One way is to use a breakout kit like the one at Adafruit. But we'll just use jumper wires in this example. So, to connect jumper wires to the header, you need a map of where to connect which wire. That's why you need to study the pinout in the photo. That's your map for connecting wires.

    But, as with many things in life, it's not all that simple. The RPi folks have made things a little tricky: there are two revisions of the P1 header pinout. One is for older boards (RPi boards made before Sep 2012), which is called Revision 1, and one is for those fancy 512MB boards that were shipped after Sep 2012, which is called Revision 2. So, first make sure which board you have: either you have the Model A or B with 128MB or 256MB built before Sep 2012, and you need to look at the pinout for Rev. 1, or you have the Model B with 512MB and need to look at Rev. 2.

    That's all you need for now. More to come...

    Hinkmond

  • 2d, Top-down map with different levels

    - by Ktash
    So, I'm creating a 2d, top-down, sprite-based (tiled) game, and right now I'm working on maps (well, a map editor at the moment, but it will be creating my maps, so basically the same thing).

    The scenario: I'm thinking about efficiency and creating a map in pieces. In each piece, I plan on having 'layers'. Basically, I plan on rendering it down to a 'below hero' layer and an 'above hero' layer, with the hero rendered in between, obviously. There will likely also be an 'on level with hero' layer, but I'm not quite there yet. I'm not even worrying about events or interaction yet - just looking to get a hero on the screen. Now, for movement, I obviously need to know which tiles can be moved through, and in what direction. My plan at the moment is for each tile to get 8 bits (4 'can enter in direction' bits, 4 'can leave in direction' bits), which will let me limit movement and even allow one-way directional movement (a sketch of this encoding follows at the end of this post).

    The dilemma: This works great for a lot of scenarios, but I can't create maps that themselves have layers. A good example is a bridge, where the user can go under or over it without invalid moves being allowed. I can't create a platform and allow movement underneath. These are things I would like to be able to include in my game.

    My idea: In theory, I could allow multiple hero layers, and then allow multiple sets of 'below' and 'above' layers (or sandwich layers). But this complicates my system and makes movement between maps potentially tricky (if the hero is on the third layer at the edge of a map, but that corresponds to the second layer on the other map, how can I allow or disallow movement?).

    My question: Is there a better way to manage multiple maps with multiple levels like this, where a user's level may be 'connected' on different levels on different maps? Or even... am I doing this the hard way? Is there a more standard way to handle top-down 2d tiled maps that I am just not aware of?

    Things to note, or that might be helpful:

    - This will be done in Javascript (transferred around in JSON).
    - State will need to be transferred quickly, so a map-id and x/y/direction should be enough to get me a boolean 'can move' value.
    - Maps will not be a standard size (though they will be a certain number of tiles).
    - I'm making an editor tool so that I can have others help, so something that I can create in a tool would be helpful.
    - 'Teleportation' locations will likely need to exist to get into building maps and to move between map sets (which will not necessarily be connected), but these haven't been created yet (I'm lumping them in with events at the moment).
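
    As promised, a sketch of the 8-bit movement mask in plain Javascript. The bit layout is illustrative - the only real idea is pairing 'can leave' on the source tile with 'can enter' on the destination tile:

        // low 4 bits: can enter from N/E/S/W; high 4 bits: can leave toward N/E/S/W
        var DIR = { N: 0, E: 1, S: 2, W: 3 };

        function canEnter(tileBits, dir) {
            return (tileBits & (1 << dir)) !== 0;
        }

        function canLeave(tileBits, dir) {
            return (tileBits & (1 << (dir + 4))) !== 0;
        }

        // moving in direction dir from tile A to adjacent tile B:
        // you leave A toward dir and enter B from the opposite side
        function canMove(fromBits, toBits, dir) {
            var opposite = (dir + 2) % 4;
            return canLeave(fromBits, dir) && canEnter(toBits, opposite);
        }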

  • what would be a good way to implement/render a 2d tiled map for a browser game?

    - by jj_
    I've made a little RPG game in Ruby while learning the language, and now I'd like to make it into a browser game. I've already set up the Sinatra framework to serve it, so what I am looking for, before everything else, is a way to represent the game map in the browser (location attributes are stored in the db).

    A new map is randomly generated by code for each new game, at game start. For now, forget the db and let's say a map (say 100x100 "squares") is stored as a three-dimensional array (x, y, ...). The last "dimension" of the array stores who and what is at that map cell: a player, a building, whatever. So all I have to do is render those "squares", or array cells, to a 2d tiled map in the browser. The map does not need to refresh or be dynamically fetched as you scroll it (at least at this stage of development), but a technology that would allow me to do so in the future would be a good reason for choosing it.

    Things that I thought of: HTML tables, HTML5 canvas, some JS framework designed exactly for this purpose (which I do not know of - please advise). Yes, I know about the gameQuery JS framework, but I've never used it, and I don't know if it's going to slow everything down to unusability as I'm adding new features (scrolling, Ajax). I really don't know of any other alternatives... maybe there are lighter approaches? Easier or more minimalistic ways? A more targeted JS framework which is the right tool for the job? Maybe just some HTML canvas code, or even simple image maps, or images with absolute positioning will be enough?

    The thing is, I'd like to start simple and then gradually make it better, so, as I said before, I'd prefer something that will give me room for improvement, or is headed toward new web tendencies, but which will also give me a bit of gratification in the beginning :)

    So... advice is needed! And appreciated! :) Thanks

    p.s. Flash is excluded because I don't like it.
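
    Of the options listed, canvas is the one that leaves the most room for scrolling and Ajax later. A minimal sketch of rendering such an array with it (the element id, tile size, and one-row tileset image are all made up):

        // <canvas id="map" width="640" height="640"></canvas>
        var ctx = document.getElementById('map').getContext('2d');
        var TILE = 32; // pixels per square

        // map is the 100x100 array; each cell's last "dimension" is assumed
        // here to start with an index into a single-row tileset image
        function drawMap(map, tileset) {
            for (var y = 0; y < map.length; y++) {
                for (var x = 0; x < map[y].length; x++) {
                    var t = map[y][x][0];
                    ctx.drawImage(tileset,
                        t * TILE, 0, TILE, TILE,         // source rect in the tileset
                        x * TILE, y * TILE, TILE, TILE); // destination on the canvas
                }
            }
        }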

  • git workflow for separating commits

    - by gman
    Best practice with git (or any VCS, for that matter) is supposed to be to have each commit make the smallest change possible. But that doesn't match how I work at all.

    For example, I recently needed to add some code that checked whether the version of a plugin to my system matched the versions the system supports, and if not, print a warning that the plugin probably requires a newer version of the system. While writing that code, I decided I wanted the warnings to be colorized. I already had code that colorized error messages, so I edited that code. That code was in the startup module of one entry point to the system. The plugin-checking code was in another path that didn't use that entry point, so I moved the colorization code into a separate module so both entry points could use it. On top of that, in order to test that my plugin-checking code works, I needed to go edit UI/UX code to make sure it tells the user "You need to upgrade". When all was said and done, I had edited 10 files, changed dependencies, the 2 entry points were now both dependent on the colorization code, etc., etc.

    Being lazy, I'd probably just git add . && git commit -a the whole thing. Spending 10-15 minutes trying to manipulate all those changes into 3 to 6 smaller commits seems frustrating, which brings up the question: are there workflows that work for you, or that make this process easier?

    I don't think I can somehow magically always modify things in the perfect order, since I don't know that order until after I start modifying and seeing what comes up. I know I can git add --interactive, etc., but it seems, at least for me, kind of hard to know whether I'm grabbing exactly the correct changes, so that each commit will actually work. Also, since the changes are sitting in the working directory, it doesn't seem like it would be easy to run tests on each commit without stashing all the other changes. And then, if I were to stash and run the tests, and I had missed a few lines or accidentally added a few too many, I have no idea how I'd easily recover from that (as in either grab the missing lines from the stash and then put the rest back, or take the few extra lines I shouldn't have grabbed and shove them into the stash for the next commit).

    Thoughts? Suggestions?

    PS: I hope this is an appropriate question; the help says development methodologies and processes.
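
    One common pattern here, using only standard git commands (the test command is a stand-in for whatever the project uses), is to stage one logical change at a time and test exactly what is staged before committing:

        git add -p                 # stage only the hunks for one logical change
        git stash --keep-index     # shelve everything that is NOT staged
        make test                  # the tests now run against exactly the staged state
        git commit -m "Move colorization into a shared module"
        git stash pop              # bring the remaining changes back; repeat

    If the tests fail because a line landed in the wrong pile, `git stash pop` restores the rest, and `git reset -p` can unstage individual hunks before trying again.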

  • Imperative vs. component-based programming [closed]

    - by AlexW
    I've been thinking about how programming, and more specifically the teaching of programming, is advocated amongst the community (online). Often I've heard that Ruby and RoR make an ideal platform for learning to program. I completely disagree... RoR and Ruby are based on the application of the component-based paradigm, which means they are ideal for rapid application development, much like the MVC model in PHP and ASP.NET.

    But learning a proper imperative language like Java or C/C++ (or even Perl and PHP) is the only way for a new programmer to explore logic itself, and not get too bogged down in architectural concerns like the need for separation of concerns and the preference for components.

    Maybe it's a personal-preference thing. I rather think that the most interesting aspects of programming are the procedural bits of code I write that actually do stuff, rather than the project planning and modelling that comes with fully object-oriented engineering or simply using the MVC model. I know this may sound confused to some of you. I feel strongly, though, that the best way for programming to be taught is through imperative and procedural methods; architectural (component) methods come later, if at all. After all, none of the amazing algorithms that exist were based on OOP practice! It's all procedural code when it comes to the 'magic'. OOP is useful in creating products and utilities; algorithms are what make things happen and move data around, and so imperative (and/or procedural) code is what matters most.

    When I see programmers recommending Ruby on Rails to newbie developers, I think it's just so wrong. Just because you write less code with Ruby does not make it easier to do! It's the opposite... you have to know loads more to appreciate its succinct nature. New coders who really want to understand the nuts and bolts of coding need to go away and figure out writing methods/functions (i.e., imperative programming) and working in a procedural style, in order to grasp the fundamentals first, before looking into architectural ways of working.

    So, my question is: should Ruby ever be recommended as a first language? I think no (obviously)... what arguments are there for it?

  • Finding Tools Guidance in OUM

    - by user716869
    OUM is not tool-specific. However, it does include tool guidance. Tool guidance in OUM includes:

    - a mention of a tool that could be used to complete a specific task(s)
    - templates created with a specific tool
    - example work products in a specific tool
    - links to tool resources
    - Tool Supplemental Guides

    So how do you find all this helpful tool information? Start at the lowest level first - the Task Overview. Even though the task overviews are written tool-agnostic, they sometimes mention suggestions or examples of a tool that might be used to complete the task. More specific tool information can be found in the Task Overview's Templates and Tools section. In some cases, the tool used to create the template (for example, Microsoft Word, PowerPoint, Project, or Visio) is useful. The Templates and Tools section also provides more specific tool guidance, such as links to:

    - White Papers
    - Viewlets
    - Example Work Products
    - Additional Resources
    - Tool Supplemental Guides

    If you're more interested in seeing what tools might be helpful in general for your project, or want to see whether there is any guidance for a specific tool that your project is committed to using, go to the Supplemental Guidance page in OUM. This page is available from the Method Navigation pull-down located in the header of almost every OUM page.

    When you open the Supplemental Guidance page, the first thing you see is a table index of everything included on the page. At the top of the right column are all the Tool Supplemental Guides available in OUM. Use the index to navigate to any of the guides.

    Next in the right column is Discipline/Industry/View Resources and Samples. Use the index to navigate to any of these topics and see what's available and, more specifically, whether there is any tool guidance. For example, if you navigate to the Cloud Resources, you will find a link to the IT Strategies from Oracle page, which provides information on Cloud Practitioner Guides, Cloud Reference Architectures, and Cloud White Papers, including the Cloud Candidate Selection Tool and the Cloud Computing Maturity Model.

    The section for Method Tool and Technique Cross References can take you to the Task to Tool Cross Reference. This page provides a task listing with possibly helpful tools and links to more information about those tools. By no means is this tool guidance all-inclusive; you can use other tools not mentioned in OUM to complete an OUM task. The Method Tool and Technique Cross References can also take you to the various Technique pages (Index and Cross References). While techniques are not necessarily "tools," they can certainly provide valuable assistance in completing tasks.

    In the Other Resources section of the Supplemental Guidance page, you'll find links to the viewlets and white papers that are included within OUM.

  • Is it dangerous for me to give some of my Model classes Control-like methods?

    - by Pureferret
    In my personal project I have tried to stick to MVC, but I've also been made aware that sticking to MVC too tightly can be a bad thing, as it makes writing awkward and forces the flow of the program in odd ways (i.e., some simple functions can be performed by something that normally wouldn't perform them, and avoid MVC-related overheads). So I'm beginning to feel justified in this compromise: I have some 'manager' classes that 'own' data and have some way to manipulate it; as such, I think they'd count as both part of the model and part of the control, and to me this feels more natural than keeping them separate.

    For instance, one of my managers is the PlayerCharacterManager, which has these methods:

        void buySkill(PlayerCharacter playerCharacter, Skill skill);
        void changeName();
        void changeRole();
        void restatCharacter();
        void addCharacterToGame();
        void createNewCharacter();
        PlayerCharacter getPlayerCharacter();
        List<PlayerCharacter> getPlayersCharacter(Player player);
        List<PlayerCharacter> getAllCharacters();

    I hope the method names are transparent enough that they don't all need explaining. I've called it a manager because it will help manage all of the PlayerCharacter 'model' objects the code creates, and it will create and keep a map of these. I may also get it to store other information in the future. I plan to have another two similar classes for this sort of control, but I will orchestrate when and how this happens, and what to do with the returned data, via a pure controller class. This splitting of control between informed managers and the controller, as opposed to operating just through a controller, seems like it will simplify my code and make it flow better.

    My question is: is this a dangerous choice, in terms of making the code harder to follow/test/fix? Is this approach established as good, bad, or neutral? I couldn't find anything similar except the idea of Actors, but that's not quite what I'm trying to do.

    Edit: Perhaps an example is needed. I'm using the controller to update the view and access the data, so when I click the 'add new character to a player' button, it calls methods in the controller that then go and tell the PlayerCharacterManager class to create a new character instance, call the PlayerManager class to add that new character to the player-character map, then add this information to the database and tell the view to update any affected GUIs. That is the sort of 'control sequence' I'm hoping to create with these manager classes.
