Search Results

Search found 20155 results on 807 pages for 'things'.


  • In Cocoa (or maybe GUI development in general) how do you specify an arbitrary number of things to tile

    - by RankWeis
    I'm new to creating GUIs; everything I've done up until this point has used the command line. I'm trying to create a port of Minesweeper for the Macintosh as an experiment, and I've got the CLI version working, but I'm running into walls everywhere with the GUI. The first thing it seems I have to do, however, is tile n x m 'boxes' for the grid - and I'm not sure how to do that. The information is ready to be handed to it, but I don't know where to do it, or how. Also, if anyone has any recommendations for sites/Cocoa development books, feel free to drop them in here... Thanks!
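
    A toolkit-agnostic sketch of the tiling arithmetic the question comes down to (the Rect record, cell sizes and gap are made up for illustration; in Cocoa each computed rect would become the frame of an NSView or NSCell subview added to the parent view):

        // Sketch: compute the frame of every cell in an n x m grid of fixed-size boxes.
        record Rect(double x, double y, double w, double h) {}

        class GridTiler {
            static Rect[][] tile(int rows, int cols, double cellW, double cellH, double gap) {
                Rect[][] frames = new Rect[rows][cols];
                for (int r = 0; r < rows; r++) {
                    for (int c = 0; c < cols; c++) {
                        double x = c * (cellW + gap);
                        double y = r * (cellH + gap); // flip the y axis for Cocoa's bottom-left origin if needed
                        frames[r][c] = new Rect(x, y, cellW, cellH);
                    }
                }
                return frames;
            }
        }

    The model hands the tiler its n and m, and the view layer only has to walk the returned frames and create one subview per rect.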

    Read the article

  • How to efficiently render different things at different times in a game?

    - by Baqjohanson
    Sorry for the ambiguous title. What I am wondering is: what is an efficient way to alternate rendering between, let's say, a main menu, an options menu, and "in the game"? The only two ways I've come up with so far are to have one render function, with code for each part (menu, ...) and a variable to control what gets drawn, or to have multiple render functions and use a function pointer that points to the appropriate one, and then just call through the function pointer. I always wonder how professional games do it.
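
    A minimal Java sketch of the second approach described above (one render routine per screen, with the active one selected at run time rather than branched to inside a single function); the names are illustrative, not taken from any particular engine:

        import java.util.EnumMap;
        import java.util.Map;

        // One renderer per screen; renderFrame() dispatches to whichever is current.
        enum Screen { MAIN_MENU, OPTIONS, IN_GAME }

        interface Renderer { void render(); }

        class Game {
            private Screen current = Screen.MAIN_MENU;
            private final Map<Screen, Renderer> renderers = new EnumMap<>(Screen.class);

            Game() {
                renderers.put(Screen.MAIN_MENU, () -> { /* draw title and menu items */ });
                renderers.put(Screen.OPTIONS,   () -> { /* draw options widgets */ });
                renderers.put(Screen.IN_GAME,   () -> { /* draw world, entities, HUD */ });
            }

            void setScreen(Screen next) { current = next; }             // e.g. when "Start Game" is chosen

            void renderFrame() { renderers.get(current).render(); }     // the "function pointer" call
        }

    Swapping a map entry for another object (or for an actual function pointer in C/C++) changes nothing about the dispatch; the main loop only ever calls renderFrame().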

    Read the article

  • MVP Summit 2011 summary and thoughts: The “I hope I don’t cross a line and lose my MVP status” post

    - by George Clingerman
    I've been wanting to write this post summarizing my thoughts about the MVP summit but have been dragging my feet since it's a very difficult one to write. However, seeing Andy (http://forums.create.msdn.com/forums/t/77625.aspx) and Catalin (http://www.catalinzima.com/2011/03/mvp-summit-2011/) and Chris (http://geekswithblogs.net/cwilliams/archive/2011/03/07/144229.aspx) post about it has encouraged me to finally take the plunge. I'm going to have to write carefully though because I'm going to be dancing around a ton of NDA minefields as well as having to walk the tight-rope of not sending the wrong message or having people read too much into what I'm saying. I want to note that most of what I'm about to say is just based on my observations; they're not thoughts that Microsoft has asked me to pass along and they're not things I heard Microsoft say. It's just me sharing what I think after going to the MVP summit. Let's start off with a short imaginary question and answer session.     Has Microsoft's management of the App Hub forums and XBLIG been rather poor? Yes.     Do I think we're going to see changes to that overnight? No.     Will it continue to look bad from the outside? Somewhat. Confusing, right? Well that's kind of how things are right now. Lots of confusion. XNA is doing AWESOME. Like, really, really awesome. As a result of that awesomeness, XNA is on three major platforms: Xbox 360, WP7 and PC. This means that internally Microsoft is really excited and invested in the technology. That's fantastic for XNA and really should show you the future the framework has. It's here to stay. So why are Xbox LIVE Indie Game developers feeling so much pain? The ironic thing is that the pain is being caused by the success of XNA. When XNA was just a small thing, there was more freedom and more focus. It was just us and them. We were an only child. Now our family has grown and everyone has and wants some time with XNA. This gets XNA pulled in all directions, and as it moves onto new platforms, it plays catch-up trying to get those platforms up to speed to where Xbox LIVE Indie Games has grown. Forums, documentation, educational content. They all need to be there because Xbox LIVE Indie Games has all of that and more. Along with the catch-up in features/documentation/awesomeness there's the catch-up that the people on the team have to play. New platforms and new areas of development mean new players, and those new guys don't have the history of being around from the beginning. This leads, at times, to a lack of understanding of just how important some things are, because they seem so small and insignificant (Rich Text defaulting for new forum profiles would be one thing that jumps to mind). If you're not aware that the forums have become more than just a basic Q&A, if you're not aware that they're a central hub to a very active community, then you don't understand why that small change should be prioritized over something else. New people have to get caught up and figure out how to make a framework and central forum site work for everyone it's now serving. So yeah, a lot of our pain this last year has been simply that XNA is doing well and XBLIG is doing well, so the focus was shifted to catch other things up. It hurts when a parent seems to not have any time for you and they're spending so much time with your new baby brother. Growing pains. All families, and in our case our product family, experience it to some degree.
    I think as WP7 matures we'll see the team figuring out how to give everyone the right amount of attention. While we're talking about some of our growing pains, it is also important to note (although not really an excuse) that the Xbox LIVE Arcade developers complain about many of the same things that we do. If you paid attention to talks and information coming out of GDC 2011, most of the XBLA guys were saying things that sounded eerily similar to what the XBLIG developers are saying (Scott Nichols from GayGamer.net noticed http://twitter.com/#!/NaviFairyGG/status/43540379206811650). Does this mean we should just accept the status quo since we're being treated exactly the same? No way. However, it DOES show that the way we're being treated is no indication of the stability and future of the platform, it's just Microsoft dropping the communication ball on two playing fields. We're not alone and we're not even being treated worse. Not great, but also in a weird way a very good sign. Now on to a few tidbits I think I CAN share from the summit (I'm really crossing my fingers I'm not stepping over some NDA line I shouldn't be). First, I discovered that the XBLIG user base is bigger than I personally had originally estimated. I won't give the exact numbers (although we did beg Microsoft to release some of these numbers so maybe someday?) but it was much larger than my original guesstimates and I was pleasantly surprised. Maybe some of you guys had the right number when you were guessing, but I know that mine was much too low. And even MORE importantly, the number of users/shoppers is growing at a steady pace as well. Our market is growing! That was fantastic news and really something that I had to share. On to the community manager discussion. It was mentioned. I was mentioned. I blushed. Nothing more to report there, other than that the blush in my cheeks was a light crimson color. If I ever see a job description posted for that position I have a resume waiting in the wings. I can't deny that I think that would be my dream job... ...so after I finished blushing, the MVPs did make it very, very clear that the communication has to improve. Community manager or not, the single biggest pain point with the Xbox LIVE Indie Game community has been a lack of communication. I have seen dramatic improvement in the team responding to MVPs and I'm even seeing more communication from them on the forums so I'm hoping that's a long term change. I really think they understood the issue; the problem remains how to open that communication channel in a way that is sustainable. I think they'll get it figured out and hopefully that's sooner rather than later. During the summit, you may have seen me tweeting about how I was "that guy" (http://twitter.com/#!/clingermangw/status/42740432471470081). You also may have noticed that Andy and Catalin both mentioned me in their summit write-ups. I may have come on a bit strong while I was there...went a little out of character for myself. I've been agitated for a while with the way things have been and I've been listening to you guys and hearing you guys be agitated. I'm also watching some really awesome indie game developers looking elsewhere and leaving the platform. Some of them we might not have been able to keep even with changes, but others are only leaving because of perceptions and lack of communication from Microsoft. And that pisses me off. And I let Microsoft know that I was pissed off. You made your list and I took that list and verbalized it.
    I verbalized the hell out of it. [It was actually mentioned that I'm a lot nicer on the forums and in email than I am in person...I felt bad about that, but I couldn't stay silent]. Hopefully it did something, guys; I really did try hard to get the message across. Along with my agitation, I also brought some pride. I mentioned several things in person to the team that I was particularly proud of. From people in the community that are doing an awesome job, to the re-launch of XboxIndies that was going on that week and even gamers like Steven Hurdle (http://writingsofmassdeduction.com/) who has purchased one XBLIG every day for over 100 days now. The community is freaking rocking it and I made sure to highlight that. So in conclusion, I'd just like to say hang in there (you know, like that picture of the cat). If you've been worried about investing in Xbox LIVE Indie Games because you think it's on shaky ground, it's not. Dream Build Play being about the Xbox 360 should have helped a little to point that out. The team is really scrambling around trying to figure things out and make improvements all around. There are quite a few new gals and guys, and it's going to take them time to catch up and there are a lot of constantly shifting priorities. We all have one toy, one team and we're fighting for time with it. It's also time for the community to continue spreading our wings and going out on our own more often. The Indie Game Winter Uprising was a fantastic example of that. We took things into our own hands and it got noticed and Microsoft got behind it. They do every time we stand up and do something (look at how many Microsoft employees tweeted or wrote about the re-launch of XboxIndies.com, or the support I've gotten from them for my weekly XNA Notes). XNA is here to stay, it's time for us to stop being scared of that and figure out how to make our own games the successes they should be. There's definitely a list of things that need to be fixed, things that should be improved and I think we should definitely stay vocal about that with Microsoft. Keep it short, focused and prioritized. There are also a lot of things we can do ourselves while we're waiting on them to fix and change things. Lots of ways we can compensate for particular weaknesses in the channel. The kind of stuff that we can step up and do ourselves. Do it on our own, you know, the way Indies always do. And I'm really looking forward to watching us do just that.

    Read the article

  • Mailman Error / Cpanel

    - by Faith In Unseen Things
    Mailman is giving off this error when any changes are made to the list: ======================= Bug in Mailman version 2.1.14 We're sorry, we hit a bug! Please inform the webmaster for this site of this problem. Printing of traceback and other system information has been explicitly inhibited, but the webmaster can find this information in the Mailman error logs. Ran: /usr/local/cpanel/bin/mailman-install --force Then it says at the end: Updating Usenet watermarks - nothing to update here Nothing to do. updating old qfiles cp: cannot remove `/usr/local/cpanel/img-sys/gnu-head-tiny.jpg': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/mailman-large.jpg': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/mailman.jpg': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/mm-icon.png': Permission denied cp: cannot remove `/usr/local/cpanel/img-sys/PythonPowered.png': Permission denied directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ca (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/uk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/it (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/es (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/en (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/da (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/eu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/no (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sv (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/tr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/zh_CN (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/nl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/fi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ast (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/zh_TW (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ko (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ro (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ja (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pt (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ru (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ia (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/gl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/vi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/lt (fixing) directory permissions must be 02775: 
/usr/local/cpanel/3rdparty/mailman/templates/cs (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/pt_BR (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/he (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/hr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/ar (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/et (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/de (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/fr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/hu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/templates/sr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel/caches (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/.cpanel/caches/config (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ca (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/uk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/it (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/es (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/da (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/eu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/no (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sv (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/tr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/zh_CN (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/nl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/fi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ast (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/zh_TW (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ko (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sk (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ro (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ja (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pt (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ru (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ia (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/gl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/vi (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/lt (fixing) directory permissions 
must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/cs (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sl (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/pt_BR (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/he (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/hr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/ar (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/et (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/de (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/fr (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/hu (fixing) directory permissions must be 02775: /usr/local/cpanel/3rdparty/mailman/messages/sr (fixing) Warning: Private archive directory is other-executable (o+x). This could allow other users on your system to read private archives. If you're on a shared multiuser system, you should consult the installation manual on how to fix this. Problems found: 76 Re-run as mailman (or root) with -f flag to fix must be privileged to use -u must be privileged to use -u Unable to touch file /var/cpanel/mailman2: Permission denied at /usr/local/cpanel/bin/mailman-install line 275. [2012-04-13 19:33:55 -0500] warn [restartsrv_mailman] 2254: Unable to set RLIMIT_RSS to infinity 2254: Unable to set RLIMIT_AS to infinity at /usr/local/cpanel/Cpanel/Rlimit.pm line 113 Cpanel::Rlimit::set_rlimit_to_infinity() called at /usr/local/cpanel/scripts/restartsrv_mailman line 18 eval {...} called at /usr/local/cpanel/scripts/restartsrv_mailman line 15 warn [restartsrv_mailman] 2254: Unable to set RLIMIT_RSS to infinity 2254: Unable to set RLIMIT_AS to infinity [2012-04-13 19:33:55 -0500] warn [Cpanel::SafeDir::MK] mkdir /var/run/restartsrv/startup failed: Permission denied at /usr/local/cpanel/Cpanel/SafeDir/MK.pm line 153 Cpanel::SafeDir::MK::safemkdir('/var/run/restartsrv/startup', 0700) called at /usr/local/cpanel/Cpanel/RestartSrv.pm line 756 Cpanel::RestartSrv::logged_startup('mailman', 1, ARRAY(0x8fa4bc8)) called at /usr/local/cpanel/scripts/restartsrv_mailman line 47 warn [Cpanel::SafeDir::MK] mkdir /var/run/restartsrv/startup failed: Permission denied [2012-04-13 19:33:55 -0500] warn [restartsrv_mailman] Failed to create /var/run/restartsrv/startup at /usr/local/cpanel/Cpanel/RestartSrv.pm line 759 Cpanel::RestartSrv::logged_startup('mailman', 1, ARRAY(0x8fa4bc8)) called at /usr/local/cpanel/scripts/restartsrv_mailman line 47 warn [restartsrv_mailman] Failed to create /var/run/restartsrv/startup Ran this: /usr/local/cpanel/bin/mailman-install --force -f Same message as above. root@server2 [~]# cat /usr/local/cpanel/3rdparty/mailman/logs/error Apr 13 19:33:55 2012 mailmanctl(2255): PID unreadable in: /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid Apr 13 19:33:55 2012 mailmanctl(2255): [Errno 2] No such file or directory: '/usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid' Apr 13 19:33:55 2012 mailmanctl(2255): Is qrunner even running? 
root@server2 [~]# /scripts/restartsrv_mailman --status mailmanctl (/usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/mailmanctl --stale-lock-cleanup start) running as mailman with PID 24070 3rdparty/mailman/bin/qrunner (/usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s) running as mailman with PID 24078 root@server2 [~]# cat /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid 24070 root@server2 [~]# ls -lah /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid -rw-rw---- 1 mailman mailman 6 Apr 13 19:47 /usr/local/cpanel/3rdparty/mailman/data/master-qrunner.pid root@server2 [~]# ps aux | grep python mailman 4557 0.0 0.1 10484 6044 ? S 19:40 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s mailman 24070 0.0 0.1 14268 4480 ? Ss 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/mailmanctl --stale-lock-cleanup start mailman 24071 1.1 0.1 14052 6100 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=ArchRunner:0:1 -s mailman 24072 1.0 0.1 13444 6112 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=BounceRunner:0:1 -s mailman 24073 1.0 0.1 13040 6108 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=CommandRunner:0:1 -s mailman 24074 1.0 0.1 13484 6104 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=IncomingRunner:0:1 -s mailman 24075 1.0 0.1 12940 6136 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=NewsRunner:0:1 -s mailman 24076 1.0 0.1 13700 6172 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=OutgoingRunner:0:1 -s mailman 24077 1.0 0.1 13416 6100 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=VirginRunner:0:1 -s mailman 24078 0.9 0.1 13944 6100 ? S 19:47 0:00 /usr/local/bin/python2.4 /usr/local/cpanel/3rdparty/mailman/bin/qrunner --runner=RetryRunner:0:1 -s root 24177 0.0 0.0 5428 756 pts/0 S+ 19:48 0:00 grep python

    Read the article

  • Should I always encapsulate an internal data structure entirely?

    - by Prog
    Please consider this class: class ClassA{ private Thing[] things; // stores data // stuff omitted public Thing[] getThings(){ return things; } } This class exposes the array it uses to store data to any interested client code. I did this in an app I'm working on. I had a ChordProgression class that stores a sequence of Chords (and does some other things). It had a Chord[] getChords() method that returned the array of chords. When the data structure had to change (from an array to an ArrayList), all client code broke. This made me think - maybe the following approach is better: class ClassA{ private Thing[] things; // stores data // stuff omitted public Thing getThing(int index){ return things[index]; } public int getDataSize(){ return things.length; } public void setThing(int index, Thing thing){ things[index] = thing; } } Instead of exposing the data structure itself, all of the operations offered by the data structure are now offered directly by the class enclosing it, using public methods that delegate to the data structure. When the data structure changes, only these methods have to change - but after they do, all client code still works. Note that collections more complex than arrays might require the enclosing class to implement even more than three methods just to access the internal data structure. Is this approach common? What do you think of this? What other downsides does it have? Is it reasonable to have the enclosing class implement at least three public methods just to delegate to the inner data structure?
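
    A short sketch of why the delegating accessors hold up when the internal container changes (a hypothetical continuation of the ClassA example; note that the per-element getter returns a Thing, not a Thing[]):

        import java.util.ArrayList;
        import java.util.List;

        class Thing { /* domain type from the question */ }

        // Same public surface as the array-backed version, but the store is now a List.
        // Client code written against getThing/getDataSize/setThing keeps compiling unchanged.
        class ClassA {
            private final List<Thing> things = new ArrayList<>();

            public Thing getThing(int index)           { return things.get(index); }
            public int getDataSize()                   { return things.size(); }
            public void setThing(int index, Thing t)   { things.set(index, t); }
            public void addThing(Thing t)              { things.add(t); }   // new capability, old callers unaffected
        }

    The price is exactly the one the question names: every operation clients need has to be re-exposed by the enclosing class, so richer collections mean more than three delegating methods.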

    Read the article

  • Is Running programs by address common?

    - by dgood1
    I have read some of the things posted here, and I keep reading about people running stuff like /foldername/executable -cmd NAME (I was reading about a programmer using Eclipse, so he was testing something he made). I don't see things like that when I run things here (Ubuntu 12.04) because of the launcher and the Ubuntu button at the very top. That, and Eclipse Indigo has a button for running and testing the things it makes. I'm just asking how and why it's common. (I assume it's done in the Terminal [Ctrl+Alt+T], but I'm not sure.)

    Read the article

  • How to safely copy an object?

    - by Prog
    This question is going to be a little long. Please bear with me. Something that happened in a project of mine made me think about how to safely copy objects. I'll present the situation I had and then ask a question. There was a class SomeClass: class SomeClass{ Thing[] things; public SomeClass(Thing[] things){ this.things = things; } // irrelevant stuff omitted public SomeClass copy(){ return new SomeClass(things); } } There was another class Processor that takes SomeClass objects, copies them (via someClassInstance.copy()), manipulates the copy's state, and returns the copy. Here it is: class Processor{ public SomeClass processObject(SomeClass object){ SomeClass copy = object.copy(); manipulateTheCopy(copy); return copy; } // irrelevant stuff omitted } I ran this, and it had bugs. I looked into these bugs, and it turned out that the manipulations Processor does on the copy actually affect not only the copy, but also the original SomeClass object that was passed into processObject. I found out that it was because the original and the copy shared state - because the original passed its field things into the copy when creating it. This made me realize that copying objects is harder than simply instantiating them with the same fields as the original. For the two objects to be completely disconnected, without any shared state, each of the fields passed to the copy also has to be copied. And if that object contains other objects - they have to be copied too. And so on. So basically, in order to be able to actually copy an object, each class in the system must have a copy() method that also invokes copy() on all of its fields, and so on. So for example, for copy() in SomeClass to work, it needs to look like this: public SomeClass copy(){ Thing[] copyThings = new Thing[things.length]; for(int i=0; i<things.length; i++) copyThings[i] = things[i].copy(); return new SomeClass(copyThings); } And if Thing has object fields of its own, then its own copy() method must be appropriate: class Thing{ Apple apple; Pencil pencil; int number; public Thing(Apple apple, Pencil pencil, int number){ this.apple = apple; this.pencil = pencil; this.number = number; } public Thing copy(){ // 'number' is a primitive. return new Thing(apple.copy(), pencil.copy(), number); } } And so on. Of course, instead of all classes having a copy() method, the copying mechanism can happen in all of the getters and the constructors of classes (except in places where it isn't suitable, for example when the field points to an external object, not to an object that 'is part' of this object). Still, that means that in order to be able to safely copy an object - most classes would have to have copying mechanisms in their getters. My question is divided into two parts: How frequently do you need to get a copy of an object? Is this a regular issue? Is the technique described common and/or reasonable? Or is there a better way to make safe copies of objects? Or is there an easier way to safely copy objects, without them sharing any state?
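
    A minimal sketch of the deep copy the post converges on (with Thing stripped down to a single primitive field for brevity): the array is duplicated and every element copied, so the returned object shares no mutable state with the original:

        class Thing {
            private int number;
            Thing(int number) { this.number = number; }
            Thing copy() { return new Thing(number); }   // primitives only: a plain field copy suffices
            void setNumber(int n) { number = n; }
            int getNumber() { return number; }
        }

        class SomeClass {
            private final Thing[] things;
            SomeClass(Thing[] things) { this.things = things; }

            SomeClass copy() {
                Thing[] copies = new Thing[things.length];
                for (int i = 0; i < things.length; i++) {
                    copies[i] = things[i].copy();        // copy each element, not just the array reference
                }
                return new SomeClass(copies);
            }
        }

    With this version, whatever Processor does to the copy returned by processObject no longer shows through to the original, which is exactly the failure described above.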

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, run after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Summit reflections

    - by Rob Farley
    So far, my three PASS Summit experiences have been notably different to each other. My first, I wasn’t on the board and I gave two regular sessions and a Lightning Talk in which I told jokes. My second, I was a board advisor, and I delivered a precon, a spotlight and a Lightning Talk in which I sang. My third (last week), I was a full board director, and I didn’t present at all. Let’s not talk about next year. I’m not sure there are many options left. This year, I noticed that a lot more people recognised me and said hello. I guess that’s potentially because of the singing last year, but could also be because board elections can bring a fair bit of attention, and because of the effort I’ve put in through things like 24HOP... Yeah, ok. It’d be the singing. My approach was very different though. I was watching things through different eyes. I looked for the things that seemed to be working and the things that didn’t. I had staff there again, and was curious to know how their things were working out. I knew a lot more about what was going on behind the scenes to make various things happen, and although very little about the Summit was actually my responsibility (based on not having that portfolio), my perspective had moved considerably. Before the Summit started, Board Members had been given notebooks – an idea Tom (who heads up PASS’ marketing) had come up with after being inspired by seeing Bill walk around with a notebook. The plan was to take notes about feedback we got from people. It was a good thing, and the notebook forms a nice pair with the SQLBits one I got a couple of years ago when I last spoke there. I think one of the biggest impacts of this was that during the first keynote, Bill told everyone present about the notebooks. This set a tone of “we’re listening”, and a number of people were definitely keen to tell us things that would cause us to pull out our notebooks. PASSTV was a new thing this year. Justin, the host, featured on the couch and talked to a lot of people about a lot of things, including me (he talked to me about a lot of things; I don’t think he talked to a lot of people about me). Reaching people through online methods is something which interests me a lot – it has huge potential, and I love the idea of being able to broadcast to people who are unable to attend in person. I’m keen to see how this medium can be developed over time. People who know me will know that I’m a keen advocate of certification – I've been SQL certified since version 6.5, and have even been involved in creating exams. However, I don’t believe in studying for exams. I think training is worthwhile for learning new skills, but the goal should be on learning those skills, not on passing an exam. Exams should be for proving that the skills are there, not a goal in themselves. The PASS Summit is an excellent place to take exams though, and with an attitude of professional development throughout the event, why not? So I did. I wasn’t expecting to take one, but I was persuaded and took the MCM Knowledge Exam. I hadn’t even looked at the syllabus, but tried it anyway. I was very tired, and even fell asleep at one point during it. I’ll find out my result at some point in the future – the Prometric site just says “Tested” at the moment. As I said, it wasn’t something I was expecting to do, but it was good to have something unexpected during the week. Of course it was good to catch up with old friends and make new ones.
I feel like every time I’m in the US I see things develop a bit more, with more and more people knowing who I am, who my staff are, and recognising the LobsterPot brand. I missed being a presenter, but I definitely enjoyed seeing many friends on the list of presenters. I won’t try to list them, because there are so many these days that people might feel sad if I don’t mention them. For those that I managed to see, I was pleased to see that the majority of them have lifted their presentation skills since I last saw them, and I happily told them as much. One person who I will mention was Paul White, who travelled from New Zealand to his first PASS Summit. He gave two sessions (a regular session and a half-day), packed large rooms of people, and had everyone buzzing with enthusiasm. I spoke to him after the event, and he told me that his expectations were blown away. Paul isn’t normally a fan of crowds, and the thought of 4000 people would have been scary. But he told me he had no idea that people would welcome him so well, be so friendly and so down to earth. He’s seen the significance of the SQL Server community, and says he’ll be back. It’ll be good to see him there. Will you be there too?

    Read the article

  • Ubuntu 12.04, Intel Centrino Wireless-N 1030 not able to connect to wireless

    - by playcat
    I saw several similar threads, but none really solves it... I have 12.04 precise x64 installed and things worked great for some 10 days. Then it all stopped - I wasn't able to connect any more. Even more, things were so scre*ed up that it was showing another network's name while connecting :( I have an Asus K53SM-SX054. Here's the text from dmesg related to my card: http://fixee.org/paste/iaw23iw/ Basically, it's Intel Centrino Wireless-N 1030. Not sure if it's important, but I had Gnome with Unity and then moved to KDE. Any ideas what to do? I'm on the verge of reinstalling things, but I've already set up plenty of things and have some things that I'd need to back up... and I'm rather too lazy to go through with it :) EDIT: My wlan network is hidden. Not sure if this should have any influence.

    Read the article

  • Blender 2.6: How to Merge the Pros of Meshes and Surfaces

    - by fridojet
    There are two interesting kinds of objects: Meshes and Surfaces. Each of them offers very cool features.
    Object Type Specific Features
    Nice features of Surfaces (for example): they're as scalable as vector graphics (really nice!), and you can build winding things really simply.
    Nice features of Meshes (for example): you can build organic things really well using Sculpt Mode and a graphics tablet, and you can use some special things like Physics.
    My Question
    There are things for which Surfaces are better and things for which Meshes are better. But how can I use both the best features of Surfaces and the best features of Meshes on one object at once? For example: how can I use Physics (like on Meshes) on losslessly scalable objects (like Surfaces)? Thanks.
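    One workaround that is often suggested (sketched here as a possibility, not a definitive answer) is to keep the lossless Surface as the original you edit, and convert a duplicate of it to a Mesh whenever you need Mesh-only features such as Physics or Sculpt Mode. A minimal sketch using Blender's Python API (bpy); the exact operator behaviour may need checking against your Blender 2.6 build:
        import bpy

        # Assume the currently active object is the Surface you want to keep.
        surface = bpy.context.active_object

        # Duplicate it so the scalable Surface original stays untouched.
        bpy.ops.object.duplicate()

        # Convert the duplicate (now the active object) to a Mesh;
        # the mesh copy can then use Physics, Sculpt Mode, and so on.
        bpy.ops.object.convert(target='MESH')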

    Read the article

  • Should if statements be in the inner or outer method?

    - by mjcopple
    Which of these designs is better? What are the pros and cons of each? Which one would you use? Any other suggestions on how to deal with methods like this are appreciated. It is reasonable to assume that Draw() is the only place that the other draw methods are called from. This needs to expand to many more Draw* methods and Show* properties, not just the three shown here.
        public void Draw()
        {
            if (ShowAxis) { DrawAxis(); }
            if (ShowLegend) { DrawLegend(); }
            if (ShowPoints && Points.Count > 0) { DrawPoints(); }
        }
        private void DrawAxis()
        {
            // Draw things.
        }
        private void DrawLegend()
        {
            // Draw things.
        }
        private void DrawPoints()
        {
            // Draw things.
        }
    Or
        public void Draw()
        {
            DrawAxis();
            DrawLegend();
            DrawPoints();
        }
        private void DrawAxis()
        {
            if (!ShowAxis) { return; }
            // Draw things.
        }
        private void DrawLegend()
        {
            if (!ShowLegend) { return; }
            // Draw things.
        }
        private void DrawPoints()
        {
            if (!ShowPoints || Points.Count <= 0) { return; }
            // Draw things.
        }

    Read the article

  • SVG Animation in Python? - Any options other than Things! 0.4?

    - by ThantiK
    I recently gave my daughter some time on the computer, and she really likes a program called 'BabySmash!', and I also have another program that came with a keyboard attachment called 'Laugh, Smile & Learn' from playschool I think. I'm not a python guru by any stretch of the imagination and I kind of have it in my head that I want to create some sort of program in the spirit of these two, with vector animation and sounds. Thing is, it doesn't look like there are many options for vector animation in python. I found a library called Things! 0.4, but it's lacking in-depth documentation and seems to be more of an experiment in vector animation rather than a full blown solution. Are there any options that I'm not immediately finding? What other animation-libraries do you recommend? Even if the library isn't vector-based, ease of use is probably the strongest factor in my use.
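    For what it's worth, if strict SVG support turns out not to be essential, the standard-library Tkinter canvas is one low-dependency way to get BabySmash-style shape reactions; a rough sketch (the splash handler and window size are illustrative, the module is Tkinter on Python 2 and tkinter on Python 3, and sounds are omitted):
        import random
        import tkinter as tk  # use "import Tkinter as tk" on Python 2

        root = tk.Tk()
        canvas = tk.Canvas(root, width=640, height=480, bg="white")
        canvas.pack()

        def splash(event):
            # Draw a randomly placed, randomly coloured circle on each key press.
            x = random.randint(60, 580)
            y = random.randint(60, 420)
            colour = random.choice(["red", "orange", "green", "blue", "purple"])
            canvas.create_oval(x - 50, y - 50, x + 50, y + 50, fill=colour, outline="")

        root.bind("<Key>", splash)
        root.mainloop()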

    Read the article

  • Lead, Follow, or Get out of the way

    - by Daniel Moth
    This is one of the sayings (attributed to Thomas Paine) that totally resonated with me from the first time I heard it, which was only 3 years ago during some training course at work: "Lead, Follow, or Get out of the way" You'll find many books with this title and you'll find it quoted by politicians and other leaders in various countries at various times... the quote is open to interpretation and works on many levels. To set the tone of what this means to me, I'll use a simple micro example: In any given conversation, you are either leading it or following it, at different times/snapshots of the conversation. If you are not willing or able to lead it, and you are not willing or able to follow it, then you should depart. The bad alternative which this guidance encourages you NOT to do is to stick around and obstruct progress by not following, not leading, and simply complaining or trying to derail the discussion in no particular direction. The same pattern applies to your position/role at work. Either follow your management/leadership team, or try to lead them to what you think is a better place, or change jobs. Don't stick around complaining about the direction things are going, while not actively trying to either change things or make peace with it. In the previous paragraph you can replace the word "your management" with "the people reporting to you" and the guidance still holds. Either lead your direct reports to where you think they should go, or follow their lead, or change jobs. Complaining about folks not taking direction while doing nothing is not a maintainable state. To me this quote is not about a permanent state, it is not about some people always leading and some always following: It is about a role/hat that anybody can play/wear at any given moment. One minute I am leading you, the next I am following you, and the next we are both following someone else and so on... When there is disagreement, debate the different directions for as long as it takes for you to be comfortable that you can either follow or lead. If you don't become comfortable with either of those, get out of the way. Something to remember is that it is impossible to learn how to lead well, without learning how to follow well (probably deserves its own blog entry)... Things go wrong when someone thinks that they must always be leading, or when everybody wants to follow and nobody steps up to lead... Things go wrong when more than one person wants to lead and they don't try to reach agreement on a shared direction, stubbornly sticking to their guns pulling the rest of the team in multiple directions... Things go wrong when more than one person wants to lead and after numerous and lengthy discussions, none of them decides to follow or get out of the way... Things go wrong when people don't want to lead, don't want to follow, and insist on sticking around... While there are a few ways things can go wrong, as enumerated in the previous paragraph, the most common one in my experience is the last one I mentioned. You'll recognize these folks as the ones that always complain about everything that is wrong with their company/product but do nothing about it. Every time you hear someone giving feedback on how something is wrong or suboptimal, ask them "So now that you identified the problem, what do you think the solution is and what are you doing to drive us to that solution?" The next time things start going wrong, step up and remind everyone: Lead, Follow, or Get out of the way. 
For more perspectives, and for input to help you form your own interpretation, search the web for this phrase to see in what contexts it is being used (bing, google). Finally, regardless of your political views, I hope you can appreciate, if only as an example, this perspective of someone leading by actually getting out of the way. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Django aggregate query generating SQL error

    - by meepmeep
    I'm using Django 1.1.1 on a SQL Server 2005 db using the latest sqlserver_ado library. models.py includes:
        class Project(models.Model):
            name = models.CharField(max_length=50)

        class Thing(models.Model):
            project = models.ForeignKey(Project)
            reference = models.CharField(max_length=50)

        class ThingMonth(models.Model):
            thing = models.ForeignKey(Thing)
            timestamp = models.DateTimeField()
            ThingMonthValue = models.FloatField()
            class Meta:
                db_table = u'ThingMonthSummary'
    In a view, I have retrieved a queryset called 'things' which contains 25 Things:
        things = Thing.objects.select_related().filter(project=1).order_by('reference')
    I then want to do an aggregate query to get the average ThingMonthValue for the first 20 of those Things for a certain period, and the same value for the last 5. For the first 20 I do:
        averageThingMonthValue = ThingMonth.objects.filter(thing__in=things[:20], timestamp__range=("2009-01-01 00:00", "2010-03-01 00:00")).aggregate(Avg('ThingMonthValue'))['ThingMonthValue__avg']
    This works fine, and returns the desired value. For the last 5 I do:
        averageThingMonthValue = ThingMonth.objects.filter(thing__in=things[20:], timestamp__range=("2009-01-01 00:00", "2010-03-01 00:00")).aggregate(Avg('ThingMonthValue'))['ThingMonthValue__avg']
    But for this I get an SQL error: 'Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.' The SQL query being used by django reads:
        SELECT AVG([ThingMonthSummary].[ThingMonthValue]) AS [ThingMonthValue__avg]
        FROM [ThingMonthSummary]
        WHERE ([ThingMonthSummary].[thing_id] IN (
            SELECT _row_num, [id] FROM (
                SELECT ROW_NUMBER() OVER (ORDER BY [AAAA].[id] ASC) as _row_num, [AAAA].[id]
                FROM (SELECT U0.[id] FROM [Thing] U0 WHERE U0.[project_id] = 1) AS [AAAA]
            ) as QQQ where 20 < _row_num)
        AND [ThingMonthSummary].[timestamp] BETWEEN '01/01/09 00:00:00' and '03/01/10 00:00:00')
    Any idea why it works for one slice of the Things and not the second? I've checked and the two slices do contain the desired Things correctly.
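    One possible workaround (a sketch, not from the original question) is to evaluate the slice in Python first, so the __in lookup receives a plain list of ids rather than a sliced queryset that Django has to express as a ROW_NUMBER() subquery:
        from django.db.models import Avg

        # Force the "last 5" slice to a concrete list of primary keys.
        last_five_ids = list(things.values_list('id', flat=True)[20:])

        averageThingMonthValue = ThingMonth.objects.filter(
            thing__in=last_five_ids,
            timestamp__range=("2009-01-01 00:00", "2010-03-01 00:00"),
        ).aggregate(Avg('ThingMonthValue'))['ThingMonthValue__avg']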

    Read the article

  • I Hereby Resolve… (T-SQL Tuesday #14)

    - by smisner
    It’s time for another T-SQL Tuesday, hosted this month by Jen McCown (blog|twitter), on the topic of resolutions. Specifically, “what techie resolutions have you been pondering, and why?” I like that word – pondering – because I ponder a lot. And while there are many things that I do already because of my job, there are many more things that I ponder about doing…if only I had the time. Then I ponder about making time, but then it’s back to work! In 2010, I was moderately more successful in making time for things that I ponder about than I had been in years past, and I hope to continue that trend in 2011. If Jen hadn’t settled on this topic, I could keep my ponderings to myself and no one would ever know the outcome, but she’s egged me on (and everyone else that chooses to participate)! So here goes… For me, having resolve to do something means that I wouldn’t be doing that something as part of my ordinary routine. It takes extra effort to make time for it. It’s not something that I do once and check off a list, but something that I need to commit to over a period of time. So with that in mind, I hereby resolve… To Learn Something New… One of the things I love about my job is that I get to do a lot of things outside of my ordinary routine. It’s a veritable smorgasbord of opportunity! So what more could I possibly add to that list of things to do? Well, the more I learn, the more I realize I have so much more to learn. It would be much easier to remain in ignorant bliss, but I was born to learn. Constantly. (And apparently to teach, too – my father will tell you that as a small child, I had the neighborhood kids gathered together to play school – in the summer. I’m sure they loved that – but they did it!) These are some of the things that I want to dedicate some time to learning this year: Spatial data. I have a good understanding of how maps in Reporting Services work, and I can cobble together a simple T-SQL spatial query, but I know I’m only scratching the surface here. Rob Farley (blog|twitter) posted interesting examples of combining maps and PivotViewer, and I think there’s so many more creative possibilities. I’ve always felt that pictures (including charts and maps) really help people get their minds wrapped around data better, and because a lot of data has a geographic aspect to it, I believe developing some expertise here will be beneficial to my work. PivotViewer. Not only is PivotViewer combined with maps a useful way to visualize data, but it’s an interesting way to work with data. If you haven’t seen it yet, check out this interactive demonstration using the Netflix OData feed. According to Rob Farley, learning how to work with PivotViewer isn’t trivial. Just the type of challenge I like! Security. You’ve heard of the accidental DBA? Well, I am the accidental security person – is there a word for that role? My eyes used to glaze over when having to study about security, or when reading anything about it. Then I had a problem long ago that no one could figure out – not even the vendor’s tech support – until I rolled up my sleeves and painstakingly worked through the myriad of potential problems to resolve a very thorny security issue. I learned a lot in the process, and have been able to share what I’ve learned with a lot of people. But I’m not convinced their eyes weren’t glazing over, too. I don’t take it personally – it’s just a very dry topic! 
So in addition to deepening my understanding about security, I want to find a way to make the subject as it relates to SQL Server and business intelligence more accessible and less boring. Well, there’s actually a lot more that I could put on this list, and a lot more things I have plans to do this coming year, but I run the risk of overcommitting myself. And then I wouldn’t have time… To Have Fun! My name is Stacia and I’m a workaholic. When I love what I do, it’s difficult to separate out the work time from the fun time. But there are some things that I’ve been meaning to do that aren’t related to business intelligence for which I really need to develop some resolve. And they are techie resolutions, too, in a roundabout sort of way! Photography. When my husband and I went on an extended camping trip in 2009 to Yellowstone and the Grand Tetons, I had a nice little digital camera that took decent pictures. But then I saw the gorgeous cameras that other tourists were toting around and decided I needed one too. So I bought a Nikon D90 and have started to learn to use it, but I’m definitely still in the beginning stages. I traveled so much in 2010 and worked on two book projects that I didn’t have a lot of free time to devote to it. I was very inspired by Kimberly Tripp’s (blog|twitter) and Paul Randal’s (blog|twitter) photo-adventure in Alaska, though, and plan to spend some dedicated time with my camera this year. (And hopefully before I move to Alaska – nothing set in stone yet, but we hope to move to a remote location – with Internet access – later this year!) Astronomy. I have this cool telescope, but it suffers the same fate as my camera. I have been gone too much and busy with other things that I haven’t had time to work with it. I’ll figure out how it works, and then so much time passes by that I forget how to use it. I have this crazy idea that I can actually put the camera and the telescope together for astrophotography, but I think I need to start simple by learning how to use each component individually. As long as I’m living in Las Vegas, I know I’ll have clear skies for nighttime viewing, but when we move to Alaska, we’ll be living in a rain forest. I have no idea what my opportunities will be like there – except I know that when the sky is clear, it will be far more amazing than anything I can see in Vegas – even out in the desert - because I’ll be so far away from city light pollution. I’ve been contemplating putting together a blog on these topics as I learn. As many of my fellow bloggers in the SQL Server community know, sometimes the best way to learn something is to sit down and write about it. I’m just stumped by coming up with a clever name for the new blog, which I was thinking about inaugurating with my move to Alaska. Except that I don’t know when that will be exactly, so we’ll just have to wait and see which comes first!

    Read the article

  • Project Navigation and File Nesting in ASP.NET MVC Projects

    - by Rick Strahl
    More and more I’m finding myself getting lost in the files in some of my larger Web projects. There’s so much freaking content to deal with – HTML Views, several derived CSS pages, page level CSS, script libraries, application wide scripts and page specific script files etc. etc. Thankfully I use Resharper and the Ctrl-T Go to Anything which autocompletes you to any file, type, member rapidly. Awesome except when I forget – or when I’m not quite sure of the name of what I’m looking for. Project navigation is still important. Sometimes while working on a project I seem to have 30 or more files open and trying to locate another new file to open in the solution often ends up being a mental exercise – “where did I put that thing?” It’s those little hesitations that tend to get in the way of workflow frequently. To make things worse, most NuGet packages for client side frameworks and scripts dump stuff into folders that I generally don’t use. I’ve never been a fan of the ‘Content’ folder in MVC which is just an empty layer that doesn’t serve much of a purpose. It’s usually the first thing I nuke in every MVC project. To me the project root is where the actual content for a site goes – is there really a need to add another folder to force another path into every resource you use? It’s ugly and also inefficient as it adds additional bytes to every resource link you embed into a page.
    Alternatives
    I’ve been playing around with different folder layouts recently and found that moving my cheese around has actually made project navigation much easier. In this post I show a couple of things I’ve found useful and maybe you’ll find some of these useful as well, or at least get some ideas of what can be changed to provide better project flow. The first thing I’ve been doing is adding a root Code folder and putting all server code into that. I’m a big fan of treating the Web project root folder as my Web root folder so all content comes from the root without unneeded nesting like the Content folder. By moving all server code out of the root tree (except for Code) the root tree becomes a lot cleaner immediately as you remove Controllers, App_Start, Models etc. and move them underneath Code. Yes this adds another folder level for server code, but it leaves only code related things in one place that’s easier to jump back and forth in. Additionally I find myself doing a lot less with server side code these days, and more with client side code, so I want the server code separated from that. The root folder itself then serves as the root content folder. Specifically I have the Views folder below it, as well as the Css and Scripts folders which serve to hold only common libraries and global CSS and Scripts code. These days of building SPA style applications, I also tend to have an App folder there where I keep my application specific JavaScript files, as well as HTML View templates for client SPA apps like Angular. Here’s an example of what this looks like in a relatively small project: The goal is to keep things that are related together, so I don’t end up jumping around so much in the solution to get to specific project items. The Code folder may irk some of you and hark back to the days of the App_Code folder in non Web-Application projects, but these days I find myself messing with a lot less server side code and much more with client side files – HTML, CSS and JavaScript. Generally I work on a single controller at a time – once that’s open, it’s open – that’s typically the only server code I work with regularly. 
Business logic lives in another project altogether, so other than the controller and maybe ViewModels there’s not a lot of code being accessed in the Code folder. So throwing that off the root and isolating it seems like an easy win.
Nesting Page specific content
In a lot of my existing applications that are pure server side MVC, perhaps with some JavaScript associated with them, I tend to have page level JavaScript and CSS files. For these types of pages I actually prefer the local files stored in the same folder as the parent view. So typically I have .css and .js files with the same name as the view in the same folder. This looks something like this: In order for this to work you have to also make a configuration change inside of the /Views/web.config file, as the Views folder is blocked with the BlockViewHandler that prohibits access to content from that folder. It’s easy to fix by changing the path from * to *.cshtml or *.vbhtml so that only view retrieval is blocked:
    <system.webServer>
      <handlers>
        <remove name="BlockViewHandler"/>
        <add name="BlockViewHandler" path="*.cshtml" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" />
      </handlers>
    </system.webServer>
With this in place, from inside of your Views you can then reference those same resources like this:
    <link href="~/Views/Admin/QuizPrognosisItems.css" rel="stylesheet" />
and
    <script src="~/Views/Admin/QuizPrognosisItems.js"></script>
which works fine. JavaScript and CSS files in the Views folder deploy just like the .cshtml files do and can be referenced from this folder as well. Making this happen is not really as straightforward as it should be with just Visual Studio unfortunately, as there’s no easy way to get the file nesting from the VS IDE directly (you have to modify the .csproj file). However, Mads Kristensen has a nice Visual Studio Add-in that provides file nesting via a shortcut menu option. Using this you can select each of the ‘child’ files and then nest them under a parent file. In the case above I select the .js and .css files and nest them underneath the .cshtml view. I was even toying with the idea of throwing the controller.cs files into the Views folder, but that’s maybe going a little too far :-) It would work, however, as Visual Studio doesn’t publish .cs files and the compiler doesn’t care where the files live. There are lots of options and if you think that would make life easier it’s another option to help group related things together. Are there any downsides to this? Possibly – if you’re using automated minification/packaging tools like ASP.NET Bundling or Grunt/Gulp with Uglify, it becomes a little harder to group script and css files for minification as you may end up looking in multiple folders instead of a single folder. But – again that’s a one time configuration step that’s easily handled and much less intrusive than constantly having to search for files in your project.
Client Side Folders
The particular project shown in the screen shots above is a traditional server side ASP.NET MVC application with most content rendered into server side Razor pages. There’s a fair amount of client side stuff happening on these pages as well – specifically several of these pages are self contained single page Angular applications that deal with 1 or maybe 2 separate views and the layout I’ve shown above really focuses on the server side aspect where there are Razor views with related script and css resources. 
For applications that are more client centric and have a lot more script and HTML template based content I tend to use the same layout for the server components, but the client side code can often be broken out differently. In SPA type applications I tend to follow the App folder approach where all the application pieces that make up the SPA application end up below the App folder. Here’s what that looks like for me – this is an AngularJs project: In this case the App folder holds both the application specific js files, and the partial HTML views that get loaded into this single page SPA application. In this particular Angular SPA application that has controllers linked to particular partial views, I prefer to keep the script files that are associated with the views – Angular Js Controllers in this case – with the actual partials. Again I like the proximity of the view with the main code associated with the view, because 90% of the UI application code that gets written is handled between these two files. This approach works well, but only if controllers are fairly closely aligned with the partials. If you have many smaller sub-controllers or lots of directives where the alignment between views and code is more segmented this approach starts falling apart and you’ll probably be better off with separate folders in the js folder. Following Angular conventions you’d have controllers/directives/services etc. folders. Please note that I’m not saying any of these ways are right or wrong – this is just what has worked for me and why!
Skipping Project Navigation altogether with Resharper
I’ve talked a bit about project navigation in the project tree, which is a common way to navigate and which we all use at least some of the time, but if you use a tool like Resharper – which has Ctrl-T to jump to anything, you can quickly navigate with a shortcut key and autocomplete search. Here’s what Resharper’s jump to anything looks like: Resharper’s Goto Anything box lets you type and quick search over files, classes and members of the entire solution which is a very fast and powerful way to find what you’re looking for in your project, bypassing the solution explorer altogether. As long as you remember to use it (which I sometimes don’t) and you know what you’re looking for, it’s by far the quickest way to find things in a project. It’s a shame that this sort of a simple search interface isn’t part of the native Visual Studio IDE.
Work how you like to work
Ultimately it all comes down to workflow and how you like to work, and what makes *you* more productive. Following pre-defined patterns is great for consistency, as long as they don’t get in the way of how you work. A lot of the default folder structures in Visual Studio for ASP.NET MVC were defined when things were done differently. These days we’re dealing with a lot more diverse project content than when ASP.NET MVC was originally introduced and project organization definitely is something that can get in the way if it doesn’t fit your workflow. So take a look and see what works well and what might benefit from organizing files differently. As with so many things in ASP.NET, as things evolve and tend to get more complex I’ve found that I end up fighting some of the conventions. The good news is that you don’t have to follow the conventions and you have the freedom to do just about anything that works for you. 
Even though what I’ve shown here diverges from conventions, I don’t think anybody would stumble over these relatively minor changes and not immediately figure out where things live, even in larger projects. But nevertheless think long and hard before breaking those conventions – if there isn’t a good reason to break them or the changes don’t provide improved workflow then it’s not worth it. Break the rules, but only if there’s a quantifiable benefit. You may not agree with how I’ve chosen to divert from the standard project structures in this article, but maybe it gives you some ideas of how you can mix things up to make your existing project flow a little nicer and make it easier to navigate for your environment.
© Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET MVC.

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and say what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. 
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to writing tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger? 
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong. We’ve covered how you get to grip with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work. 
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always to be quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews

    Read the article

  • How do I sort a hash table in javascript?

    - by Colen
    I have a javascript hash table, like so:

        var things = [ ];
        things["hello"] = {"name" : "zzz I fell asleep", "number" : 7};
        things["one"] = {"name" : "something", "number" : 18};
        things["two"] = {"name" : "another thing", "number" : -2};

    I want to sort these into order by name, so if I iterate through the hash table it will go in order: another thing, something, zzz I fell asleep. I tried doing this:

        function compareThings(thing1, thing2) {
            var name1 = thing1["name"].toLowerCase();
            var name2 = thing2["name"].toLowerCase();
            if (name1 < name2) { return -1; }
            if (name1 > name2) { return 1; }
            return 0;
        }
        things.sort(compareThings);

    But it doesn't seem to work. Edit: it occurs to me that perhaps a sorted hash table is an oxymoron. If so, what's the best way to get access to a sorted list of the things here?
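    One way to act on the "sorted hash table is an oxymoron" realisation: keep the lookup table as a plain object and, whenever ordered access is needed, copy the values into a real array and sort that. This is only a sketch using the data from the question; the sortedThings helper name is made up for the example.

        // Use a plain object (not an Array) as the lookup table.
        var things = {};
        things["hello"] = { "name": "zzz I fell asleep", "number": 7 };
        things["one"]   = { "name": "something",         "number": 18 };
        things["two"]   = { "name": "another thing",     "number": -2 };

        // Hypothetical helper: returns the values as an array sorted by name.
        // Array.prototype.sort never sees string-keyed properties, which is
        // why calling .sort() on the original structure appeared to do nothing.
        function sortedThings(table) {
          var list = [];
          for (var key in table) {
            if (table.hasOwnProperty(key)) {
              list.push(table[key]);
            }
          }
          list.sort(function (a, b) {
            var n1 = a.name.toLowerCase();
            var n2 = b.name.toLowerCase();
            return n1 < n2 ? -1 : (n1 > n2 ? 1 : 0);
          });
          return list;
        }

        sortedThings(things).forEach(function (thing) {
          console.log(thing.name); // another thing, something, zzz I fell asleep
        });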

    Read the article

  • Using XSLT, how do I separate nodes based on their value?

    - by Will Goring
    I have a pretty flat XML structure that I need to reorder into categorised sections and, for the life of me, I can't figure out how to do it in XSLT (not that I'm by any means an expert). Basically, the original XML looks kinda like:

        <things>
          <thing> <value>one</value>   <type>a</type> </thing>
          <thing> <value>two</value>   <type>b</type> </thing>
          <thing> <value>three</value> <type>b</type> </thing>
          <thing> <value>four</value>  <type>a</type> </thing>
          <thing> <value>five</value>  <type>d</type> </thing>
        </things>

    And I need to output something like:

        <data>
          <a-things>
            <a>one</a>
            <a>four</a>
          </a-things>
          <b-things>
            <b>two</b>
            <b>three</b>
          </b-things>
          <d-things>
            <d>five</d>
          </d-things>
        </data>

    Note that I can't output <c-things> if there aren't any <c> elements, but I do know ahead of time what the complete list of types is, and it's fairly short, so handcoding templates for each type is definitely possible. It feels like I could probably hack something together using <xsl:if> and <xsl:for-each>, but it also feels like there must be a more... 'templatey' way to do it. Can anyone help? Cheers.
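    In XSLT 1.0 this kind of regrouping is usually approached with <xsl:key> and the Muenchian grouping idiom (or <xsl:for-each-group> where XSLT 2.0 is available), with one small template per known type since the list of types is fixed. Purely to make the target structure concrete, here is the same grouping sketched in JavaScript; the variable names and the hard-coded knownTypes list are assumptions for the sketch, not part of the original question.

        // Illustrative only: group values by type, then emit a "<type>-things"
        // section for each non-empty group, skipping types with no members.
        var things = [
          { value: "one",   type: "a" },
          { value: "two",   type: "b" },
          { value: "three", type: "b" },
          { value: "four",  type: "a" },
          { value: "five",  type: "d" }
        ];
        var knownTypes = ["a", "b", "c", "d"]; // known ahead of time, as in the question

        var groups = {};
        things.forEach(function (thing) {
          (groups[thing.type] = groups[thing.type] || []).push(thing.value);
        });

        var xml = ["<data>"];
        knownTypes.forEach(function (type) {
          if (!groups[type]) { return; } // nothing emitted for <c-things>
          xml.push("  <" + type + "-things>");
          groups[type].forEach(function (value) {
            xml.push("    <" + type + ">" + value + "</" + type + ">");
          });
          xml.push("  </" + type + "-things>");
        });
        xml.push("</data>");
        console.log(xml.join("\n"));

    The XSLT version would follow the same shape: one pass to group <thing> elements by <type>, then a wrapper element emitted only when the group is non-empty.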

    Read the article

  • Why does coffeescript generate classes like this?

    - by ryeguy
    Given the following coffeescript code:

        class Animal
          constructor: (@name) ->
          speak: (things) ->
            "My name is #{@name} and I like #{things}"

    This is generated:

        var Animal = (function() {
          function Animal(name) {
            this.name = name;
          }
          Animal.prototype.speak = function(things) {
            return "My name is " + this.name + " and I like " + things;
          };
          return Animal;
        })();

    But why isn't this more idiomatic code generated?

        var Animal = function Animal(name) {
          this.name = name;
        };
        Animal.prototype.speak = function(things) {
          return "My name is " + this.name + " and I like " + things;
        };

    I know that coffeescript wraps a lot of stuff in anonymous functions to control scope leak, but what could leak here?
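    Not a definitive answer, but one way to see what the closure buys: as soon as a class has private class-level helpers or a captured superclass reference, the wrapping function gives them somewhere to live without leaking names into the enclosing scope. The Dog subclass and the _super name below are invented for this sketch and are not CoffeeScript's literal output.

        // Sketch only. With the IIFE pattern, per-class plumbing (a captured
        // superclass reference, private helpers) stays inside the closure
        // instead of escaping into the surrounding module scope.
        var Animal = (function() {
          function Animal(name) {
            this.name = name;
          }
          Animal.prototype.speak = function(things) {
            return "My name is " + this.name + " and I like " + things;
          };
          return Animal;
        })();

        var Dog = (function(_super) {
          function Dog(name) {
            _super.call(this, name); // _super is only visible inside this closure
          }
          Dog.prototype = Object.create(_super.prototype);
          Dog.prototype.constructor = Dog;
          Dog.prototype.speak = function(things) {
            return _super.prototype.speak.call(this, things) + ", woof";
          };
          return Dog;
        })(Animal);

        console.log(new Dog("Rex").speak("bones"));

    For a class as small as Animal nothing would leak, but emitting the same wrapper everywhere keeps the generated code uniform whether or not the class grows that extra plumbing.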

    Read the article

  • Python -- what is NOT in 2.7 that IS in 3.1? So many things have been back-ported, what is NOT?

    - by StuFuller
    I've been following the saga of Python 3.x and have watched the 3.x features gradually getting backported to the 2.x line. Most of the libraries I use haven't been ported, and some (e.g. Twisted) seem covertly or overtly hostile to 3.x to varying degrees. At any rate, there has been very little movement towards compatible versions of many of them, especially the larger ones. So, my question is: with all the features that have been backported, what is still available in 3.x that has NOT been backported? It's pretty easy to find what has been backported, but not what's left. Right now, porting to 3.x just seems like all pain, and I can't see the gain; maybe an "Only in 3.x" list would let me see the light... Thanks, Stu

    Read the article

< Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >