Search Results

Search found 9124 results on 365 pages for 'general dba'.


  • How can Highscores be more meaningful and engaging?

    - by Anselm Eickhoff
    I'm developing a casual Android game in which the player's success can easily be represented by a number (I'm not being more specific because I'm interested in the topic in general). Although I'm not a highscore person myself, I was thinking of implementing a highscore for the game, but I see at least two problems with the classical leaderboard approach:

    - Very soon the highscore will be dominated by hardcore players, leaving no chance for beginners, who are then frustrated. This is especially severe in casual games.
    - There is no direct reward for being a loyal player who plays the game over and over again.

    My current idea is to "reset" the highscore every 24 hours (for example) and each day nominate a "player of the day" who gets a "star". There would then be some kind of meta-highscore of the players with the most stars. That way even beginners might have a chance to be "player of the day" once, and continued or repeated play is rewarded much more. The idea is still very rough and there are many problems in the details and the technical implementation, but I have a feeling it is a step in the right direction. Do you have creative new ideas on how to implement highscores? Which games do this well, and what types of highscores do you find most engaging?
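
    A sketch of the star idea, just to make the mechanic concrete (all types and names here are invented for illustration): group scores by day, award the day's winner a star, and rank by accumulated stars.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical types, purely for illustration.
        public class ScoreEntry
        {
            public string PlayerId;
            public int Points;
            public DateTime SubmittedAt;
        }

        public static class StarBoard
        {
            // Award one "star" to the top scorer of each calendar day, then
            // rank players by accumulated stars instead of raw score.
            public static Dictionary<string, int> TallyStars(IEnumerable<ScoreEntry> scores)
            {
                return scores
                    .GroupBy(s => s.SubmittedAt.Date)
                    .Select(day => day.OrderByDescending(s => s.Points).First().PlayerId)
                    .GroupBy(id => id)
                    .ToDictionary(g => g.Key, g => g.Count());
            }
        }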

    Read the article

  • How do you support your code post employment end?

    - by James
    What is the process for leaving a company (or even a group/division) in terms of code support? Is it best to handle all questions? Do you give the remaining developers access to yourself as a future resource? If so, is there a way to avoid giving full access? I've experienced first hand how invaluable answers about the general software architecture from the initial developer can be. I understand that if serious assistance is needed, it becomes a typical case of employment negotiation in the form of a support contract. However, should such assistance be required, what steps can you take to ease the process of contacting you? I was thinking of doing something like making a (YOUR_NAME)_codesupport @ (YOUR_FAVORITE_EMAIL_CLIENT).com address.

    My situation specifics: I'm a co-op student, and as such I bounce around companies on 4-month stints. This means introducing myself to a lot of new code bases, as well as leaving a fair share of orphaned code behind when I leave a company. I feel bad if I leave junk code around.

    Read the article

  • Effect of using dedicated NVidia card instead of Intel HD4000

    - by Sman789
    Short version: Can someone please advise me on the effects of adding a dedicated NVIDIA GeForce GT 630M card to an Ubuntu laptop, in terms of power consumption and performance gains/losses for general productivity tasks and booting up. Also, how good are the closed source, open source, and Bumblebee drivers for these newer cards compared to the support for the Intel HD4000?

    Long version/background, if any info here is helpful: I'm thinking of ordering a laptop from PC Specialist (a UK company who actually sell machines without Windows pre-installed) with the following specifications:

    - Genesis IV: 15.6" AUO Matte 95% Gamut LED Widescreen (1920x1080)
    - Intel® Core™ i5 Dual Core Mobile Processor i5-3210M (2.50GHz) 3MB
    - 4GB SAMSUNG 1600MHz SODIMM DDR3 MEMORY (1 x 4GB)
    - 120GB INTEL® 520 SERIES SSD, SATA 6 Gb/s (up to 550MB/s read, 520MB/s write)
    - Intel 2 Channel High Definition Audio + MIC/Headphone Jack
    - GIGABIT LAN & WIRELESS INTEL® N135 802.11N (150Mbps) + BLUETOOTH

    Now, as I want this laptop mainly for work and not for games, I would be more than content with the HD4000 integrated chip that comes with the processor. However, for compatibility reasons I am not able to get the specs I want unless I choose an NVIDIA GeForce GT 630M 1GB graphics card, which I don't have a great deal of use for. I'm willing to buy it, however, as it's still cheaper than any other laptop with the specs I want. But I know that Linux power management isn't fantastic with open-source graphics drivers, and I don't know much about Bumblebee. Basically, whilst I'm happy to 'tolerate' the card being there, I don't want to experience any negative effects on the rest of my system (battery, performance, etc.), and if there are likely to be any, I might reconsider my purchase. So if anyone can advise me on the effects, I would be very grateful, since I doubt I can just turn the card off. Thank you for any assistance :)

    Read the article

  • Windows 8 App Downloads Increasing + Over 5,000 Apps Available

    - by David Paquette
    Windows 8 will be unleashed on the general public tomorrow, and I thought it would be a good time to review some of the numbers I have been tracking over the last month.

    Downloads of Windows 8 apps have been steadily increasing over the last month. Below is a screenshot from the App Summary page for my Windows 8 app. The blue line is my app, while the orange line is the average for the top 5 apps in that subcategory. Considering the large gap between the two, I think it is safe to assume that my app is NOT in the top 5 of the subcategory. The spike in the last couple of days is fairly dramatic, and I am a little surprised by it. I would have expected that kind of spike on the days following the official release, as opposed to the days leading up to it.

    Finally, the all-important app count. There have been some stories floating around that the Windows 8 Store is a ghost town and that there are no apps available. I think these might be exaggerating the situation a little. As of this morning, there are over 5000 apps available for download in the US store. Obviously a far cry from the hundreds of thousands available in other app stores, but we are seeing solid growth in this number. Less than a month ago, that number was 2000, which means the store more than doubled in less than a month. If the growth continues, it won’t be long before the Windows 8 Store is filled with all the apps you need (and a whole lot you don’t need).

    Read the article

  • The Enterprise Side of JavaFX: Part Two

    - by Janice J. Heiss
    A new article, part of a three-part series, now up on the front page of otn/java, by Java Champion Adam Bien, titled “The Enterprise Side of JavaFX,” shows developers how to implement the LightView UI dashboard with JavaFX 2.

    Bien explains that “the RESTful back end of the LightView application comes with a rudimentary HTML page that is used to start/stop the monitoring service, set the snapshot interval, and activate/deactivate the GlassFish monitoring capabilities.” He also explains that “the configuration view implemented in the org.lightview.view.Browser component is needed only to start or stop the monitoring process or set the monitoring interval.”

    Bien concludes his article with a general summary of the principles applied: “JavaFX encourages encapsulation without forcing you to build models for each visual component. With the availability of bindable properties, the boundary between the view and the model can be reduced to an expressive set of bindable properties. Wrapping JavaFX components with ordinary Java classes further reduces the complexity. Instead of dealing with low-level JavaFX mechanics all the time, you can build simple components and break down the complexity of the presentation logic into understandable pieces. CSS skinning further helps with the separation of the code that is needed for the implementation of the presentation logic and the visual appearance of the application on the screen. You can adjust significant portions of an application's look and feel directly in CSS files without touching the actual source code.”

    Check out the article here.

    Read the article

  • Does unit testing lead to premature generalization (specifically in the context of C++)?

    - by Martin
    Preliminary notes: I won't go into the distinction between the different kinds of test; there are already a few questions on these sites regarding that. I'll take what's there, which says: unit testing in the sense of "testing the smallest isolatable unit of an application" -- which is what this question actually derives from.

    The isolation problem: what is the smallest isolatable unit of a program? Well, as I see it, it depends (highly?) on what language you are coding in. Michael Feathers talks about the concept of a seam: [WEwLC, p31] "A seam is a place where you can alter behavior in your program without editing in that place." And without going into the details, I understand a seam -- in the context of unit testing -- to be a place in a program where your "test" can interface with your "unit".

    Examples: unit tests -- especially in C++ -- require the code under test to add more seams than would be strictly called for by a given problem. For example:

    - Adding a virtual interface where a non-virtual implementation would have been sufficient
    - Splitting -- generalizing(?) -- a (smallish) class further "just" to facilitate adding a test
    - Splitting a single-executable project into seemingly "independent" libs, "just" to facilitate compiling them independently for the tests

    The question: I'll try a few versions that hopefully ask about the same point:

    - Is the way that unit tests require one to structure an application's code "only" beneficial for the unit tests, or is it actually beneficial to the application's structure?
    - Is the generalization that code needs to exhibit in order to be unit-testable useful for anything but the unit tests?
    - Does adding unit tests force one to generalize unnecessarily?
    - Is the shape unit tests force on code "always" also a good shape for the code in general, as seen from the problem domain?

    I remember a rule of thumb that said don't generalize until you need to / until there's a second place that uses the code. With unit tests, there's always a second place that uses the code -- namely the unit test. So is this reason enough to generalize?
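
    To make the first example concrete, here is a minimal C# sketch (all types invented for illustration) of an indirection that exists only so a test can substitute the implementation:

        using System;

        public interface IClock            // seam: exists so tests can substitute time
        {
            DateTime Now { get; }
        }

        public class SystemClock : IClock  // the only "real" implementation
        {
            public DateTime Now { get { return DateTime.Now; } }
        }

        public class SessionValidator
        {
            private readonly IClock _clock;

            public SessionValidator(IClock clock) { _clock = clock; }

            // Production code only ever needed DateTime.Now directly; the
            // indirection exists so a test can inject a fixed clock.
            public bool IsExpired(DateTime expiresAt)
            {
                return _clock.Now > expiresAt;
            }
        }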

    Read the article

  • Balancing dependency injection with public API design

    - by kolektiv
    I've been contemplating how to balance testable design using dependency injection with providing a simple, fixed public API. My dilemma is: people will want to do something like var server = new Server() { ... } and not have to worry about creating the many dependencies (and the graph of dependencies) that a Server(,,,,,,) may have. While developing, I don't worry too much, as I use an IoC/DI framework to handle all that (I'm not using the lifecycle management aspects of any container, which would complicate things further).

    Now, the dependencies are unlikely to be re-implemented. Componentisation in this case is almost purely for testability (and decent design!) rather than for creating seams for extension, etc. People will 99.999% of the time wish to use a default configuration. So:

    - I could hardcode the dependencies. Don't want to do that; we lose our testing!
    - I could provide a default constructor with hard-coded dependencies and one which takes dependencies. That's... messy, and likely to be confusing, but viable.
    - I could make the dependency-receiving constructor internal and make my unit tests a friend assembly (assuming C#), which tidies the public API but leaves a nasty hidden trap lurking for maintenance.

    Having two constructors which are implicitly connected rather than explicitly would be bad design in general, in my book. At the moment that's about the least evil I can think of. Opinions? Wisdom?
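
    For what it's worth, a minimal sketch of the second option (dependency names invented), sometimes called "poor man's DI": the default constructor chains to the dependency-receiving one, so the wiring is explicit in exactly one place.

        using System;

        // Invented dependency, purely for illustration.
        public interface IConnectionListener { void Listen(); }

        public class TcpConnectionListener : IConnectionListener
        {
            public void Listen() { /* ... */ }
        }

        public class Server
        {
            private readonly IConnectionListener _listener;

            // Convenience constructor: the 99.999% case, wired to defaults.
            public Server() : this(new TcpConnectionListener()) { }

            // Dependency-receiving constructor: the test (and power-user) seam.
            public Server(IConnectionListener listener)
            {
                if (listener == null) throw new ArgumentNullException("listener");
                _listener = listener;
            }
        }

    Chaining one constructor to the other at least makes the implicit connection between the two a compile-checked, single-line fact rather than two bodies that can drift apart.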

    Read the article

  • The Mac Tax

    - by Robert May
    One of our users was having difficulties with their Mac and some web software. I decided to peruse the landscape and see how much of a premium people were paying for their Macs. I priced out a Dell and a Mac from their websites: an Apple MacBook Pro and a Dell XPS 17. I tried to get them as close to the same configuration, from a hardware standpoint, as I could. I found several important differences in the hardware:

    - The Mac doesn't have a Blu-ray player, but the Dell does.
    - The Mac has a slightly slower processor.
    - The Mac claims to have a better battery, but doesn't list the specifics, so there's no way to tell.
    - The Mac doesn't list the video card stats, so there's no way to tell how comparable they are, but they're probably close.
    - The Mac doesn't come with any additional software. No iWork, iPhoto, etc. They were left at their default of None, so arguably the Dell is more functional out of the box.

    Other than changing the hardware specs to be close, all other configuration options were left at their defaults. So riddle me this, Batman: why do people buy Macs? I have several dev buddies that own them, but I can't justify the cost. First, most of them load Boot Camp and/or Parallels at extra cost to run Windows 7 and Windows apps. The hardware isn't as good. The price is almost twice as expensive. How do you justify the premium price?

    Technorati Tags: General

    Read the article

  • Should you always pass the bare minimum data needed into a function

    - by Anders Holmström
    Let's say I have a function IsAdmin that checks whether a user is an admin. Let's also say that the admin check is done by matching user id, name, and password against some sort of rule (not important). In my head there are then two possible function signatures for this:

        public bool IsAdmin(User user);
        public bool IsAdmin(int id, string name, string password);

    I most often go for the second type of signature, thinking that:

    - The function signature gives the reader a lot more info
    - The logic contained inside the function doesn't have to know about the User class
    - It usually results in slightly less code inside the function

    However, I sometimes question this approach, and also realize that at some point it would become unwieldy. If, for example, a function were to map ten different object fields into a resulting bool, I would obviously send in the entire object. But apart from a stark example like that, I can't see a reason to pass in the actual object. I would appreciate any arguments for either style, as well as any general observations you might offer. I program in both object-oriented and functional styles, so the question should be seen as regarding any and all idioms.
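
    A sketch of how the two could coexist (User fields invented for illustration): the object overload simply forwards to the primitive one, so the rule logic stays ignorant of the User class while callers still get the convenient signature.

        public class User
        {
            public int Id;
            public string Name;
            public string Password;
        }

        public static class AdminCheck
        {
            // Convenience overload: unwraps the object and forwards.
            public static bool IsAdmin(User user)
            {
                return IsAdmin(user.Id, user.Name, user.Password);
            }

            // The actual rule logic, which never sees the User class.
            public static bool IsAdmin(int id, string name, string password)
            {
                // match id/name/password against some rule here
                return false;
            }
        }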

    Read the article

  • Can higher-order functions in FP be interpreted as some kind of dependency injection?

    - by Giorgio
    According to this article, in object-oriented programming/design, dependency injection involves a dependent consumer, a declaration of a component's dependencies defined as interface contracts, and an injector that creates instances of classes that implement a given dependency interface on request.

    Let us now consider a higher-order function in a functional programming language, e.g. the Haskell function from Data.List:

        filter :: (a -> Bool) -> [a] -> [a]

    This function transforms a list into another list and, in order to perform its job, it uses (consumes) an external predicate function that must be provided by its caller. E.g. the expression

        filter (\x -> (mod x 2) == 0) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    selects all even numbers from the input list. But isn't this construction very similar to the pattern illustrated above, where:

    - the filter function is the dependent consumer,
    - the signature (a -> Bool) of the function argument is the interface contract,
    - the expression that uses the higher-order function is the injector that, in this particular case, injects the implementation (\x -> (mod x 2) == 0) of the contract?

    More generally, can one relate higher-order functions and their usage pattern in functional programming to the dependency injection pattern in object-oriented languages? Or, in the other direction, can dependency injection be compared to using some kind of higher-order function?
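
    The parallel is easy to render in a C# sketch (all names invented): the delegate type plays the role of the interface contract, and the call site acts as the injector.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class HigherOrderAsInjection
        {
            // The "dependent consumer": it declares its dependency as a
            // function signature (int -> bool) rather than an interface type.
            static IEnumerable<int> Filter(IEnumerable<int> xs, Func<int, bool> keep)
            {
                return xs.Where(keep);
            }

            static void Main()
            {
                // The caller is the "injector", supplying the implementation
                // of the (int -> bool) contract at the point of use.
                var evens = Filter(Enumerable.Range(1, 10), x => x % 2 == 0);
                Console.WriteLine(string.Join(", ", evens));   // 2, 4, 6, 8, 10
            }
        }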

    Read the article

  • Big level objects collision system for 2d game

    - by Aristarhys
    I read through many approaches today and picked up some general knowledge, so here are the steps of my thinking, in (horrible paint.net) pictures. We need a grid system, so we only check things that are near each other, performing a simple check to rule candidates out before a deep check -- like per-pixel collision -- at the very last stage.

    Step 1 - Let p1, p2 be some sprites. Let's first just do a circle collision check. Because of the large distance between p1 and p2 this fails, so of course we don't need to test more deeply. But if we have not 2 but 20 objects, why circle-test something so far outside of our view at all?

    Step 2 - Add a basic column system. Now we don't bother with p2 if it's in a column far from p1's column, so we don't even do the circle test. But p3 is in the same column, so we do the circle test, which of course fails.

    Step 3 - Let's improve the column system into a grid system with a grid cell size just like the p1, p2, p3 collision boxes, so we also rule out things far above or below p1. And this is all great, until the BIG objects arrive -- platforms of some kind. They are much bigger than a grid cell. The circle test for them will succeed, but a deep check of the whole big object will fail.

    And that's the part I can't get. How do I store the grid position of a big object? As 4 grid coords, one for each of the big object's corners? And if one of them is close to p1, do a circle check for the centre of the big object, then a deep one if that succeeds? Am I doing it wrong?

    My possible solution:
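
    One common answer, sketched here with invented types (and not necessarily the author's own solution above): don't give a big object a single grid coordinate; register it in every cell its bounding box overlaps, and de-duplicate matches when querying.

        using System;
        using System.Collections.Generic;

        public struct AABB { public float X, Y, W, H; }

        public class Grid
        {
            private readonly float _cell;
            private readonly Dictionary<long, List<object>> _cells =
                new Dictionary<long, List<object>>();

            public Grid(float cellSize) { _cell = cellSize; }

            // Pack a cell coordinate pair into one dictionary key.
            private static long Key(int cx, int cy) { return ((long)cx << 32) ^ (uint)cy; }

            public void Insert(object obj, AABB box)
            {
                int x0 = (int)Math.Floor(box.X / _cell);
                int x1 = (int)Math.Floor((box.X + box.W) / _cell);
                int y0 = (int)Math.Floor(box.Y / _cell);
                int y1 = (int)Math.Floor((box.Y + box.H) / _cell);

                for (int cx = x0; cx <= x1; cx++)
                    for (int cy = y0; cy <= y1; cy++)
                    {
                        long k = Key(cx, cy);
                        List<object> bucket;
                        if (!_cells.TryGetValue(k, out bucket))
                            _cells[k] = bucket = new List<object>();
                        bucket.Add(obj);   // a big object simply lands in many buckets
                    }
            }
        }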

    Read the article

  • Every file on cPanel got deleted (then restored hours later), and I have no idea why

    - by mcranston18
    I apologize in advance if I don't provide the proper detail; I am new to server stuff and am looking for general advice about this issue. I was helping out a client doing web design last month. They have about a dozen static sites on one server. The sites are all built on Joomla, except one which I built on Wordpress. Everything was working fine last month when we did the redesign, but all of a sudden this morning every single file on their server got deleted: every web page, file, and all e-mail addresses.

    I phoned the hosting company (alliancewww.com) to ask why every single file suddenly disappeared from the server. They said, "because someone must have deleted it." I said, "well, no one did." (Which I'm pretty damn sure no one did.) They said, "you can pay us to look into the problem." I authorized $150 for them to look into it. About an hour later, everything was magically reinstated. The host said they had a backup of everything and had simply restored it.

    What I'm wondering:

    - Does anyone have recommendations of logs I can go through to investigate how the files got deleted in the first place? I've checked their cPanel logs but found nothing.
    - Is it likely that this is a mess-up on the host's part?

    Read the article

  • OAuth2 vs Public API

    - by Adam Tannon
    My understanding of OAuth (2.0) is that it's a software stack and protocol that allows two or more web apps to share information about a single end user. User A is a member of Site B and Site C; Site B wants to fetch some data from Site C about User A, and this is where OAuth steps in. So first off, if this assessment is incorrect, please begin by clarifying this for me and correcting me!

    Assuming I'm on the right track, then I guess I'm not seeing the need for OAuth to begin with (!). I'm sure I'm just not seeing the "forest through the trees" here, but the way I see it, couldn't Site C just expose a public API that Site B could use to fetch the same data (sans OAuth)? If Site C required user credentials to access the data, couldn't this public API just use HTTPS for secure transport and require username/password as part of each API call?

    Again, I'm sure I'm missing something, but I'm just not understanding why I would need OAuth when a secure, public API written and exposed by Site C seems more than capable of delivering what Site B needs regarding User A. In general, I'm looking for a set of guidelines to go by when deciding between using OAuth for my web apps and just writing my own web service (exposing a public API). Thanks in advance!

    Read the article

  • Offshoring: does it ever work?

    - by DanSingerman
    I know there has been a fair amount of discussion on here about outsourcing/offshoring, and the general opinion seems to be that at best it is difficult, and at worst it fails. I have direct experience of offshoring myself; a previous company where I was a dev manager wanted to send some development offshore, and we ran a pilot scheme to see how well it would work. Of course it was a complete failure, although it is not completely clear to me whether this was down to the offshore devs being less talented, the process, or other factors (no doubt it was really a combination).

    I can see how, as a business, offshoring looks attractive (a much lower day rate), but as far as I can see, the only way it could possibly work is if you do exceptionally detailed design up front, with incredibly detailed specifications; and by the time you have invested in producing that, you have probably spent nearly as much as if you had written the actual code locally (which I think is an instance of No Silver Bullet).

    So, what I want to know is: does anyone here have any experience of offshoring actually working, ever? Especially if there are any success stories of it working in a semi-agile way? I know there are developers here from all over the world; has anyone worked on an offshore project they consider successful?

    Read the article

  • Overloading methods that do logically different things, does this break any major principles?

    - by siva.k
    This is something that's been bugging me for a bit now. In some cases you see code that is a series of overloads, but when you look at the actual implementation you realize they do logically different things. However, writing them as overloads allows the caller to ignore this and get the same end result. But would it be sounder to name the methods more explicitly than to write them as overloads?

        public void LoadWords(string filePath)
        {
            var lines = File.ReadAllLines(filePath).ToList();
            LoadWords(lines);
        }

        public void LoadWords(IEnumerable<string> words)
        {
            // loads words into a List<string> based on some filters
        }

    Would these methods better serve future developers by being named LoadWordsFromFile() and LoadWordsFromEnumerable()? It seems unnecessary to me, but if that is better, what programming principle would apply here? On the flip side, it would mean you didn't need to read the signatures to see exactly how you can load the words, which, as Uncle Bob says, would be a double take. But in general, is this type of overloading to be avoided?

    Read the article

  • Hash Algorithm Randomness Visualization

    - by clstroud
    I'm curious if anyone here has any idea how the images were generated as shown in this response: "Which hashing algorithm is best for uniqueness and speed?" Ian posted a very well-received response, but I can't seem to understand how he went about making the images. I hate to make a new question dedicated to this, but I can't find any means to ask him more directly. On the other hand, perhaps someone has an alternative perspective.

    The best I can personally come up with would be something almost like a bar graph, which would illustrate how evenly the buckets of the hash table are being filled. I have a working Cocoa program that does this, but it can't generate anything like what he showed there. So the question is twofold, I suppose:

    A) How does one truly interpret the data he shows? Is it more than "less whitespace = better"?

    B) How does one generate such an image based on some set of inputs, a hash, and an index?

    Perhaps I'm misunderstanding entirely, but I really would like to know more about this particular visualization technique. Or maybe I'm mis-applying this to hash tables rather than just hashes in general, but in that case I don't know how it would be "bounded" for the image.
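
    My best guess at question B, as a sketch (this is an assumption about the technique, not Ian's actual code): map each hash value onto a fixed 2D grid of buckets and shade each cell by how many keys landed there, so uneven shading exposes uneven hashing. In C#, using FNV-1a as the hash under test:

        using System;
        using System.Text;

        class HashMap2D
        {
            const int W = 64, H = 16;

            static void Main()
            {
                // Hash 100,000 sequential keys into W*H buckets.
                var counts = new int[W * H];
                for (int i = 0; i < 100000; i++)
                {
                    uint h = Fnv1a(BitConverter.GetBytes(i));
                    counts[(int)(h % (W * H))]++;
                }

                // Render bucket density as ASCII shades:
                // whitespace = empty buckets, dense marks = hot spots.
                var shades = " .:-=+*#%@";
                var sb = new StringBuilder();
                for (int y = 0; y < H; y++)
                {
                    for (int x = 0; x < W; x++)
                    {
                        int c = counts[y * W + x];
                        sb.Append(shades[Math.Min(shades.Length - 1, c * shades.Length / 200)]);
                    }
                    sb.AppendLine();
                }
                Console.Write(sb);
            }

            // 32-bit FNV-1a.
            static uint Fnv1a(byte[] data)
            {
                uint h = 2166136261u;
                foreach (byte b in data) { h = (h ^ b) * 16777619u; }
                return h;
            }
        }

    Swapping in a real bitmap instead of ASCII is then just a matter of writing one shaded pixel per bucket.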

    Read the article

  • ASP.NET MVC 3 (C#) Software Architecture

    - by ryanzec
    I am starting on a relatively large and ambitious ASP.NET MVC 3 project and am thinking about the best way to organize my code. The project is basically going to be a general management system capable of supporting any type of management system, whether it be a blogging system, CMS, reservation system, wiki, forum, project management system, etc., each of them being a separate 'module'. You can read more about it in my blog post here: http://www.ryanzec.com/index.php/blog/details/8 (forgive me, the style of the site kinda sucks).

    For those who don't want to read the long blog post, the basic idea is that the core system itself is nothing more than a users system with an admin interface to manage it. Then you just add on modules as you need them. The module I will be creating first is a simple blog, to test things out before I move on to the big module, which is a project management system.

    Now I am trying to think of the best way to structure this so that it is easy for users to add in their own modules, but easy for me to update the core system without worrying about users modifying the core code. I think the ideal way would be to have a number of core projects that the user is specifically told not to modify, otherwise the system may become unstable and future updates may not work. When users want to add in their own modules, they would just add in a new project (or multiple projects).

    The thing is, I am not sure it is even possible to use multiple projects, each with its own controllers, Razor view templates, CSS, JavaScript, etc., in one web application. Ideally each module would have some of its own Razor view templates, CSS, JavaScript, and image files, and would also need access to some of the core Razor view templates, CSS, JavaScript, and image files, which would be in a separate project. Is it possible to have one web application run off of controllers, Razor view templates, CSS, JavaScript, and image files that are stored in multiple projects? Is there a better way to structure this to allow users to easily add in modules without having to modify the core code?
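
    For reference, ASP.NET MVC does discover controllers in referenced assemblies, and MVC 3's Areas give a module its own routes. A sketch of what a module's registration might look like (module name invented; the view/CSS packaging story is not covered here):

        using System.Web.Mvc;

        // A "Blog" module compiled as its own class-library project.
        // MVC finds its controllers once the host references the assembly,
        // and this registration runs when the host calls
        // AreaRegistration.RegisterAllAreas() in Global.asax.
        public class BlogAreaRegistration : AreaRegistration
        {
            public override string AreaName
            {
                get { return "Blog"; }
            }

            public override void RegisterArea(AreaRegistrationContext context)
            {
                context.MapRoute(
                    "Blog_default",
                    "Blog/{controller}/{action}/{id}",
                    new { action = "Index", id = UrlParameter.Optional });
            }
        }

    The harder part is the assets: Razor views, CSS, and JavaScript living in a class library typically have to be copied, embedded, or precompiled into the host application, which is worth verifying before committing to this structure.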

    Read the article

  • Solaris at LISA 2011

    - by dminer
    As is our custom, the Solaris team will be out in force at the USENIX LISA conference; this year it's in Boston, so it's sort of a home game for me for a change. The big event we'll have is Tuesday, December 6, the Oracle Solaris 11 Summit Day. We'll be covering deployment, ZFS, networking, virtualization, security, clustering, and how Oracle apps run best on Solaris 11. We've done this the past couple of years and it's always a very full day.

    On Wednesday, December 7, we've got a couple of BOF sessions scheduled back-to-back. At 7:30 we'll have the ever-popular engineering panel, with all of us who are speaking at Tuesday's summit day there for a free-flowing discussion of all things Solaris. Following that, Bart and I are hosting a second BOF at 9:30 to talk more about deployment for clouds and traditional data centers.

    Also, on Wednesday and Thursday we'll have a booth at the exhibition, where there'll be demos and just a general chance to talk with various Solaris staff from engineering and product management.

    The conference program looks great and I look forward to seeing you there!

    Read the article

  • State changes in entities or components

    - by GriffinHeart
    I'm having some trouble figuring out how to deal with state management in my entities. I don't have trouble with game state management -- like pause and menus -- since those are not handled as an entity component system; just with state in entities/components.

    Drawing from Orcs Must Die as an example, I have MainCharacter and Trap entities which only have their components, like PositionComponent, RenderComponent, PhysicsComponent. On each update, the entity calls update on its components. I also have a generic EventManager with listeners for different event types.

    Now I need to be able to place the traps: first select the trap and trap position, then place the trap. While being placed, a trap should appear in front of the MainCharacter, rendered in a different way and following the character around. Once placed, it should just respond to collisions and be rendered the normal way.

    How is this usually handled in component-based systems? (This example is specific, but it may help figure out the general way of dealing with entity states.)
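
    One common shape for this, sketched with invented types (a sketch, not the one canonical answer): make the "being placed" state explicit on the entity and let the components consult it, so the ghost preview and the live trap are the same entity in two states.

        public enum TrapState { Placing, Placed }

        public class Player { public float X, Y; }

        public class PositionComponent
        {
            public float X, Y;
            public void FollowInFrontOf(Player p) { X = p.X + 1.0f; Y = p.Y; }
        }

        public class Trap
        {
            public TrapState State = TrapState.Placing;
            public PositionComponent Position = new PositionComponent();

            public void Update(Player player)
            {
                if (State == TrapState.Placing)
                    Position.FollowInFrontOf(player);  // ghost preview follows the player
                // render/physics components check State too: the renderer draws
                // the translucent variant and collisions stay inert while Placing
            }

            public void Place()
            {
                State = TrapState.Placed;  // from here on: normal rendering + collisions
            }
        }

    A heavier-weight alternative is to swap components on placement (remove a PlacementComponent, add a PhysicsComponent), which keeps each component state-free at the cost of more bookkeeping.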

    Read the article

  • What did Rich Hickey mean when he said, "All that specificity [of interfaces/classes/types] kills your reuse!"

    - by GlenPeterson
    In Rich Hickey's thought-provoking GOTO conference keynote "The Value of Values", at 29 minutes he talks about the overhead of a language like Java and makes a statement like, "All those interfaces kill your reuse." What does he mean? Is that true?

    In my search for answers, I have run across:

    - The Principle of Least Knowledge, AKA the Law of Demeter, which encourages airtight API interfaces. Wikipedia also lists some disadvantages.
    - Kevlin Henney's Imperial Clothing Crisis, which argues that use, not reuse, is the appropriate goal.
    - Jack Diederich's "Stop Writing Classes" talk, which argues against over-engineering in general.

    Clearly, anything written badly enough will be useless. But how would the interface of a well-written API prevent that code from being used? There are examples throughout history of something made for one purpose being used more for something else. But in the software world, if you use something for a purpose it wasn't intended for, it usually breaks. I'm looking for one good example of a good interface preventing a legitimate but unintended use of some code. Does that exist? I can't picture it.

    Read the article

  • What level of detail to use in an interface members descriptions?

    - by famousgarkin
    I am extracting interfaces from some classes in .NET, and I am not completely sure what level of detail to use in the descriptions of some of the interface members (properties, methods). An example:

        interface ISomeInterface
        {
            /// <summary>
            /// Checks if the object is checked out.
            /// </summary>
            /// <returns>
            /// Returns true if the object is checked out, or if the object locking is not enabled,
            /// otherwise returns false.
            /// </returns>
            bool IsObjectCheckedOut();
        }

        class SomeImplementation : ISomeInterface
        {
            public bool IsObjectCheckedOut()
            {
                // An implementation of the method that returns true if the object is checked out,
                // or if the object locking is not enabled
            }
        }

    The part in question is the <returns>...</returns> section of the IsObjectCheckedOut description in the interface. Is it OK to include such detail about the return value in the interface itself, given that code working with the interface should know exactly what the method will do? All the current implementations of the method do just that. But is it OK to limit possible other/future implementations by describing it this way? Or should this not be included in the interface description, as there is no way to actually ensure that other/future implementations will do exactly this? Is it better to be as general as possible regarding the interface in such circumstances? I am currently inclined toward the latter option.

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats.

    My questions:

    - What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    - How should bitmaps be stored for high-performance algorithms, such that read/write times are fastest? (Fixed array? With/without padding? 24-bpp or 32-bpp?)
    - How should bitmaps be stored for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? Or a faster [de]compression algorithm?)

    Some possible methods:

    - Use a fixed, packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access, with all pixels allocated in one continuous memory chunk (could be 1-10 MB)
    - Use a form of "sparse" data storage, so each line of the bitmap is allocated separately, reusing more memory and requiring smaller contiguous memory segments
    - Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 secs
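
    A sketch of the first option, written in C# for brevity (the layout carries over directly to a C++ std::vector<uint32_t>): one contiguous 32-bpp buffer with row-major indexing.

        // One contiguous 32-bpp buffer with row-major indexing -- the common
        // layout for fast pixel loops.
        public class RawBitmap
        {
            public readonly int Width, Height;
            public readonly int[] Pixels;          // 0xAARRGGBB, one int per pixel

            public RawBitmap(int width, int height)
            {
                Width = width;
                Height = height;
                Pixels = new int[width * height];  // single allocation, cache-friendly
            }

            public int GetPixel(int x, int y)
            {
                return Pixels[y * Width + x];
            }

            public void SetPixel(int x, int y, int argb)
            {
                Pixels[y * Width + x] = argb;
            }
        }

    32-bpp costs a third more memory than 24-bpp but keeps every pixel word-aligned, which is usually the right trade for fast read/write loops.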

    Read the article

  • Session Evaluations

    - by BuckWoody
    I do a lot of public speaking. I write, teach, present and communicate at many levels. I love to do those things. And I love to get better at them. And one of the ways you get better at something is to get feedback on how you did.

    That being said, I have to confess that I really despise the “evaluations” I get at most venues. From college to technical events to other locations, at Microsoft and points in between, I find these things to be just shy of damaging, and most certainly useless. And it’s not always your fault.

    Ouch. That seems harsh. But let me ask you one question – and be as honest as you can with the answer – think about it first: “What is the point of a session evaluation?”

    I’m not saying there isn’t one. In fact, I think there’s a really important reason for them. In my mind, it’s really this: to make the speaker / next session better. Now, if you look at that, you can see right away that most session evals don’t accomplish this goal – not even a little. No, the way that they are worded and the way you (and I) fill them out, it’s more like the implied goal is this: tell us how you liked this speaker / session. The current ones are for you, not for the speaker or the next person. It’s a popularity contest.

    Don’t get me wrong. I want you to have a good time. I want you to learn. I want (desperately, oh, please oh please) for you to like me. But in fact, that’s probably not why you went to the session / took the class / read that post. No, you want to learn, and to learn for a particular reason. Remember, I’m talking about college classes, sessions and other class environments here, not a general public event.

    Most – OK, all – session evaluations make you answer the second goal, not the first. Let’s see how. First, they don’t ask you why you’re there. They don’t ask you if you’re even qualified to evaluate the session or speaker. They don’t ask you how to make it better or keep it great. They use odd numeric scales that are meaningless. For instance, can someone really tell me the difference between a 100-level session and a 200-level one? Between a 400-level and a 500? Is it “internals” (whatever that means) or detail, or length or code, or what? I once heard a great description: a 100-level session makes me say, “wow - I’m smart.” A 500-level session makes me say “wow – that presenter is smart.”

    And just what is the difference between a 6 and a 7 answer on this question:

    How well did the speaker know the material? 1  2  3  4  5  6  7  8  9  10

    Oh. My. Gosh. How does that make the next session better, or the speaker? And what criteria did you use to answer? And is a “10” better than a “1”? (Not always clear, and various cultures answer this differently.)

    When it’s all said and done, a speaker basically finds out one thing from the current session evals: “They liked me. They really really liked me.” Or, “Wow. I think I may need to schedule some counseling for the depression I’m about to go into.” You may not think that’s what the speaker hears, but trust me, they do. Those are the only two reactions to the current feedback sheets they get. Either they keep doing what they are doing, or they get their feelings hurt. They just can’t use the information provided to do better. Sorry, but there it is.

    Keep in mind I do want your feedback. I want to get better. I want you to get your money and time’s worth, probably as much as any speaker alive. But I want those evaluations to be accurate, specific and actionable. I want to know if you had a good time, sure, but I also want to know if I did the right things, and if not, whether I can do something different or better.

    And so, for your consideration, here is the evaluation form I would LOVE for you to use. Feel free to copy it and mail it to me any time. I’m going to put some questions here, and then I’ll even include why they are there. Notice that the form asks you a subjective question right away, and then makes you explain why. That’s work on your part. Notice also that it separates the room and the coffee and the lights and the LiveMeeting from the presenter. So many presenters are faced with circumstances beyond their control, and yet are rated high or low personally on those things. This form helps tease those apart.

    It’s not numeric. Numbers are easier for the scoring committees but are useless for you and me. So I don’t have any numbers. We’re actually going to have to read these things, not put them in a machine. Hey, if you put in the work to write stuff down, the least we could do is take the time to read it.

    It’s not anonymous. If you’ve got something to say, say it, and own up to it. People are not “more honest” when they are anonymous; they are less honest. So put your name on it. In fact – this is radical – I posit that these evaluations should be publicly available. Forever. Just like replies to a blog post. Hey, if I’m an organizer, I would LOVE to have access to specific, actionable information on the attendees and the speakers. So if you want mine to be public, go for it. I’ll take the good and the bad. Enjoy.

    -------------------------------------------------------------------------------------------------------------------------------------------

    Session Evaluation – Date, Time, Location, Topic

    Thanks for giving us your time today. We know that’s valuable, and we hope you learned something you can use from the session. If you can answer these questions as completely as you can, it will help the next person who attends a session here.

    Your Name:

    What you do for a living: (We need your background to evaluate your evaluation)

    How long you have been doing that: (Again, we need your background to evaluate your evaluation)

    Paste Session Description Here: (This is what I said I would talk about)

    Did you like the session?     No     Meh     Yes
    (General subjective question – overall “feeling”. You’ll tell us why in a minute.)

    Tell us about the venue. Temperature, lights, coffee, or the online sound, performance – anything other than the speaker and the material. (Helps the logistics be better or as good for the next person)

    1. What did you expect to learn in this session? (How did you interpret that abstract – did you have expectations that I should work towards for the next person?)

    2. Did you learn what you expected to learn? Why? Be very specific. (This is the most important question there is. It tells us how to make the session better for someone like you.)

    3. If you were giving this presentation, would you have done anything differently? What? (Helps us to gauge you, the listener, and might give us a great idea on how to do something better. Thanks!)

    4. What will you do with the information you got? (Every presenter wants you to learn, and learn something useful. This will help us do that as well or better)

    Read the article

  • What went wrong with my curl install?

    - by Danjah
    I'm fresher than the prince to Linux. I've been following the instructions here: http://chrisfulstow.com/running-node-js-on-windows-with-virtualbox-and-ubuntu (the link tells what I am generally trying to do). I'm all up and running in VirtualBox and am at the curl install part (I may have done the curl part a week ago, I forget). So I ran this command anyway:

        danjah@danjah-VirtualBox:~$ sudo apt-get install curl

    Result:

        [sudo] password for danjah:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        curl is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    Then:

        $ curl http://npmjs.org/install.sh | sudo npm_install=rc sh

    Result:

        fetching: {
        gzip: stdin: unexpected end of file
        /bin/tar: Child returned status 1
        /bin/tar: Error is not recoverable: exiting now
        It failed

    Should I be concerned? How can I test curl? How can I avoid these situations? Perhaps there's a generic way of checking to see if I've already installed packages/etc.? Case-specific answers and general advice most appreciated. Cheers, d

    Read the article

  • What is a Coding Dojo?

    - by huwyss
    Recently I found out that there is a thing called a "coding dojo". The point behind it is that software developers want a space to learn new stuff -- processes, methods, coding details, languages, and whatnot -- in an environment without stress. Just for fun. No competition. No results required. No deadlines.

    Some days ago I joined the Zurich coding dojo. We were three programmers with different backgrounds. We gave ourselves the task of developing a method that takes an input value and returns its prime factors. We did pair programming and every few minutes we switched positions. We used test-driven development. The chosen programming language was Ruby.

    I hadn't really done TDD before. It was pretty interesting to see the algorithm develop by following the test cases. We started with the first test, input=1, then developed the most simple productive program that passed this very first test. Then we added the next test, input=2, and implemented the productive code. We kept adding tests and made sure all tests passed until we had the general solution. When we improved the performance of our code, we saw the value of the tests we wrote before: of course, our first performance improvement broke several tests.

    It was a very interesting experience to see how other developers think and how they work. I will participate in the dojo again and can warmly recommend it to anyone. There are coding dojos all over the world. Have fun!
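
    For the curious, the general solution this kata converges on looks roughly like the following (sketched in C# here; the dojo itself worked in Ruby):

        using System;
        using System.Collections.Generic;

        static class PrimeFactors
        {
            public static List<int> Of(int n)
            {
                var factors = new List<int>();
                for (int divisor = 2; n > 1; divisor++)
                    while (n % divisor == 0)       // pull out each factor repeatedly
                    {
                        factors.Add(divisor);
                        n /= divisor;
                    }
                return factors;                    // empty for n = 1, the first test
            }

            static void Main()
            {
                Console.WriteLine(string.Join(", ", Of(60)));  // 2, 2, 3, 5
            }
        }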

    Read the article
