Search Results

Search found 3219 results on 129 pages for 'dallas fort worth'.

Page 65/129 | < Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >

  • Is this kind of Design by Contract useless?

    - by Charlie Pigarelli
    I've just started a university informatics programme and I'm attending a programming course on C(++). The professor prefers to teach very few things (in 3 months we have only reached the topic of functions) and connects every topic to a programming design style that is somewhat similar to Design by Contract. Basically, what he asks us to do is to write every exercise with comments stating pre-conditions, post-conditions and invariants that should prove the correctness of each program we write. But this doesn't make any sense to me. I mean, OK: maybe writing down your thoughts prevents you from making some mistakes, but if this is all abstract, then if your intuition about the program is wrong you'll write the program wrong, and then you'll probably also write the pre- and post-conditions wrong, convincing yourself of its correctness. Most of the time, both I and other students have written programs that seemed OK and that had correct pre- and post-conditions too, but at the moment of testing they were just completely wrong. I had some programming experience before this course and had written many lines of code, and I found myself comfortable just writing a program and unit testing it. That takes less time and is less "abstract" than thinking about what every single piece of your program should do in every case (which is rather like mentally testing it). Finally, all these pre- and post-conditions take me about 80% of the total time of an exercise; it's harder to get them down correctly than to write the program itself. Since we are probably the only course at the only university in the entire world that does things this way, could someone please tell me how I should handle this? Am I right in thinking that it isn't worth anything? Should I change university? (About twice as many people attend that course as elsewhere, and it seems that usually very few pass the exam in the first year.) Should I convince myself that his method is right?
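
    For readers unfamiliar with the style being described, here is a minimal sketch (my own hypothetical example, not from the course) of how a pre-condition and a post-condition can be made executable in C++ with assert, instead of living only in comments:

        #include <cassert>

        // Hypothetical example: integer division with an explicit contract.
        // Pre-condition:  divisor != 0
        // Post-condition: quotient * divisor + remainder == dividend
        int safeDivide(int dividend, int divisor) {
            assert(divisor != 0);  // pre-condition: caller must not pass 0
            int quotient = dividend / divisor;
            // post-condition: the C++ integer-division identity must hold
            assert(quotient * divisor + dividend % divisor == dividend);
            return quotient;
        }

    Written this way, a wrong intuition about the program tends to fail loudly at test time rather than silently convincing you of its correctness.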

    Read the article

  • Which open source PHP project has the 'perfect' OOP design I can learn from?

    - by aditya menon
    I am a newbie to OOP, and I learn best by example. You could say this question is similar to "Which Scala open source projects should I study to learn best coding practices", but for PHP. I have heard tell that Symfony has the best 'architecture' (I will not pretend I know exactly what that means), as does Doctrine ORM. Is it worth spending many months reading the source code of these projects, trying to deduce the patterns used and learning new tricks? I have seen an equal number of web pages dissing and praising Zend's codebase (I will provide links if deemed necessary). Do you know of any other project that would make a veteran OOP developer shed tears of joy? Let me add that practicality and scope of use are not a concern at all here. I just want to:
    1. Pick a project with a codebase deemed awesome by devs way better and greater than me.
    2. Write code that achieves what the project does.
    3. Compare results and try to learn what I don't know.
    Basically, a codebase of academic interest. Any recommendations, please?

    Read the article

  • Column order can matter

    - by Dave Ballantyne
    Ordinarily, the column order of a SQL statement does not matter:

        select a, b, c from table

    will produce the same execution plan as

        select c, b, a from table

    However, sometimes it can make a difference. Consider this statement (maxdop is used to make a simpler plan and has no impact on the main point):

        select SalesOrderID,
               CustomerID,
               OrderDate,
               ROW_NUMBER() over (partition by CustomerID order by OrderDate asc)  as RownAsc,
               ROW_NUMBER() over (partition by CustomerID order by OrderDate desc) as RownDesc
        from   Sales.SalesOrderHeader
        order by CustomerID, OrderDate
        option (maxdop 1)

    If you look at the execution plan, you will see three sorts: one for RownAsc, one for RownDesc, and a final one for the order by clause. Sorting is an expensive operation and one that should be avoided if possible. So, with this in mind, it may come as some surprise that the optimizer does not re-order operations to group them together when the incoming data is in a similar (if not exactly the same) sorted sequence. A simple change to swap the RownAsc and RownDesc columns produces this statement:

        select SalesOrderID,
               CustomerID,
               OrderDate,
               ROW_NUMBER() over (partition by CustomerID order by OrderDate desc) as RownDesc,
               ROW_NUMBER() over (partition by CustomerID order by OrderDate asc)  as RownAsc
        from   Sales.SalesOrderHeader
        order by CustomerID, OrderDate
        option (maxdop 1)

    which results in a different, more efficient query plan with one less sort. The optimizer, although unable to automatically re-order operations, HAS taken advantage of the data ordering when it is already as required. This is well worth exploiting if you have different sorting requirements in one statement: try grouping the window functions that require the same order together and save yourself a few extra sorts.

    Read the article

  • Your personal backlog

    - by johndoucette
    Whenever I start a new project or come in during a hectic time to help salvage a deliverable, there is always a backlog. Generating the backlog can be a daunting exercise, but it is worth the effort. Once I have a backlog, I feel in control and the chaos begins to quell. In your everyday life, you too should keep a backlog. Here is how I do it:
    1. Always carry a notebook.
    2. Start each day by marking a new page with today's date.
    3. Flip to yesterday's notes and copy every task with an empty checkbox next to it to the new empty page (today).
    4. As the day progresses and you go to meetings, do your work, or get interrupted to do something, jot it down on today's page and put an empty checkbox next to it.
    If you get it done during the day, awesome; mark it complete. Keep carrying and writing every task to each new day until it is complete. Maybe one day you will have an empty backlog and your sprint will be complete!

    Read the article

  • Should I give the answer to a failed interview coding exercise?

    - by GlenH7
    We had a senior-level interview candidate fail a nuance of the FizzBuzz question*. I mean he really, utterly, completely failed the question: not even close. I even coached him toward thinking about using a loop and noted that 3 and 5 were really worth considering as special cases. He blew it. Just for QA purposes, I gave the exact same question to three teammates, gave them 5 minutes, and then came back to collect their pseudo-code. All of them nailed it, and none had seen the question before. Two asked what the trick was... On a different logic exercise, the candidate showed some understanding of the features available within the language he chose to use (C#), so it's not as if he had never written a line of code. But his logic still stunk. My question is whether or not I should have given him the answer to the logic questions. He knew he blew them, and acknowledged it later in the interview. On the other hand, he never asked for the answer or what I was expecting to see. I know coding exercises can be used to set candidates up for failure (again, see the second link above), and I really tried to help him home in on answering the core of the question. But this was a senior-level candidate, and FizzBuzz is, frankly, ridiculously easy even accounting for interview jitters. I felt I should have shown him a way of solving the problem so that he could at least learn from the experience. But again, he didn't ask. What's the right way to handle that situation? *Okay, that's not the link to the actual FizzBuzz question, but it is a good P.SE discussion around FizzBuzz and links to its various aspects.
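
    For context, a loop-based answer of the kind the interviewer was coaching toward is only a few lines; this is a minimal C++ sketch of the classic exercise (not the candidate's code):

        #include <iostream>

        int main() {
            for (int i = 1; i <= 100; ++i) {
                if (i % 15 == 0)     std::cout << "FizzBuzz\n"; // divisible by 3 and 5
                else if (i % 3 == 0) std::cout << "Fizz\n";
                else if (i % 5 == 0) std::cout << "Buzz\n";
                else                 std::cout << i << '\n';
            }
        }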

    Read the article

  • Struggling to get set up with JOGL2.0

    - by thecoshman
    I guess that Game Dev is a more sensible place for my problem than SO. I did have JOGL 1.1 set up and working, but I soon discovered that it did not support the latest OpenGL, so I started work on upgrading to JOGL 2.0; it has not gone too well. Firstly, is it worth me trying to get JOGL to work, or should I just move over to LWJGL? I am fairly comfortable with OpenGL (via C++), and from what I did get working with JOGL 1.1, I seem to be OK adapting to it. Assuming that I stick with JOGL, am I foolish for trying to use JOGL 2.0? From what I can gather, JOGL 2.0 is still in beta, but I am willing to go with it as I want to make use of the latest OpenGL I can. I have been using the Eclipse IDE and have set up a user library for JOGL (here is a screenshot of the configuration), and I have added this user library to my own Eclipse project. The system variable %JOGL_HOME% points to "C:\Users\edacosh\Downloads\JOGL2.0", so that should work fine. Now, the problem I am actually having: when I try to run my code, on the line

        GLProfile glp = GLProfile.getDefault();

    the code stops with the following message:

        Exception in thread "main" java.lang.NoClassDefFoundError: com/jogamp/common/jvm/JVMUtil
            at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1145)
            at DiCE.DiCE.<init>(DiCE.java:33)
            at App.<init>(App.java:17)
            at App.main(App.java:12)
        Caused by: java.lang.ClassNotFoundException: com.jogamp.common.jvm.JVMUtil
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            ... 4 more

    I have also set my project to ensure that it is using jre6 along with jdk6, as I was having some issues. I hope I have given you enough information to be able to help me. It probably doesn't help that I am rather new to Java, having developed in C++ for ages. Thanks

    Read the article

  • Tessellating to a curve?

    - by Avi
    I'm creating a game engine, and I'm trying to define a 3D model format I want to use; I haven't come across a format that quite does what I want. My game engine assumes a shader model 5+ environment (by the time I'm finished with it, that won't be a very unreasonable requirement). Because it assumes such a modern environment, I'm going to try to exploit tessellation. The most popular way, it seems, to procedurally increase geometry through tessellation is to tessellate to a height map. This works for a lot of things, but has limitations: height maps still use up VRAM and have only finite scalability. So I want to be able to use curves to define what a mesh should tessellate to. The thing is, I have no idea which definition of curves I should use, how I should store it, or how I should tessellate to it. Do I use NURBS curves? Bezier? Hermite? And once I figure that out, is there an algorithm to determine how the tessellation shader should produce and move vertices to match the curve as closely as possible? Are the infinite scalability and lower memory usage, compared to height maps, worth the added computational complexity? I'm sorry, I'm kind of ignorant on these matters; I just don't know where to start.
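
    As a concrete reference point, the per-vertex curve evaluation a tessellation evaluation shader would run is cheap; here is a minimal CPU-side sketch of evaluating a cubic Bezier via de Casteljau's algorithm (my own illustration, since the question leaves the choice between NURBS, Bezier and Hermite open):

        struct Vec3 { float x, y, z; };

        Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
            return { a.x + (b.x - a.x) * t,
                     a.y + (b.y - a.y) * t,
                     a.z + (b.z - a.z) * t };
        }

        // Evaluate a cubic Bezier at t in [0,1] by repeated linear
        // interpolation (de Casteljau): numerically stable and easy
        // to port to a tessellation evaluation shader.
        Vec3 cubicBezier(const Vec3& p0, const Vec3& p1,
                         const Vec3& p2, const Vec3& p3, float t) {
            Vec3 a = lerp(p0, p1, t), b = lerp(p1, p2, t), c = lerp(p2, p3, t);
            Vec3 d = lerp(a, b, t),   e = lerp(b, c, t);
            return lerp(d, e, t);
        }

    Storage-wise, that is just four control points per patch instead of a height texture, which is where the memory saving over height maps comes from.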

    Read the article

  • Acceptable sound quality: stereo needed for an Android game?

    - by Thomas Calc
    I have various simple short sound effects (damage sound, dying sound, thunderbolt, fanfare, breaking) for a game that is currently being developed for Android. I use OGG files: 96 kbps VBR, 44.1 kHz, 2 channels (that means stereo, right?). I read the other Stack Exchange topics about "acceptable sound quality", but they're too general and address too many things. My experience is that even at 80 kbps my effects sound OK, but I have tested them on a limited number of Android devices (including a Sony Ericsson Xperia Neo and an HTC Desire HD). My questions:
    1. For mobile phones and tablets generally, what parameters are recommended? Won't my 80 kbps sounds be bad on a newer device (such as a modern tablet)?
    2. I don't hear any difference between stereo and mono (2 channels vs. 1 channel, right?); is there any noticeable difference at all for mobile phones/tablets, in terms of the player experience? Is it worth it at all? I assume that stereo sounds take up much more memory (when they're decoded to PCM), despite the fact that the compressed OGG size is practically the same.
    Reacting to Roy T.'s great comment: I couldn't actually measure the PCM size (Android decodes OGG internally), but I thought that stereo would take more space than mono when uncompressed. After throwing out one of the WAV channels in Audacity and re-exporting it: the new WAV file size is half of what it was before, while the OGG file size is practically the same as before. The sound effects and game music were recorded by my friend, who is an experienced hobby musician/composer, but he knows little about computers and software, so he just gave me some high-quality WAV files generated via his hardware. These were stereo, but if I check them in Audacity, both channels appear to be exactly the same. Can I consider them the same (and move to mono), or might there be some differences imperceptible to the human ear?
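
    To put rough numbers on the decoded-PCM concern, here is a quick back-of-the-envelope calculation (my own sketch; the 16-bit samples and 2-second effect length are assumptions):

        #include <cstdio>

        int main() {
            const int sampleRate   = 44100; // Hz, as in the question
            const int bytesPerSamp = 2;     // assuming 16-bit PCM after decoding
            const double seconds   = 2.0;   // hypothetical short-effect length

            for (int channels = 1; channels <= 2; ++channels) {
                double bytes = double(sampleRate) * bytesPerSamp * channels * seconds;
                std::printf("%d channel(s): %.0f KiB of PCM\n", channels, bytes / 1024.0);
            }
        }

    This prints roughly 172 KiB for mono against 344 KiB for stereo: the decoded size scales linearly with channel count even when the compressed OGG sizes match.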

    Read the article

  • a young intellect asks: Python or Ruby for freelance?

    - by Sophia
    Hello, I'm Sophia. I have an interest in teaching myself either Python or Ruby. The primary reason for my interest is to make my life more stable by having freelance work = $. It seems that programming offers a way for me to escape my condition of poverty (I'm on the edge of homelessness right now) while at the same time making it possible for me to go to uni. I intend on being a math/philosophy major. I have messed with Python a little bit in the past, but it didn't click super well. The people who say I should choose Python say as much because it is considered a good first/teaching language, and because it is general-purpose. The people who say I should choose Ruby point out that I'm a very right-brained thinker, and that having multiple ways to do something will make it much easier for me to write good code. So, basically, I'm starting this thread as a dialog with people who know more than I do, as an attempt to make the decision. :-) I've thought about asking this on Stack Overflow, but they're much stricter about closing threads than here, and I'm sort of worried my thread will be closed. :/ TL;DR: Python or Ruby for freelance work opportunities ($) as a first language? Additional question (if anyone cares to answer): I have a personal feeling that if I devote myself to learning, I'd be worth hiring for a project after about 8 weeks of work. I base this on a conservative estimate of my intellectual capacities, as well as possessing the motivation to improve my life. Is my estimate necessarily inaccurate? Random tidbit: I'm in Portland, OR. I'll answer questions that are asked of me, if it can help the accuracy and insight contained within the dialog.

    Read the article

  • I'm a student learning C++ and I've recently found out about Ruby. Would learning (some of) Ruby help me with C++ or would it just confuse me?

    - by Von32
    Hi! As the title says, I'm a student who will be starting my second year of C++ very soon. I've discovered Ruby, however. While I've heard much buzz about the language before, I've disregarded it because I always thought it wasn't something that would be useful. However, I've found a number of FANTASTIC tutorials on Ruby and am interested in learning it (probably because it seems so straightforward). Would playing around with Ruby be a good or bad idea? I understand that there's no such thing as bad knowledge, but I'm afraid that Ruby will only confuse me when dealing with C++. How different from C++ is it? I've read it's based on C in some way, but my google-fu seems to be horrible today. How useful is Ruby in the real world? I'm not specifically asking about jobs; I'm more interested in what sort of applications may come from this language. Any specific examples worth looking at? Going back to question two: I've read some posts on here saying that Ruby and C++ can hold hands once in a while. How flexible is this relationship? Is it rare that this works? Thank you very much for your time! EDIT: This has to be the one community on the internet that doesn't suck. Why have I never posted before? You guys are awesome!

    Read the article

  • Blank screen after Switch User or Resume

    - by matt wilkie
    About half the time when I switch users or resume from standby, the screen goes blank (black). If I work the cursor keys I can hear the system bell when it gets to the end of the user list. I can also successfully log in, going from memory, but the screen stays black. Sometimes closing and re-opening the lid will light up the screen again. Pressing the special function keys to enable/disable the external monitor connection has no effect ([Fn]-[F5], [Fn]-[F6]). If none of the previous work, I need to put the computer into hibernation or do a full power-off to restore screen function. If I watch closely when switching users, I think I can see the screen initially start to light up and then quickly fade to black. The computer is an Acer Aspire 3500, model ZL6, running Ubuntu 10.10 installed 2 days ago. No proprietary drivers are in use. I'll provide a list of hardware details as soon as I can figure out how to generate that (didn't there used to be an entry for hardware details under the System menu?). Possibly related questions: "No resume after Hibernate or Standby", "When I resume from suspension the screen is blank", "Switch user fails to complete successfully". For what it's worth, a blank screen after resume also used to happen occasionally when the laptop was running XP Home, but nowhere near as often, perhaps 6 or 8 times a year. UPDATE: I found System > Administration > System Testing and ran the Monitor test. The screen went very, very dark, but the window elements could be discerned, and the whole screen flashed (from very, very dark to black). On the third repeat of that same test, the screen went fully black and stayed there. Moving the mouse (via touchpad) or touching keys did not wake it up again. I had to close the lid, put the computer into hibernation, and press the power button to restore it. UPDATE 2: output of lshw: http://pastebin.com/q7n8676r, lspci: http://pastebin.com/6ujzVK4r UPDATE 3: sometimes I can restore the screen by flipping to console 1 with ctrl-alt-F1 and then back to graphical with ctrl-alt-F7.

    Read the article

  • from Java to SAS

    - by Giovanni Rossi
    I am a seasoned Python, Java, and other-languages programmer with a (fairly advanced) mathematical education (so I do understand statistics and data mining, for example). For various reasons I am thinking of switching to the SAS/BI area (I name SAS because it might be, for me, a possible way to enter BI). My question, for whoever might have experience of both: in BI's current state, is it worth it? I mean, the days of big ideas in BI for business seem to be over (there are the APIs, and managers think they know what you can do with them), and my mathematical background might turn out to be superfluous. Also, the big companies now have their data organized and their BI procedures well established, and trying to analyze it from a different standpoint might not be what they want. Another difference: while in Java (etc.) development one codes and codes and codes, I don't know whether this is the case for BI; in fact, from what I read on the net, a BI (or OLAP, etc.) developer in a big organization is usually in a state of standby, and in fact does little coding. Any opinions, and in particular strong opinions, will be appreciated.

    Read the article

  • SSDT gotcha – Moving a file erases code analysis suppressions

    - by jamiet
    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project then any code analysis suppressions that reference that file will disappear from the suppression file. This makes sense if you think about it because the paths stored in the suppression file are no longer valid, but you probably won’t be aware of it until it happens to you. If you don’t know what code analysis is or you don’t know what the suppression file is then you can probably stop reading now, otherwise read on for a simple short demo. Let’s create a new project and add a stored procedure to it called sp_dummy. Naming stored procedures with a sp_ prefix is generally frowned upon and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties: A subsequent build causes a code analysis warning as we might expect: Let’s suppose we actually don’t mind stored procedures with sp_ prefixes, we can just right-click on the message to suppress and get rid of it: That causes a suppression file to get created in our project: Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now if we simply move the file within our project to a new folder notice that the suppression that we just created gets removed from the suppression file: As I alluded above this behaviour is intuitive because the path originally stored in the suppression file is no longer relevant but you’re probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of. @Jamiet   

    Read the article

  • Enjoy the 22nd 2012 Ig Nobel Awards Ceremony [Video]

    - by Jason Fitzpatrick
    Last night was the 22nd Ig Nobel award ceremony. If you weren't there to experience the festivities first hand, don't despair: you can watch the entire ceremony here. If you're unfamiliar with the Ig Nobel awards, Improbable Research, the group behind the awards, is happy to explain: "The Ig Nobel Prizes honor achievements that first make people laugh, and then make them think. The prizes are intended to celebrate the unusual, honor the imaginative — and spur people's interest in science, medicine, and technology. Every year, in a gala ceremony in Harvard's Sanders Theatre, 1200 splendidly eccentric spectators watch the winners step forward to accept their Prizes. These are physically handed out by genuinely bemused genuine Nobel laureates." Check out the above video to see the awards ceremony (jump to around the 50:00 mark to skip the setup phase) or hit up the link below to read about the 2012 winners. The 2012 Ig Nobel Prize Winners

    Read the article

  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, plus additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, containing business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide as a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component and how to manage the connection/session tokens. With this background, my question has the following three parts:
    1. I have concerns about moving from the relative safety of an SSIS job, which shields me from the ERP system's downtime and speed, to connecting directly with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier?
    2. It is my understanding that data access objects (the components that connect directly with the web services and convert responses into the data types I can then work with in my domain objects) should be singletons (and will act as an Adapter/Facade). Am I correct?
    3. As part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that must be provided on every subsequent request. Do I do this once and share it across the whole application? Do I set up a new "connection" for every user of my application and keep the token in their session scope (which might quickly hit licensing limits)? Do I set the "connection" up per page request? Or is there a design pattern I am missing that can manage multiple "connections", where a request uses the first free "connection"? (A sketch of that last idea follows below.) It is worth noting that if the ERP system dies, I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".
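
    On the third part, one common shape for the "first free connection" idea is a small token pool. Here is an illustrative sketch in C++ (the application itself is ColdFusion, so read this as language-neutral pseudocode in spirit; every name in it is hypothetical):

        #include <mutex>
        #include <queue>
        #include <string>

        // Hypothetical pool of pre-authenticated ERP session tokens.
        // acquire() hands out the first free token, release() returns it,
        // and invalidateAll() covers the "ERP died, start from scratch" case.
        class SessionPool {
            std::mutex m_;
            std::queue<std::string> free_;
        public:
            void add(std::string token) {
                std::lock_guard<std::mutex> lock(m_);
                free_.push(std::move(token));
            }
            bool acquire(std::string& out) {
                std::lock_guard<std::mutex> lock(m_);
                if (free_.empty()) return false; // caller logs in to mint a new token
                out = free_.front();
                free_.pop();
                return true;
            }
            void release(std::string token) { add(std::move(token)); }
            void invalidateAll() {
                std::lock_guard<std::mutex> lock(m_);
                free_ = {};                      // drop every token; force re-login
            }
        };

    Capping how many tokens are ever minted would also be a natural place to respect the licensing limits mentioned above.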

    Read the article

  • Can't use the hardware scissor any more, should I use the stencil buffer or manually clip sprites?

    - by Alex Ames
    I wrote a simple UI system for my game. There is a clip flag on my widgets that you can use to tell a widget to clip any children that try to draw outside their parent's box (for scrollboxes for example). The clip flag uses glScissor, which is fed an axis aligned rectangle. I just added arbitrary rotation and transformations to my widgets, so I can rotate or scale them however I want. Unfortunately, this breaks the scissor that I was using as now my clip rectangle might not be axis aligned. There are two ways I can think of to fix this: either by using the stencil buffer to define the drawable area, or by having a wrapper function around my sprite drawing function that will adjust the vertices and texture coords of the sprites being drawn based on the clipper on the top of a clipper stack. Of course, there may also be other options I can't think of (something fancy with shaders possibly?). I'm not sure which way to go at the moment. Changing the implementation of my scissor functions to use the stencil buffer probably requires the smallest change, but I'm not sure how much overhead that has compared to the coordinate adjusting or if the performance difference is even worth considering.
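
    For reference, marking the drawable area with the stencil buffer looks roughly like this in classic OpenGL (a minimal sketch assuming a context with a stencil buffer; drawClipShape and drawChildren are hypothetical helpers, and nested clippers would need incrementing reference values rather than a fixed 1):

        #include <GL/gl.h>

        void drawClipShape();   // hypothetical: draws the (possibly rotated) clip quad
        void drawChildren();    // hypothetical: draws the widget's children

        void drawClipped() {
            glEnable(GL_STENCIL_TEST);
            glClear(GL_STENCIL_BUFFER_BIT);

            // Pass 1: write 1s into the stencil where the clip shape is,
            // without touching the color buffer.
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
            glStencilFunc(GL_ALWAYS, 1, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
            drawClipShape();

            // Pass 2: draw children only where the stencil holds a 1.
            glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
            glStencilFunc(GL_EQUAL, 1, 0xFF);
            glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
            drawChildren();

            glDisable(GL_STENCIL_TEST);
        }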

    Read the article

  • Nvidia optimus and Steam (on 12.04)

    - by Seiryuu
    I've obtained a copy of the .deb for the Steam beta, but it was pretty disappointing to see that it simply doesn't run. Hardware: Dell XPS L502, with Nvidia Optimus. I have Bumblebee installed. Trying to run Steam with the Intel HD 3000 completely fails to start it; I receive the message "Installing breakpad exception handler for appid(steam)/version(1352224866_client)" followed by a crash with no other information provided. Trying to optirun steam runs the client, but as soon as it gets to the home screen, it says that the Nvidia drivers I am using are out of date (and Steam requires newer drivers to run). It's probably worth noting that it throws the same "Installing breakpad..." message when run with optirun, but in that case it doesn't crash the client immediately. Any way to fix this? Also, is there a way of manually updating the drivers in Bumblebee without breaking anything? Alternatively, is there a reliable way of completely disabling the Intel GPU (in order to use the Nvidia GPU exclusively)? Note: I am using Xmonad with gnome-fallback, if that makes a difference. However, when I tried everything mentioned with Unity (2D), everything was the same, so I guess it has nothing to do with the window manager in use.

    Read the article

  • Regarding sprite design and resolution for tablets and phones

    - by Dimitris P.
    I am about to start working on a game for Android devices, in my spare time, to get familiar with Android development. I'm more interested in using the best practices possible than in getting a quick result, and that is why I need some guidance regarding graphics. I think the game is going to be fully sprite-based, with everything in .bmp form or something similar. My question: should I design the sprites at a small resolution (i.e. for phone screens) and scale them up to fit larger screens (tablet screens), should I do it vice versa, or should I consider a completely different approach? Would designing a different set of sprites for each of the most-used resolutions be worth it, or are there simpler solutions with fewer drawbacks than the ones I mentioned above? (If I follow the first approach, for example, the larger the screen the worse the graphics will get, since every pixel of the original drawing will cover several pixels on the screen.) Is there a standard approach for dealing with this kind of problem? If you need me to be more detailed or clearer about something I mentioned (or forgot to), please don't hesitate to ask. Also, excuse me for any inaccurate use of the English language. Thank you in advance for your input.

    Read the article

  • Euclidean space and vector magnitude

    - by Starkers
    Below we have distances from the origin calculated in two different ways, giving the Euclidean distance, the Manhattan distance and the Chebyshev distance. Euclidean distance is what we use to calculate the magnitude of vectors in 2D/3D games, and that makes sense to me. Let's say we have a vector that gives us the range a spaceship with limited fuel can travel. If we calculated this with the Manhattan metric, our ship could travel a distance of X if it were travelling horizontally or vertically; however, the second it attempted to travel diagonally it could only travel about X/2! So, like I say, Euclidean distance does make sense. However, I still don't quite get how we calculate 'real' distances from the vector's magnitude. Here are two points, purple at (2,2) and green at (3,3). We can subtract one point from the other to derive a vector. Let's create a vector d to describe the magnitude and direction of purple from green:

        d = purple - green
        d = (purple.x, purple.y) - (green.x, green.y)
        d = (2, 2) - (3, 3)
        d = <-1, -1>

    Let's derive the magnitude of the vector via Pythagoras to get a Euclidean measurement:

        euc_magnitude = sqrt((x*x) + (y*y))
        euc_magnitude = sqrt((-1*-1) + (-1*-1))
        euc_magnitude = sqrt(1 + 1)
        euc_magnitude = sqrt(2)
        euc_magnitude = 1.41

    Now, if the answer had been 1, that would make sense to me, because 1 unit (in the direction described by the vector) from the green is bang on the purple. But it's not; it's 1.41. And 1.41 units in the direction described, to me at least, makes us overshoot the purple by almost half a unit. So what do we do to the magnitude to allow us to calculate real distances on our point graph? Worth noting I'm a beginner just working my way through theory. Haven't programmed a game in my life!
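
    For what it's worth, a quick numeric check of the arithmetic above (my own sketch) shows that walking the full 1.41 units from green along the normalized direction lands exactly on purple, with no overshoot:

        #include <cmath>
        #include <cstdio>

        int main() {
            double gx = 3, gy = 3;                  // green
            double dx = 2 - gx, dy = 2 - gy;        // vector to purple: <-1, -1>
            double mag = std::sqrt(dx*dx + dy*dy);  // 1.414...

            // Travel mag units from green along the unit direction d/|d|:
            double px = gx + (dx / mag) * mag;
            double py = gy + (dy / mag) * mag;
            std::printf("(%g, %g)\n", px, py);      // prints (2, 2): exactly purple
        }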

    Read the article

  • Unit Tests as a learning tool - a good idea?

    - by Ekkehard.Horner
    I'm interested in ways and means of learning (a) programming language(s) efficiently. I believe that using unit-test concepts and infrastructure early in that process is a good thing, even better than starting with "Hello world". Why: to write a decent program, even for a toy/restricted problem, in a new language, you have to master many heterogeneous concepts (control flow, variables, IO, ...), and you are tempted to glance over details just to get your program to work. Putting (your understanding of) the facts about the new language into assertions with good descriptions (= success messages) enforces thinking things through, clearness and precision. Grouping topics and adding assertions to such groups is much easier than incorporating features from chapter 2 of your "Learning X" book into your chapter 1 program. Why not: 'real' unit tests are meant to output "1234 tests ok; 1 failure: saveWorld() chokes on negative input"; 'didactic' unit tests should output relevant facts about the new language, like:

        perl6 10-string.t
        # ### p5chop ...
        ok 13 - p5chop( "cbä" ) returns "ä"
        ok 14 - after that, victim is changed to "cb"
        # ### (p6) chop ...
        ok 27 - (p6) chop( "cbä" ) returns chopped copy: "cb"
        ok 18 - after that, victim is unchanged: "cbä"
        # ### chomp ...

    So (mis?)using unit tests may be counterproductive: practicing, while learning, actions you wouldn't use professionally. How: writing 'didactic' unit tests in languages with lightweight testing systems (Perl 5/6) is easy; (mis?)using more elaborate systems (JUnit, CppUnit) may not be worth the effort, or may not be suitable for a person just starting with a new language. So:
    1. Is using unit tests as a learning tool a bad idea?
    2. Can the unit-test tool(s) of your favourite language(s) be used didactically?
    3. Should implementation details (eventually) be discussed here or over at stackoverflow.com?
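
    To make the idea concrete outside Perl, here is what a couple of 'didactic' assertions about a new language's string semantics might look like in C++ (my own sketch, not from the question; it pins down the same mutating-versus-copying distinction the chop output above documents):

        #include <cassert>
        #include <string>

        int main() {
            // Fact to pin down while learning: pop_back() mutates in place.
            std::string victim = "cba";
            victim.pop_back();
            assert(victim == "cb");

            // Fact: substr() returns a chopped copy; the original is unchanged.
            std::string original = "cba";
            std::string copy = original.substr(0, original.size() - 1);
            assert(copy == "cb" && original == "cba");
        }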

    Read the article

  • How do I Series: Connecting an Expression Blend Project to Team Foundation Server

    - by Enrique Lima
    I have heard of people wanting and needing to add projects created in Expression Blend to Team Foundation Server. Here is the recipe:
    1. Create your project in Expression Blend and click OK.
    2. Select the option to open your recently created project in Visual Studio. Once that option is selected, your solution will open up in Visual Studio; close Expression Blend at this point.
    3. Now, I want to add this project to Source Control.
    4. Next, I connect to my TFS environment and pick the location to save my project.
    5. Once the project is added, I get a status window of pending changes for my project; all that we are left to do is check in those changes.
    6. Since we have checked in our project, we can now close Visual Studio, and we will proceed to open Expression Blend again. And select our project we will!
    We notice some differences from before, just by opening it. What differences, you say?!? Notice the lock to the right of the item name... and we also get this when we right-click... And there we have it. It takes a combination of tools to achieve this, but it is well worth it.

    Read the article

  • Is it useful to unit test methods where the only logic is guards?

    - by Vaccano
    Say I have a method like this:

        public void OrderNewWidget(Widget widget)
        {
            if ((widget.PartNumber > 0) && (widget.PartAvailable))
            {
                WigdetOrderingService.OrderNewWidgetAsync(widget.PartNumber);
            }
        }

    I have several such methods in my code (the front half of an async web service call). I am debating whether it is useful to get them covered with unit tests. Yes, there is logic here, but it is only guard logic (meaning I make sure I have the stuff I need before I allow the web service call to happen). Part of me says "sure you can unit test them, but it is not worth the time" (I am on a project that is already behind schedule). But the other side of me says: if you don't unit test them, and someone changes the guards, then there could be problems. And the first part of me says back: if someone changes the guards, then you are just making more work for them (because now they have to change the guards and the unit tests for the guards). For example, if my service assumes responsibility for checking widget availability, then I may not want that guard any more; if it is under unit test, I have to change two places now. I see pros and cons both ways, so I thought I would ask what others have done.
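
    For scale, covering that guard is typically only a few lines. Here is a sketch of the two interesting cases as a C++ analog of the method above (hypothetical names throughout, with a recording fake standing in for the web service):

        #include <cassert>
        #include <vector>

        struct Widget { int partNumber; bool partAvailable; };

        struct FakeOrderingService {
            std::vector<int> ordered;                    // records calls for assertions
            void orderNewWidgetAsync(int part) { ordered.push_back(part); }
        };

        void orderNewWidget(FakeOrderingService& svc, const Widget& w) {
            if (w.partNumber > 0 && w.partAvailable)     // the guard under test
                svc.orderNewWidgetAsync(w.partNumber);
        }

        int main() {
            FakeOrderingService svc;
            orderNewWidget(svc, {42, true});             // passes the guard
            assert(svc.ordered.size() == 1 && svc.ordered[0] == 42);

            orderNewWidget(svc, {0, true});              // blocked: bad part number
            orderNewWidget(svc, {42, false});            // blocked: unavailable
            assert(svc.ordered.size() == 1);             // no extra orders placed
        }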

    Read the article

  • Is Intellisense faster in Visual Studio 2012 compared to Visual Studio 2010 for C++ projects?

    - by syplex
    We switched to VS2010 from VS2003 a few months ago, and there are many, many improvements. But the speed of IntelliSense is not one of them (although it does generate higher-quality results, which is great). I read that IntelliSense and the MSDN help system were being improved in VS2012, so I'm curious whether it's actually faster. The only data I could find were graphs from an early release (VS2011). For the record, I am using a vanilla install of VS2010 with SP1 on Windows 7 SP1 (x64), with no plugins or add-ins running. What I'm looking for specifically: Has the speed of IntelliSense autocomplete improved? Has the speed of F12 (go to definition) improved? The answers to these questions will help determine whether VS2012 is worth the money to upgrade at this time, as the IntelliSense slowness would be the only major reason for upgrading. I'd also be interested in knowing whether the help system has improved. I'm currently using the MSDN help from VS2008 SP1 because it has filtering and is faster.

    Read the article

  • Enigmail - how to encrypt only part of the message?

    - by Lukasz Zaroda
    When I confirmed my OpenPGP key on Launchpad, I got a mail from them that was only partially encrypted with my key (only a few paragraphs inside the message). Is it possible to encrypt only a chosen part of a message with Enigmail? Or what would be the easiest way to accomplish it? Added #1: I found a pretty convenient way of producing ASCII-armoured encrypted messages by using the Nautilus interface (useful for anyone who for some reason doesn't like to work with the terminal). You need to install the Nautilus-Actions Configuration Tool and add a script there with a name such as "Encrypt in ASCII" and these parameters: path: gpg; parameters: --batch -sear %x %f (that is, sign, encrypt, ASCII-armour, recipient). The trick is that you can now create a file whose extension is the name of your recipient, fill it with your message, right-click it in Nautilus, choose "Encrypt in ASCII", and you will have an encrypted ASCII file whose content you can (probably) just copy into your message. But if anybody knows a more convenient solution, please share it. Added #1B: In the above case, if you care about the security of your messages, it's worth turning off the invisible backup files that gedit creates every time you create a new document, or just remembering to delete them.

    Read the article
