Search Results

Search found 261 results on 11 pages for 'timothy miller'.

Page 2 of 11

  • ~/.bashrc return can only 'return' from a function or sourced script

    - by Timothy
    I am trying to set up an OpenStack box to have a look at OpenStack Object Storage (Swift). Looking through the web I found this link: http://swift.openstack.org/development_saio.html#loopback-section I followed the instructions line by line but got stuck on step 7 in the "Getting the code and setting up test environment" section. When I execute ~/.bashrc I get: line 6: return: can only 'return' from a function or sourced script. Below is the line 6 extract from ~/.bashrc. My first reaction is to comment this line out, but I don't know what it does. Can anyone help? # If not running interactively, don't do anything [ -z "$PS1" ] && return I'm running Ubuntu 12.04 as a VM on Hyper-V, if knowing that is useful.
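
    For context, a minimal sketch of what is happening (standard Ubuntu ~/.bashrc behaviour; the file is meant to be sourced by an interactive shell, not executed as a program):

        # Line 6 guards the rest of ~/.bashrc: if $PS1 is unset, the shell is
        # not interactive, so stop sourcing the file here.
        [ -z "$PS1" ] && return

        # Executing the file runs it in a new, non-interactive shell, where a
        # top-level `return` is an error:
        #   bash ~/.bashrc    # -> "return: can only 'return' from a function or sourced script"
        # Sourcing it runs it in the current shell, as intended:
        #   source ~/.bashrc  # or:  . ~/.bashrc

    So the line is safe to keep; the error only appears because the file was executed rather than sourced (the SAIO step presumably intends `. ~/.bashrc`).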

    Read the article

  • Testing Workflows – Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx

    In this post I'm going to outline a few common methods that can be used to increase the coverage of your test suite. This won't be yet another post on why you should be doing testing; there are plenty of those already. Assuming you know you should be testing, then comes the problem of how you actually fit that into your day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it?

    There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows to writing a program. When writing a program you can take a top-down approach, where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach, where the low-level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid under certain contexts. Each approach you are skilled at applying is another tool in your tool belt. The more vectors of attack you have on a problem, the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and your level of quality in general. Think of each workflow as an opportunity that is available for you to take.

    Test workflows basically fall into two categories: test-first or test-after. Test-first is the best approach. However, this post isn't about the one and only best approach. I want to focus more on the lesser known, less ideal approaches that still provide an opportunity for adding tests. In this post I'll enumerate some test-after workflows. In my next post I'll cover test-first.

    Bug Reporting

    When someone calls you up or forwards you an email with a vague description of a bug, it's usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system. This can be problematic. Reproduction plans, when written down, often skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting. Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug. The test project guarantees that all aspects of the environment are set up properly and no steps are missing. The language in the test project is much more precise than the English that goes into a bug tracking system. This workflow can easily be extended to enhancement requests as well as bug reporting.

    Exploratory Testing

    Exploratory testing comes in when you aren't sure how the system will behave in a new scenario. The scenario wasn't planned for in the initial system requirements and there isn't an existing test for it. By definition the system behaviour is "undefined". So write a new unit test to define that behaviour. Add assertions to the test to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite.

    Examples

    This workflow is especially good when developing APIs. When you are finally done with your production API, then comes the job of writing documentation on how to consume it. Good documentation will also include code examples. Don't let these code examples merely exist in some accompanying manual; implement them in a test suite. Example tests and documentation do not have to be created after the production API is complete. It is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests

    Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade the end users will be hosed, and they will be scratching their heads as to how it could be possible that an update got released with this core functionality broken. The tests for this core functionality are referred to as "smoke tests". It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis

    Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item. The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%. Coverage analysis shouldn't be used strictly for numbers reporting. Companies shouldn't set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage. These arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless.

    The analysis tool will break down the coverage by the various classes and methods in projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, this is an opportunity to add tests. When drilling through the classes there will generally be two types of reaction to a surprisingly low test coverage number. The first reaction type is a recognition that there is low-hanging fruit to be picked: there may be some classes or methods that aren't being tested, which easily could be. The other reaction type is "OMG": this is where you find a critical piece of code that isn't under test. In both cases, go and add the missing tests.

    Test Refactoring

    The general theme of this post up to this point has been how to add more and more tests to a test suite. I'll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money. This is true regardless of whether the code is production code or test code. Remember that the primary goal of the test suite is that it be easy to read, so that people can easily determine the specifications of the system. Make sure that adding more and more tests doesn't interfere with this primary goal.

    Perform code reviews on the test suite as often as on production code. Hold the test code up to the same high readability standards as the production code. If the tests are hard to read then change them. Look to remove duplication: duplicate setup code between two or more test methods can be moved to a shared function. Entire test methods can be removed if it is found that the scenario they test is covered by other tests. It's OK to delete a test that isn't pulling its own weight anymore.

    Remember to only start refactoring when all the tests are green. Don't refactor the tests and the production code at the same time. An automated test suite can be thought of as a double-entry bookkeeping system: the unchanging, passing production code serves as the tests for the test suite while you refactor the tests. As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done. Fit it into the standard red-green-refactor cycle. The refactor step not only applies to the production code but also to the tests, though not at the same time. Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy).

    That about covers most of the test-after workflows I can think of. In my next post I'll get into test-first workflows.
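
    To make the Bug Reporting workflow concrete, here is a minimal C# sketch (NUnit assumed; InvoiceCalculator and the discount rule are hypothetical, not from the post). Instead of logging reproduction steps in a tracker, the bug is captured as a failing test:

        using NUnit.Framework;

        [TestFixture]
        public class InvoiceCalculatorTests
        {
            // Bug report: "discount is ignored for orders over $1000".
            // The repro plan becomes a precise, executable statement.
            [Test]
            public void Total_AppliesDiscount_ForOrdersOverOneThousand()
            {
                var calculator = new InvoiceCalculator(discountRate: 0.05m);

                decimal total = calculator.Total(subtotal: 1200m);

                // Fails until the bug is fixed; afterwards it guards against regression.
                Assert.AreEqual(1140m, total);
            }
        }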

    Read the article

  • Finding shortest path on a hexagonal grid

    - by Timothy Mayes
    I'm writing a turn-based game that has some simulation elements. One task I'm hung up on currently is path finding. What I want to do is, each turn, move an AI adventurer one tile closer to his target using his current x,y and his target x,y. In trying to figure this out myself I can determine 4 directions no problem by using dx = currentX - targetX and dy = currentY - targetY, but I'm not sure how to determine which of the 6 directions is actually the "best" or "shortest" route. For example, the way it's set up currently I use East, West, NE, NW, SE, SW, but to get to the NE tile I move East then NW instead of just moving NW. I hope this wasn't all rambling. Even just a link or two to get me started would be nice. Most of the info I've found is on drawing the grids and grokking the weird coordinate system needed. Thanks in advance. Tim
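
    A sketch of one common approach (assuming axial hex coordinates; the six offsets below are for a pointy-top layout and may need remapping to your grid): compute the true hex distance, then greedily step to whichever neighbour brings you closest to the target.

        using System;

        public static class HexPath
        {
            // Axial-coordinate neighbour offsets (assumed layout; relabel to
            // your E/W/NE/NW/SE/SW convention as needed).
            static readonly (int dq, int dr)[] Directions =
            {
                (1, 0), (-1, 0), (1, -1), (0, -1), (0, 1), (-1, 1)
            };

            // True hex distance in axial coordinates.
            public static int Distance(int q1, int r1, int q2, int r2)
            {
                int dq = q1 - q2, dr = r1 - r2;
                return (Math.Abs(dq) + Math.Abs(dr) + Math.Abs(dq + dr)) / 2;
            }

            // Greedy single step: pick the neighbour closest to the target,
            // so NE is chosen directly instead of East-then-NW.
            public static (int q, int r) StepToward(int q, int r, int tq, int tr)
            {
                var best = (q, r);
                int bestDist = Distance(q, r, tq, tr);
                foreach (var (dq, dr) in Directions)
                {
                    int d = Distance(q + dq, r + dr, tq, tr);
                    if (d < bestDist) { bestDist = d; best = (q + dq, r + dr); }
                }
                return best;
            }
        }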

    Read the article

  • C4C - 2012

    - by Timothy Wright
    C4C, in Kansas City, is always a fun event. At points it gets to be a pressure cooker as you zone in, trying to crank out some fantastic code in just a few hours, but it is always fun: a great challenge of your skill as a software developer, and for a good cause. This year my team helped the United Cerebral Palsy of Greater Kansas City organization add online job applications and a database for tracking internal training. I keep finding that there is one key rule to pulling off a successful C4C weekend project, and that is "Keep It Simple". Each time you want to add that one cool little feature you have to ask yourself: Is it really necessary? And do I have time for that? And if you are going to learn something new, you should ask yourself if you're really going to be able to learn that AND finish the project in the given time. Sometimes the less elegant code is the better code, if it works. That said, you get a great amount of freedom to build the solution the way you want. Typically, the software we build for the charities will save them a lot of money and time and make their jobs easier. You are able to build the software you know you are capable of creating from your own ideas. I highly recommend any developers in the area sign up next year and show off your skills. I know I will!

    Read the article

  • How to make a ball fall faster on a ramp? Unity3D/C#

    - by Timothy Williams
    So, I'm making a ball game, where you pick up the ball, drop it on a ramp, and it flies off into blocks. The only problem right now is that it falls at a normal speed, then lightly rolls off, not nearly fast enough to get over the wall and hit the blocks. Is there any way to make the ball go faster down the ramp? Maybe even make it go faster depending on what height you dropped it from (e.g. if you hold it way above the ramp and drop it, it will drop faster than if you dropped it right above the ramp). Thanks.
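
    One common approach (a sketch, not the only way): apply stronger per-ball gravity so the ball accelerates harder down the ramp. Drop height then matters automatically, since a higher release gives more speed at the bottom (v² = 2gh). The multiplier value is a tuning assumption:

        using UnityEngine;

        // Attach to the ball (assumes it has a Rigidbody).
        public class FastFallingBall : MonoBehaviour
        {
            public float gravityMultiplier = 3f;   // tune to taste
            Rigidbody rb;

            void Start()
            {
                rb = GetComponent<Rigidbody>();
                rb.useGravity = false;             // replace default gravity with our stronger one
            }

            void FixedUpdate()
            {
                // Acceleration mode ignores mass, so every ball falls the same way.
                rb.AddForce(Physics.gravity * gravityMultiplier, ForceMode.Acceleration);
            }
        }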

    Read the article

  • Certifications in the new Certify

    - by richard.miller
    The most up-to-date certifications are now available in Certify - New Additions Feb 2011! What's not yet available can still be found in Classic Certify. We think that the new search will save you a ton of time and energy, so try it out and let us know. NOTE: Not all cert information is in the new system. If you type in a product name and do not find it, send us feedback so we can find the team to add it! Also, we have been listening to every feedback message coming in. We have plans to make some improvements based on your feedback AND add the missing data. Thanks for your help! Japanese: 日本語

    Note: Oracle Fusion Middleware certifications are available via oracle.com: Fusion Middleware Certifications.

    Certifications viewable in the new Certify Search:
    - Oracle Database
    - Oracle Database Options
    - Oracle Database Clients (they apply to both 32-bit and 64-bit)
    - Oracle Enterprise Manager
    - Oracle Beehive
    - Oracle Collaboration Suite
    - Oracle E-Business Suite, now with Release 11i & 12!
    - Oracle Siebel Customer Relationship Management (CRM)
    - Oracle Governance, Risk, and Compliance Management
    - Oracle Financial Services
    - Oracle Healthcare
    - Oracle Life Sciences
    - Oracle Enterprise Taxation Management
    - Oracle Retail
    - Oracle Utilities
    - Oracle Cross Applications
    - Oracle Primavera
    - Oracle Agile
    - Oracle Transportation Management (G-L)
    - Oracle Value Chain Planning
    - Oracle JD Edwards EnterpriseOne (NEW! Jan 2011) 8.9+ and SP23+
    - Oracle JD Edwards World (A7.3, A8.1, A9.1, and A9.2)

    Certifications viewable in Classic Certify (the "old" user interface; clicking the "Classic Certify" link from Certifications > QuickLinks will take you there):
    - Enterprise PeopleTools Release 8.49, Release 8.50, and Release 8.51

    Other Resources: see the Tips and Tricks for the new Certify, watch the 4-minute introduction to the new Certify, or see how to get the most out of Certify with an advanced searching and features demo.

    Read the article

  • ASP.NET MVC 3 Hosting :: Error Handling and CustomErrors in ASP.NET MVC 3 Framework

    - by C. Miller
    So, what else is new in MVC 3? MVC 3 now has a GlobalFilterCollection that is automatically populated with a HandleErrorAttribute. This default filter attribute brings with it a new way of handling errors in your web applications. In short, you can now handle errors inside of the MVC pipeline. What does that mean? It gives you direct programmatic control over handling your 500 errors, in the same way that ASP.NET and CustomErrors give you configurable control over handling your HTTP error codes. How does that work out? Think of it as a routing table specifically for your exceptions; it's pretty sweet!

    Global Filters

    The new Global.asax file now has a RegisterGlobalFilters method that is used to add filters to the new GlobalFilterCollection, statically located at System.Web.Mvc.GlobalFilter.Filters. By default this method adds one filter, the HandleErrorAttribute.

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterGlobalFilters(GlobalFilterCollection filters)
            {
                filters.Add(new HandleErrorAttribute());
            }
        }

    HandleErrorAttributes

    The HandleErrorAttribute is pretty simple in concept: MVC has already adjusted us to using filter attributes for our AcceptVerbs and RequiresAuthorization; now we are going to use them for (as the name implies) error handling, and we are going to do so on a (also as the name implies) global scale. The HandleErrorAttribute has properties for ExceptionType, View, and Master. The ExceptionType allows you to specify which exception the attribute should handle. The View allows you to specify which error view (page) you want it to redirect to. Last but not least, the Master allows you to control which master page (or, as Razor refers to them, Layout) you want to render with, even if that means overriding the default layout specified in the view itself.

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterGlobalFilters(GlobalFilterCollection filters)
            {
                filters.Add(new HandleErrorAttribute
                {
                    ExceptionType = typeof(DbException),
                    // DbError.cshtml is a view in the Shared folder.
                    View = "DbError",
                    Order = 2
                });
                filters.Add(new HandleErrorAttribute());
            }
        }

    Error Views

    All of your views still work like they did in the previous version of MVC (except of course that they can now use the Razor engine). However, a view that is used to render an error cannot have a specified model! This is because it already has a model: System.Web.Mvc.HandleErrorInfo.

        @model System.Web.Mvc.HandleErrorInfo
        @{
            ViewBag.Title = "DbError";
        }
        <h2>A Database Error Has Occurred</h2>
        @if (Model != null)
        {
            <p>@Model.Exception.GetType().Name<br />
            thrown in @Model.ControllerName @Model.ActionName</p>
        }

    Errors Outside of the MVC Pipeline

    The HandleErrorAttribute will only handle errors that happen inside of the MVC pipeline, better known as 500 errors. Errors outside of the MVC pipeline are still handled the way they have always been with ASP.NET: you turn on custom errors, specify error codes and paths to error pages, and so on. It is important to remember that these will happen for anything and everything outside of what the HandleErrorAttribute handles, and whenever an error is not handled by the HandleErrorAttribute from inside of the pipeline.

        <system.web>
          <customErrors mode="On" defaultRedirect="~/error">
            <error statusCode="404" redirect="~/error/notfound"></error>
          </customErrors>
        </system.web>

    Sample Controllers

        public class ExampleController : Controller
        {
            public ActionResult Exception()
            {
                throw new ArgumentNullException();
            }
            public ActionResult Db()
            {
                // Inherits from DbException
                throw new MyDbException();
            }
        }

        public class ErrorController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }
            public ActionResult NotFound()
            {
                return View();
            }
        }

    Putting It All Together

    If we have all the code above included in our MVC 3 project, here is how the following scenarios will play out:

    1. A controller action throws an Exception. You will remain on the current page and the global HandleErrorAttributes will render the Error view.
    2. A controller action throws any type of DbException. You will remain on the current page and the global HandleErrorAttributes will render the DbError view.
    3. Go to a non-existent page. You will be redirected to the Error controller's NotFound action by the CustomErrors configuration for HTTP status code 404.

    But don't take my word for it; download the sample project and try it yourself.

    Three Important Lessons Learned

    For the most part this is all pretty straightforward, but there are a few gotchas that you should remember to watch out for:

    1) Error views have models, but they must be of type HandleErrorInfo. It is confusing at first to think that you can't control the M in an MVC page, but it's for a good reason. Errors can come from any action in any controller, and no redirect is taking place, so the view engine is just going to render an error view with the only data it has: the HandleErrorInfo model. Do not try to set the model on your error page or pass in a different object through a controller action; it will just blow up and cause a second exception after your first exception!

    2) When the HandleErrorAttribute renders a page, it does not pass through a controller or an action. The standard web.config CustomErrors literally redirect a failed request to a new page. The HandleErrorAttribute is just rendering a view, so it is not going to pass through a controller action. But that's OK! Remember, a controller's job is to get the model for a view, but an error already has a model ready to give to the view, thus there is no need to pass through a controller. That being said, the normal ASP.NET custom errors still need to route through controllers. So if you want to share an error page between the HandleErrorAttribute and your web.config redirects, you will need to create a controller action and route for it. But then, when you render that error view from your action, you can only use the HandleErrorInfo model or the ViewData dictionary to populate your page.

    3) The HandleErrorAttribute obeys whether CustomErrors are on or off, but does not use their redirects. If you turn CustomErrors off in your web.config, the HandleErrorAttributes will stop handling errors. However, that is the only configuration these two mechanisms share. The HandleErrorAttribute will not use your defaultRedirect property, or any other errors registered with custom errors.

    In Summary

    The HandleErrorAttribute is for displaying 500 errors that were caused by exceptions inside of the MVC pipeline. The custom errors are for redirecting from error pages caused by other HTTP codes.

    Read the article

  • Randomly placing items script not working - sometimes items spawn in walls, sometimes items spawn in weird locations

    - by Timothy Williams
    I'm trying to figure out a way to randomly spawn items throughout my level; however, I need to make sure they won't spawn inside another object (walls, etc.). Here's the code I'm currently using, based on the Physics.CheckSphere() function. It runs in OnLevelWasLoaded(). It spawns the items perfectly fine, but sometimes items spawn partway into walls, and sometimes items spawn outside of the spawn box range (no clue why it does that).

        //This is what randomly generates all the items.
        void SpawnItems ()
        {
            if (Application.loadedLevelName == "Menu" || Application.loadedLevelName == "End Demo")
                return;

            //The bottom corner of the box we want to spawn items in.
            Vector3 spawnBoxBot = Vector3.zero;
            //Top corner.
            Vector3 spawnBoxTop = Vector3.zero;

            //If we're in the dungeon, set the box to the dungeon box and tell the items we want to spawn.
            if (Application.loadedLevelName == "dungeonScene")
            {
                spawnBoxBot = new Vector3 (8.857f, 0, 9.06f);
                spawnBoxTop = new Vector3 (-27.98f, 2.4f, -15);
                itemSpawn = dungeonSpawn;
            }

            //Spawn all the items.
            for (i = 0; i != itemSpawn.Length; i ++)
            {
                spawnedItem = null;
                //Zeroes out our random location
                Vector3 randomLocation = Vector3.zero;
                //Gets the meshfilter of the item we'll be spawning
                MeshFilter mf = itemSpawn[i].GetComponent<MeshFilter>();
                //Gets its bounds (see how big it is)
                Bounds bounds = mf.sharedMesh.bounds;
                //Get its radius
                float maxRadius = new Vector3 (bounds.extents.x + 10f, bounds.extents.y + 10f, bounds.extents.z + 10f).magnitude * 5f;
                //Set which layer is the no walls layer
                var NoWallsLayer = 1 << LayerMask.NameToLayer("NoWallsLayer");
                //Use that layer as your layermask.
                LayerMask layerMask = ~(1 << NoWallsLayer);

                //If we're in the dungeon, certain items need to spawn on certain halves.
                if (Application.loadedLevelName == "dungeonScene")
                {
                    if (itemSpawn[i].name == "key2" || itemSpawn[i].name == "teddyBearLW" || itemSpawn[i].name == "teddyBearLW_Admiration" || itemSpawn[i].name == "radio")
                        randomLocation = new Vector3(Random.Range(spawnBoxBot.x, -26.96f), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, -2.141f));
                    else
                        randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(-2.374f, spawnBoxTop.z));
                }
                //Otherwise just spawn them in the box.
                else
                    randomLocation = new Vector3(Random.Range(spawnBoxBot.x, spawnBoxTop.x), Random.Range(spawnBoxBot.y, spawnBoxTop.y), Random.Range(spawnBoxBot.z, spawnBoxTop.z));

                //This is what actually spawns the item. It checks to see if the spot where we want to instantiate it is clear, and if so it instantiates it. Otherwise we have to repeat the whole process again.
                if (Physics.CheckSphere(randomLocation, maxRadius, layerMask))
                    spawnedItem = Instantiate(itemSpawn[i], randomLocation, Random.rotation);
                else
                    i --;

                //If we spawned something, set its name to what it's supposed to be. Removes the (clone) addon.
                if (spawnedItem != null)
                    spawnedItem.name = itemSpawn[i].name;
            }
        }

    What I'm asking is whether you know what's going wrong with this code that it would spawn stuff in walls. Or, if you could provide me with links/code/ideas for a better way to check whether an item will spawn in a wall (some other function than Physics.CheckSphere). I've been working on this for a long time, and nothing I try seems to work. Any help is appreciated.
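
    For what it's worth, two details in the posted script look inverted and would explain items landing inside walls. Physics.CheckSphere returns true when any collider overlaps the sphere, so the script instantiates exactly when the spot is blocked; and the layer index returned by LayerMask.NameToLayer is shifted twice when building the mask. A sketch of the corrected check (same names as the question's code):

        // NameToLayer already returns the layer index; shift it only once to
        // build the mask, then invert so the "no walls" layer is ignored.
        int noWallsLayer = LayerMask.NameToLayer("NoWallsLayer");
        int layerMask = ~(1 << noWallsLayer);

        // CheckSphere is true when something overlaps: spawn only when CLEAR.
        if (!Physics.CheckSphere(randomLocation, maxRadius, layerMask))
            spawnedItem = Instantiate(itemSpawn[i], randomLocation, Random.rotation);
        else
            i--;  // spot blocked; retry this item with a new random location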

    Read the article

  • DRY and SRP

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/06/11/dry-and-srp.aspx

    Kent Beck's XP Simplicity Rules (aka Four Rules of Simple Design) are a prioritized list of rules that, when applied to your code, generally yield a great design. As you'll see from the above link, the list has slightly evolved over time. I find today they are usually listed as:

    1. All Tests Pass
    2. Don't Repeat Yourself (DRY)
    3. Express Intent
    4. Minimalistic

    These are prioritized. If your code doesn't work (rule 1) then everything else is forfeit. Go back to rule one and get the code working before worrying about anything else. Over the years the community has debated whether the priority of rules 2 and 3 should be reversed. Some say a little duplication in the code is OK as long as it helps express intent. I've debated it myself. This recent post got me thinking about this again, hence this post.

    I don't think it is fair to compare "Expressing Intent" against "DRY". This is a comparison of apples to oranges. "Expressing Intent" is a principle of code quality. "Repeating Yourself" is a code smell. A code smell is merely an indicator that there might be something wrong with the code. It takes further investigation to determine if a violation of an underlying principle of code quality has actually occurred. For example, "using nouns for method names", "using verbs for property names", or "using Booleans for parameters" are all code smells that indicate that code probably isn't doing a good job at expressing intent. They are usually very good indicators. But what principle is the code smell of duplication pointing to, and how good of an indicator is it?

    Duplication in the code base is bad for a couple of reasons. If you need to make a change and that change needs to be made in a number of locations, it is difficult to know if you have caught all of them. This can lead to bugs if/when one of those locations is overlooked. By refactoring the code to remove all duplication you are left with only one place to change, thereby eliminating this problem.

    With most projects the code becomes the single source of truth for the project. If production code is inconsistent with a five-year-old requirements or design document, the production code that people are currently living with is usually declared the current reality (or truth). Requirements or design documents at this age in a project life cycle are usually of little value. Although comparing production code to external documentation is usually straightforward, duplication within the code base muddles this declaration of truth. When code is duplicated, small discrepancies will creep in between the two copies over time. The question then becomes: which copy is correct? As different factions debate how the software should work, trust in the software and the team behind it erodes.

    The code smell of duplication points to a violation of the "Single Source of Truth" principle. Let me define that as: a stakeholder's requirement for a software change should never cause more than one class to change. Violation of the Single Source of Truth principle will always result in duplication in the code. However, the inverse is not always true: duplication in the code does not necessarily indicate that there is a violation of the Single Source of Truth principle.

    To illustrate this, let's look at a retail system where the system will (1) send a transaction to a bank and (2) print a receipt for the customer. Although these are two separate features of the system, they are closely related. The reason for printing the receipt is usually to provide an audit trail back to the bank transaction. Both features use the same data: amount charged, account number, transaction date, customer name, retail store name, et cetera. Because both features use much of the same data, there is likely to be a lot of duplication between them. This duplication can be removed by making both features use the same data access layer.

    Then come the divergent requirements. The receipt stakeholder wants a change so that the account number has the last few digits masked out to protect the customer's privacy. That can be solved with a small IF statement whilst still eliminating all duplication in the system. Then the bank wants to take a picture of the customer as well as capture their signature and/or PIN for enhanced security. Then the receipt owner wants to pull data from a completely different system to report the customer's loyalty program point total.

    After a while you realize that the two stakeholders have somewhat similar, but ultimately different, responsibilities. They have their own reasons for pulling the data access layer in different directions. Then it dawns on you: the Single Responsibility Principle. There should never be more than one reason for a class to change. In this example we have two stakeholders giving two separate reasons for the data access class to change. It is a clear violation of the Single Responsibility Principle. That's a problem, because it can often lead to the project owner pitting the two stakeholders against each other in a vain attempt to get them to work out a mutual single source of truth. But that doesn't exist. There are two completely valid truths that the developers need to support. How is this to be supported while honouring the Single Responsibility Principle? The solution is to duplicate the data access layer and let each stakeholder control their own copy.

    The Single Source of Truth and Single Responsibility Principles are very closely related. SST tells you when to remove duplication; SRP tells you when to introduce it. They may seem to be fighting each other, but really they are not. The key is to clearly identify the different responsibilities (or sources of truth) over a system. Sometimes there is a single person with that responsibility; other times there are many. This can be especially difficult if the same person has dual responsibilities. They might not even realize they are wearing multiple hats.

    In my opinion Single Source of Truth should be listed as the second rule of simple design, with Express Intent at number three. Investigation of the DRY code smell should yield the proper application of SST, without violating SRP. When necessary, leave duplication in the system and let the class names express the different people that are responsible for controlling them. Knowing all the people with responsibilities over a system is the higher priority, because you'll need to know this before you can express it. Although it may be a code smell when there is duplication in the code, it does not necessarily mean that the coder has chosen to be expressive over DRY or that the code is bad.
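
    A small C# sketch of where the retail example ends up (the types are illustrative, not from the post): after the split, each stakeholder owns its own representation instead of sharing one data access layer, so each class has exactly one reason to change.

        // Owned by the bank stakeholder.
        public class BankTransaction
        {
            public string AccountNumber { get; set; }       // full number: the bank needs it
            public decimal Amount { get; set; }
            public byte[] CustomerSignature { get; set; }   // the bank's security requirement
        }

        // Owned by the receipt stakeholder.
        public class ReceiptLine
        {
            public string MaskedAccountNumber { get; set; } // "**** 1234": the privacy rule
            public decimal Amount { get; set; }
            public int LoyaltyPoints { get; set; }          // pulled from a different system
        }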

    Read the article

  • Certifications in the new Certify - March 2011 Update

    - by richard.miller
    The most up-to-date certifications are now available in Certify - New Additions March 2011! What's not yet available can still be found in Classic Certify. We think that the new search will save you a ton of time and energy, so try it out and let us know. NOTE: Not all cert information is in the new system. If you type in a product name and do not find it, send us feedback so we can find the team to add it! Also, we have been listening to every feedback message coming in. We have plans to make some improvements based on your feedback AND add the missing data. Thanks for your help! Japanese: 日本語

    Note: Oracle Fusion Middleware certifications are available via oracle.com: Fusion Middleware Certifications.

    Certifications viewable in the new Certify Search:
    - Enterprise PeopleTools Release 8.50 and Release 8.51 (Added March 2011!)
    - Oracle Database
    - Oracle Database Options
    - Oracle Database Clients (they apply to both 32-bit and 64-bit)
    - Oracle Beehive
    - Oracle Collaboration Suite
    - Oracle E-Business Suite, now with Release 11i & 12!
    - Oracle Siebel Customer Relationship Management (CRM)
    - Oracle Governance, Risk, and Compliance Management
    - Oracle Financial Services
    - Oracle Healthcare
    - Oracle Life Sciences
    - Oracle Enterprise Taxation Management
    - Oracle Retail
    - Oracle Utilities
    - Oracle Cross Applications
    - Oracle Primavera
    - Oracle Agile
    - Oracle Transportation Management (G-L)
    - Oracle Value Chain Planning
    - Oracle JD Edwards EnterpriseOne (NEW! Jan 2011) 8.9+ and SP23+
    - Oracle JD Edwards World (A7.3, A8.1, A9.1, and A9.2)

    Certifications viewable in Classic Certify (the "old" user interface; clicking the "Classic Certify" link from Certifications > QuickLinks will take you there):
    - Enterprise PeopleTools Release 8.49 (Coming Soon)
    - Enterprise Manager

    Other Resources: see the Tips and Tricks for the new Certify, watch the 4-minute introduction to the new Certify, or see how to get the most out of Certify with an advanced searching and features demo.

    Read the article

  • 12.04 Random sounds sometimes after startup

    - by Timothy Duane
    I'm not quite sure what the sound is, but it has only started happening in the last two days. The only way I can describe it is that it sounds like a wooden xylophone being tapped lightly. It will play at random intervals, ranging from 1 second to a couple of minutes apart, for up to a half hour (from what I have noticed) after startup. If anybody has any ideas as to what is causing this or how to fix it, I would appreciate it. System information: Computer: Toshiba Satellite L655-S5160; Audio driver: Conexant Audio Driver (v4.119.0.60; 07-14-2010; 41.89M)

    Read the article

  • Does an onboard video affect the X windows configuration?

    - by Timothy
    Does the onboard video on the motherboard affect the X windows configuration? My system has onboard and PCIe video. The onboard video is an NVIDIA GeForce 7025 GPU (on-board graphics, max. memory share up to 512MB, under OS by TurboCache). I have a PCIe dual-head video card installed with two monitors. The video card is a GeForce 8400 GS with 512MB memory. When installing Ubuntu 12.04, only one monitor worked. When pulling up System Settings > Displays, it shows a laptop; this is a desktop PC. I did get both monitors to work using the NVIDIA settings tool and TwinView, a complicated process! When checking the NVIDIA settings now, it shows the monitors disabled. The NVIDIA X Server Settings tool does show the GPU and all the information. I was thinking it's seeing the onboard video on the motherboard. Why else would it show a laptop?

    Read the article

  • Sound and Video Skips on Ubuntu 11.10 64 bit

    - by Timothy LaFontaine
    I performed a dist-upgrade from 11.04 to 11.10 and now I cannot listen to any music or sounds (log-on sound included), nor watch video, without it stopping, then catching up and stopping again (this happens with Flash and with .mp4 files in VLC). I did not have this issue with 11.04, and I have even just performed a fresh install of my system. I have tried reinstalling PulseAudio and removing the .pulse folder, but to no avail. Any help would be appreciated.

    Read the article

  • Ubuntu One and iPad

    - by Martin G Miller
    I have installed the Ubuntu One app on my iPad 3 running iOS 6. It runs, and it asked for access to my pictures folder on the iPad, which I granted. Now it just displays a splash screen for Ubuntu One but does not seem to be doing anything else. I can't figure out how to get it to see my regular Ubuntu One account, which I have had for the last few years and use regularly between the various Ubuntu computers I have.

    Read the article

  • Parent variable inheritance methods Unity3D/C#

    - by Timothy Williams
    I'm creating a system where there is a base "Hero" class and each hero inherits from that with their own stats and abilities. What I'm wondering is: how could I use a variable from one of the child scripts in the parent script (something like maxMP = MP), or call a function in a parent class that is specified in each child class (the parent's update calls alarms(), and each child class specifies what alarms() does)? Is this possible at all, or not? Thanks.
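
    This is what C#'s abstract and virtual members are for. A minimal sketch (Hero and the stat names mirror the question; FireMage and its values are assumptions):

        using UnityEngine;

        // Base class: holds the shared stat and declares the hooks each hero fills in.
        public abstract class Hero : MonoBehaviour
        {
            protected float maxMP;

            // Each child supplies its own MP; the base copies it into maxMP.
            protected abstract float MP { get; }

            // Declared here, defined per hero; the base Update can call it.
            protected abstract void Alarms();

            protected virtual void Start() { maxMP = MP; }
            protected virtual void Update() { Alarms(); }
        }

        public class FireMage : Hero
        {
            protected override float MP => 120f;   // this hero's stat

            protected override void Alarms()
            {
                // FireMage-specific timers and abilities go here.
            }
        }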

    Read the article

  • Keeping the camera from going through walls in a first person game in Unity?

    - by Timothy Williams
    I'm using a modified version of the standard Unity First Person Controller. At the moment when I stand near walls, the camera clips through and lets me see through the wall. I know about camera occlusion and have implemented it in 3rd person games, but I have no clue how I'd accomplish this in a first person game, since the camera doesn't move from the player at all. How do other people accomplish this?
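
    For reference, a common first-person mitigation (a sketch assuming a CharacterController-based player, not the only approach): shrink the camera's near clip plane so geometry right in front of the lens still renders, and keep the player capsule wide enough that walls can't get between it and the camera.

        using UnityEngine;

        // Attach to the first-person camera.
        public class FirstPersonCameraSetup : MonoBehaviour
        {
            void Start()
            {
                Camera cam = GetComponent<Camera>();
                cam.nearClipPlane = 0.01f;   // default 0.3 is far enough to poke through walls

                // Assumed: the camera is parented under a CharacterController-driven player.
                var controller = GetComponentInParent<CharacterController>();
                if (controller != null && controller.radius < 0.4f)
                    controller.radius = 0.4f; // wide enough to shield the camera from walls
            }
        }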

    Read the article

  • Ubuntu on a virtual machine

    - by Barry Miller
    I've installed VirtualBox and am trying to install Ubuntu 12.04 from a downloaded ISO. Everything is going fine, but I come to a choice that says no operating system is detected on this machine; what would you like to do? 1) Erase disk and install Ubuntu (this will erase any files on the disk), or 2) Something else (choose partition size, multiple partitions, etc.). Does the first option mean erase all files on the VIRTUAL DISK, NOT THE COMPUTER? Is it just talking about the virtual machine, or if I select this option will it erase my Windows operating system and other files on my hard drive?

    Read the article

  • Changes in Language Punctuation [closed]

    - by Wes Miller
    More social curiosity than actual programming question... (I got shot down for posting this on Stack Overflow. They sent me here; at least I hope here is where they meant.) Based on the few responses I got before the content police ran me off Stack Overflow, I should note that I am legally blind, and neatness and consistency in programming are my best friends. A thousand years ago when I took my first programming class (Fortran 66), and a mere 500 years ago when I took my first C and C++ classes, there were some pretty standard punctuation practices across languages. I saw them in Basic (shudder), PL/1, PL/AS, Rexx, even Pascal. OK, APL2 is not part of this discussion. Each language has its own peculiar punctuation: Pascal's periods, Fortran's comma-separated DO loops, almost everybody else's semicolons. As I learned it, each language also has KEYWORDS (if, for, do, while, until, etc.) which are set off by whitespace (or the left margin). Each language has functions, subroutines, or whatever they're called, some built-in, some user-coded. They were set off by function_name( parameters );, as in sqrt( x ) or rand( y );. Lately, there seems to be a new set of punctuation rules, especially in C++, where initializers get glued onto the end of variable declarations: int x(0); or auto_ptr p(new gizmo);. This usually, briefly, fools me into thinking someone is declaring a function prototype or using a function as an integer. Then "if" and "for" seem to have grown parens: if(true), for(;;), etc. Since when did keywords become functions? I realize some people think they ARE functions with iterators as parameters. But if "for" is a function, where did the arg-separating commas go? And finally, functions seem to have shed their parens: sqrt (2), select (...). I know, I know, loosening whitespace rules is good. Keep reading. Question: when did the old ways disappear and this new way come into vogue? Does anyone besides me find it irritating to read, now that the information the placement of punctuation used to convey is gone? I know full well that K&R put the { at the end of the "if" or "for" to save a byte here and there. Can't use that excuse here. Space as an excuse for loss of readability died as HDD space soared past 100 MiB. Your thoughts are solicited. If there is a good reason to do this, I'll gladly learn it, and maybe in another 50 years I'll get used to it. Of course it's good that compilers recognize these (IMHO) typos and keep right on going, but just because you CAN code it that way doesn't mean you HAVE to, right?

    Read the article

  • Animate from end frame of one animation to end frame of another Unity3d/C#

    - by Timothy Williams
    So I have two legacy FBX animations: Animation A and Animation B. What I'm looking to do is to be able to fade from A to B regardless of the current frame A is on. Using animation.CrossFade() will play A in reverse until it reaches frame 0, then play B forward. What I'm looking to do instead is blend from the current frame of A to the end frame of B, probably via some sort of lerp between the facial position in A and the facial position in the last frame of B. Does anyone know how I might be able to accomplish this, either via a built-in function or potentially a lerp of some sort?
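
    One possible sketch with the legacy Animation system (clip names are hypothetical): park B on its last frame on a higher layer, then fade its weight up, which blends the pose from wherever A currently is toward B's end frame.

        using UnityEngine;
        using System.Collections;

        public class BlendToEndFrame : MonoBehaviour
        {
            public Animation anim;
            public float fadeTime = 0.5f;

            public void FadeToEndOfB()
            {
                AnimationState b = anim["AnimationB"];
                b.layer = 1;                          // above A, so it overrides as weight rises
                b.wrapMode = WrapMode.ClampForever;   // hold the last frame instead of stopping
                b.normalizedTime = 1f;                // park B on its final frame
                b.speed = 0f;                         // keep it there
                b.weight = 0f;
                b.enabled = true;
                StartCoroutine(FadeIn(b));
            }

            IEnumerator FadeIn(AnimationState b)
            {
                for (float t = 0f; t < fadeTime; t += Time.deltaTime)
                {
                    b.weight = t / fadeTime;          // lerp from A's current pose to B's end pose
                    yield return null;
                }
                b.weight = 1f;
            }
        }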

    Read the article

  • Cannot ping any computers on LAN

    - by Timothy
    I haven't been able to find a straightforward answer on this yet. I'm hoping people here are able to help! Keep in mind that I'm a complete beginner at this; this is the first installation I've done on any Linux system ever, so please keep that in mind when answering this question. We are a complete Windows shop, using nothing but Microsoft products, but we are looking into the value of OpenStack; however, we have been having problems getting Ubuntu Server installed and speaking to the network. The machine is getting an IP address, which tells me that some sort of DHCP activity is working, but I'm not able to ping any computer on our network and not able to connect to the internet. Every time I try to ping I'm getting: Destination Host Unreachable. I've tried modifying the resolv.conf file with our static details to match my Windows 7 machine, still with no luck. I've even tried disabling the firewall on Ubuntu Server 11, and no luck. Any ideas? Please let me know if there is any information you need from the server and I'll post it up.
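
    For reference, a sketch of a static network configuration on Ubuntu Server of that era (all addresses are placeholders; match the subnet, gateway, and DNS of the working Windows 7 machine). Note that "Destination Host Unreachable" on the local subnet usually points at addressing or the netmask rather than DNS, so resolv.conf alone won't fix it:

        # /etc/network/interfaces
        auto eth0
        iface eth0 inet static
            address 192.168.1.50      # placeholder: pick a free address on your LAN
            netmask 255.255.255.0
            gateway 192.168.1.1

        # DNS on Ubuntu 11.x goes in /etc/resolv.conf, e.g.:
        #   nameserver 192.168.1.1
        # then restart networking:
        #   sudo /etc/init.d/networking restart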

    Read the article

  • Testing Workflows – Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx

    This is the second of two posts on some common strategies for approaching the job of writing tests. The previous post covered test-after workflows, whereas this one will focus on test-first. Each workflow presented is a method of attack for adding tests to a project. The more tools in your tool belt the better. So here is a partial list of some test-first methodologies.

    Ping Pong

    Ping Pong is a methodology commonly used in pair programming. One developer writes a new failing test. Then they hand the keyboard to their partner. The partner writes the production code to get the test passing. The partner then writes the next test before passing the keyboard back to the original developer. The reasoning behind this testing methodology is to facilitate pair programming. That is to say, this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. a low bus number).

    Test Blazer

    Test Blazing, in some respects, is also a pairing strategy. The developers don't work side by side on the same task at the same time. Instead, one developer is dedicated to writing tests at their own desk. They write failing test after failing test, never touching the production code. With these tests they are defining the specification for the system. The developer most familiar with the specifications would be assigned this task. The next day, or later the same day, another developer fetches the latest test suite. Their job is to write the production code to get those tests passing. Once all the tests pass, they fetch the latest version of the test project from source control to get the newer tests.

    This methodology has some of the benefits of pair programming, namely lowering the bus number. It can be a good way of adding an extra developer to a project without slowing it down too much. The production coder isn't slowed down writing tests. The tests are in a separate project from the production code, so there shouldn't be any merge conflicts despite two developers working on the same solution. This methodology is also a good test for the tests: can another developer figure out what the system should do just by reading the tests? That question will be answered as the production coder works their way through the test blazer's tests.

    Test Driven Development (TDD)

    TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes. There are strict rules for when you should be writing tests or production code. You start by writing a failing (red) test, then write the simplest production code possible to get the test passing (green), then you clean up the code (refactor). This is known as the red-green-refactor cycle.

    The goal of TDD isn't the creation of a suite of tests; however, that is an advantageous side effect. The real goal of TDD is to follow a practice that yields a better design. The practice is meant to push the design toward small, decoupled, modularized components. This is generally considered a better design than a large, highly coupled ball of mud. TDD accomplishes this through the refactoring cycle. Refactoring is only possible to do safely when tests are in place. In order to use TDD, developers must be trained in how to look for and repair code smells in the system. Through repairing these sections of smelly code (i.e. a refactoring) the design of the system emerges. For further information on TDD, I highly recommend the series "Is TDD Dead?". It discusses its pros and cons and when it is best used.

    Acceptance Test Driven Development (ATDD)

    Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, acceptance tests focus on the larger integrated environment. Acceptance tests usually correspond to user stories, which come directly from the customer. The unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer. ATDD generally uses the same tools as TDD; however, ATDD uses fewer mocks and test doubles. ATDD often complements TDD; they aren't competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests.

    Behaviour Driven Development (BDD)

    BDD is more about audience than workflow. BDD pushes the testing realm out towards the client. Developers, managers, and the client all work together to define the tests. Typically different tooling is used for BDD than for acceptance and unit testing. This is done because the audience is not just developers. Tools using the Gherkin family of languages allow test scenarios to be described in an English format. Other tools such as MSpec or FitNesse also strive for highly readable behaviour-driven test suites.

    Because these tests are public facing (viewable by people outside the development team), the terminology usually changes. You can't get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren't called tests. Usually they're called "examples", "behaviours", "scenarios", or "specifications". This may seem like a very subtle difference, but I've seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project. When you say we need to define the tests at the start of the project, many people will immediately give that a lower priority on the project schedule. But if you say we need to define the specification or behaviour of the system before we can start, you'll get more cooperation.

    Keep these test-first and test-after workflows in your tool belt. With them you'll be able to find new opportunities to apply them.
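
    To make the red-green vocabulary concrete, a minimal C# sketch of one TDD step (NUnit assumed; the BoundedStack kata is illustrative, not from the post):

        using NUnit.Framework;

        // Red: write the smallest failing test first.
        [TestFixture]
        public class StackTests
        {
            [Test]
            public void NewStack_IsEmpty()
            {
                var stack = new BoundedStack();
                Assert.IsTrue(stack.IsEmpty);
            }
        }

        // Green: the simplest production code that passes.
        // The refactor step then cleans this up before the next red test.
        public class BoundedStack
        {
            public bool IsEmpty => true;
        }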

    Read the article

  • Missing return statement when using .charAt [migrated]

    - by Timothy Butters
    I need to write code that returns the number of vowels in a word, but I keep getting an error asking for a missing return statement. Any solutions please? :3

        import java.util.*;

        public class vowels {
            public static void main(String[] args) {
                Scanner input = new Scanner(System.in);
                System.out.println("Please type your name.");
                String name = input.nextLine();
                System.out.println("Congratulations, your name has " + countVowels(name) + " vowels.");
            }

            public static int countVowels(String str) {
                int count = 0;
                for (int i = 0; i < str.length(); i++) {
                    // char c = str.charAt(i);
                    if (str.charAt(i) == 'a' || str.charAt(i) == 'e' || str.charAt(i) == 'o'
                            || str.charAt(i) == 'i' || str.charAt(i) == 'u')
                        count = count + 1;
                }
            }
        }
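
    For the record, the compiler is right: countVowels declares a return type of int, but no path returns a value. The minimal fix, keeping everything else as posted, is to return the counter after the loop:

        public static int countVowels(String str) {
            int count = 0;
            for (int i = 0; i < str.length(); i++) {
                char c = str.charAt(i);
                if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u')
                    count++;
            }
            return count; // the missing statement: every path now returns an int
        }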

    Read the article

  • Hide folder names or such?

    - by Miller
    Okay, I have a cPanel account with unmetered everything (I pay a bit per month), so I want to host my forum on it, etc. I have the domains, let's say: money.com, wordpress.com, forum.com. I'll have to put everything in different folders; for instance, money.com will be in /m/, wordpress.com in /w/, and forum.com in /forum/ or something. What I'm asking is: how do I hide the folder so it'll look like money.com/m/ is actually money.com? I need to hide the folder name the contents are in so I can host multiple sites; that way each site will look like it's the only site on the host, and I don't have to add a redirect to point it at the folder. Thanks guys, been trying for a while!

    Read the article

  • How to educate business managers on the complexity of adding new features? [duplicate]

    - by Derrick Miller
    We maintain a web application for a client who demands that new features be added at a breakneck pace. We've done our best to keep up with their demands, and as a result the code base has grown exponentially. There are now so many modules, subsystems, controllers, class libraries, unit tests, APIs, etc. that it's starting to take more time to work through all of the complexity each time we add a new feature. We've also had to pull additional people in on the project to take over things like QA and staging, so the lead developers can focus on developing. Unfortunately, the client is becoming angry that the cost of each new feature is going up. They seem to expect that we can add new features ad infinitum and the cost of each feature will remain linear. I have repeatedly tried to explain to them that it doesn't work that way, that the code base expands in a fractal manner as all these features are added. I've explained that the best way to keep the cost down is to be judicious about which new features are really needed. But they either don't understand, or they think I'm bullshitting them. They just sort of roll their eyes and get angry. They're all completely non-technical and have no idea what goes into writing software. Is there a way that I can explain this using business language, that might help them understand better? Are there any visualizations out there that illustrate the growth of a code base over time? Any other suggestions on dealing with this client?

    Read the article
