Search Results

Search found 7294 results on 292 pages for 'out parameters'.

Page 163/292

  • "Programming error" exceptions - Is my approach sound?

    - by Medo42
    I am currently trying to improve my use of exceptions, and found the important distinction between exceptions that signify programming errors (e.g. someone passed null as an argument, or called a method on an object after it was disposed) and those that signify a failure in the operation that is not the caller's fault (e.g. an I/O exception). As far as I understand, it makes little sense for an immediate caller to actually handle programming-error exceptions; the caller should instead ensure that the preconditions are met. Only "outer" exception handlers at task boundaries should catch them, so they can keep the system running if a task fails.

    In order to ensure that client code can cleanly catch "failure" exceptions without catching error exceptions by mistake, I now create my own exception classes for all failure exceptions and document them in the methods that throw them. I would make them checked exceptions in Java.

    Now I have a few questions. Before, I tried to document all exceptions that a method could throw, but that sometimes creates an unwieldy list that needs to be documented in every method up the call chain until you can show that the error won't happen. Instead, I now document the preconditions in the summary / parameter descriptions and don't even mention what happens if they are not met. The idea is that people should not try to catch these exceptions explicitly anyway, so there is no need to document their types. Would you agree that this is enough?

    Going further, do you think all preconditions even need to be documented for every method? For example, calling methods on IDisposable objects after calling Dispose is an error, but since IDisposable is such a widely used interface, can I just assume a programmer will know this? A similar case is with reference-type parameters where passing null makes no conceivable sense: should I document "non-null" anyway? IMO, documentation should only cover things that are not obvious, but I am not sure where "obvious" ends.
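
    A minimal Java sketch of the split described above (the class, method, and exception names are illustrative inventions, not from the post):

        // Failure exception: part of the documented contract, checked in Java.
        public class StorageUnavailableException extends Exception {
            public StorageUnavailableException(String message, Throwable cause) {
                super(message, cause);
            }
        }

        public class RecordStore {
            /**
             * Loads a record.
             *
             * @param id the record id; must not be null (precondition)
             * @throws StorageUnavailableException if the backing store cannot be reached
             */
            public String load(String id) throws StorageUnavailableException {
                // Programming error: unchecked, deliberately left out of the docs.
                if (id == null) {
                    throw new IllegalArgumentException("id must not be null");
                }
                try {
                    return readFromDisk(id);
                } catch (java.io.IOException e) {
                    // Failure: wrapped in the documented, catchable type.
                    throw new StorageUnavailableException("store unreachable", e);
                }
            }

            private String readFromDisk(String id) throws java.io.IOException {
                return "record:" + id; // stand-in for real I/O
            }
        }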

    Read the article

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibilities is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (like server, user, this setting, that setting etc.). Pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files, basically repeating the work for every environment.

    Personally, I am offended to have to perform repeatable, mundane tasks in this day and age when we have technologies to automate it all. So I devised a very simple procedure of abstracting the files into templates, stubbing env-specific values with parameters, and then wrote a simple Perl script that, given a template and an environment matrix with env-specific values for each param, produces the final file. This is nothing special, cutting-edge or revolutionary -- I am pretty sure that 20 years ago efficient places did their CM like that.

    However, that requires that changes are made at the template level and then distributed across different environments using the script, not by making changes in the final environment-specific files. This is where I am encountering resentment, as they feel "comfortable" doing it their old, manual, repeated-labor way. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes into my template from a specific file, which takes time and is grueling.

    So my question is how to go about selling my method, which makes things so much faster, in an environment that is resentful to change and where most things have to be done at the level of the least competent team member?
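
    For illustration, the whole approach fits in a few lines of Perl (a sketch with an invented @@PARAM@@ placeholder syntax and an inline matrix; the poster's actual script is not shown):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Environment matrix: one set of values per environment.
        my %values = (
            dev  => { SERVER => 'dev01.example.com',  USER => 'app_dev'  },
            prod => { SERVER => 'prod01.example.com', USER => 'app_prod' },
        );

        my ($template, $env) = @ARGV;   # e.g.: expand.pl app.conf.tmpl prod
        open my $in, '<', $template or die "cannot open $template: $!";
        while (my $line = <$in>) {
            $line =~ s/\@\@(\w+)\@\@/$values{$env}{$1}/g;  # @@SERVER@@ -> prod01...
            print $line;
        }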

    Read the article

  • 64kb limit on the size of MSMQ Multicast Messages

    - by John Breakwell
    When Windows 2003 came out, Microsoft introduced the ability to broadcast messages to any machines that were listening for them. All you had to do was send out a message on a particular port and IP address, and any client that had set up a Multicast queue with a matching port and IP address would get a copy. Since its introduction, there have been a couple of security vulnerabilities that needed to be removed:

    Microsoft Security Bulletin MS06-052 - Vulnerability in Pragmatic General Multicast (PGM) Could Allow Remote Code Execution (919007)
    Microsoft Security Bulletin MS08-036 - Vulnerabilities in Pragmatic General Multicast (PGM) could allow denial of service (950762)

    The second of these, MS08-036, was resolved through an undocumented change in functionality: a limit of 64kb was put on the maximum size of a message that could be broadcast using the Multicast method. Obviously this has caused a few problems for any existing MSMQ Multicast applications that expected to be able to send larger messages. A hotfix has been developed to resolve this problem: 961605 "FIX: Multicast messages larger than 64 kilobytes (KB) are not delivered as expected by using Message Queuing 3.0 after security update MS08-036 is installed". A registry change is required:

    1. Open the registry with Regedit.
    2. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RMCAST\Parameters\
    3. Create a DWord called MaxpacketSize.
    4. Set the value to the desired number of bytes. You can set it to a value between zero and 4MB; if you specify anything above 4MB, it will default to 64K.

    A reboot is needed after adding this value.
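
    The same change from a command prompt (a sketch using standard reg.exe syntax; 65536 is an example value in bytes):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\RMCAST\Parameters" /v MaxpacketSize /t REG_DWORD /d 65536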

    Read the article

  • How can I draw crisp per-pixel images with OpenGL ES on Android?

    - by Qasim
    I have made many Android applications and games in Java before, however I am very new to OpenGL ES. Using guides online, I have made simple things in OpenGL ES, including a simple triangle and a cube. I would like to make a 2D game with OpenGL ES, but what I've been doing isn't working quite so well: the images I draw aren't to scale, and no matter what guide I use, the image is always choppy and not the right size (I'm debugging on my Nexus S). How can I draw crisp, HD images to the screen with GL ES? (The original post includes a screenshot of the choppy result next to the actual image.) Here is how my texture is created:

        // get id
        int id = -1;
        gl.glGenTextures(1, texture, 0);
        id = texture[0];

        // get bitmap
        Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),
                R.drawable.ball);

        // parameters
        gl.glBindTexture(GL10.GL_TEXTURE_2D, id);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
        gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_REPLACE);
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);

        // crop image
        mCropWorkspace[0] = 0;
        mCropWorkspace[1] = height;
        mCropWorkspace[2] = width;
        mCropWorkspace[3] = -height;
        ((GL11) gl).glTexParameteriv(GL10.GL_TEXTURE_2D,
                GL11Ext.GL_TEXTURE_CROP_RECT_OES, mCropWorkspace, 0);
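
    A commonly suggested adjustment for 1:1 pixel rendering (a sketch, not a guaranteed fix: make both filters nearest-neighbour, then draw at the bitmap's native size; x and y are assumed screen coordinates):

        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_NEAREST);

        // With the crop rect set above, draw unscaled at integer coordinates.
        ((GL11Ext) gl).glDrawTexfOES(x, y, 0, bitmap.getWidth(), bitmap.getHeight());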

    Read the article

  • Unable to debug an encoded JavaScript?

    - by miles away
    I’m having some problems debugging an encoded JavaScript. The script I’m referring to is given in this link over here. The encoding here is simple: it works by shifting the Unicode values by whatever CodeKey was used during encoding. The code that does the decoding is given in plain text below:

        <script language="javascript">
        function dF(s){
            var s1=unescape(s.substr(0,s.length-1));
            var t='';
            for(i=0;i<s1.length;i++)
                t+=String.fromCharCode(s1.charCodeAt(i)-s.substr(s.length-1,1));
            document.write(unescape(t));
        }
        </script>

    I’m interested in knowing or understanding the values (e.g. s1, t). For example, when the value of i=0, what values would s1.charCodeAt(i) and s.substr(s.length-1,1) hold? The reason I’m doing this is to understand how a CodeKey function really works. I don’t see anything in the code above which tells it to decode on the basis of the CodeKey value. The only thing I can point to in the encoded text is the last character, which is set to 1, 2, 3 or 4 depending on the CodeKey selected during the encoding process. One can verify this using the link I have given above. To debug, I’m using the Firebug addon with the script running as localhost on my WAMP server. I’m able to put a breakpoint on the js using Firebug, but I’m unable to retrieve any of the user-defined parameters or functions I mentioned above. I want to know, in this context, what the best way to debug this encoded js would be.
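
    One way to see those values without single-stepping: a hedged rewrite of dF with logging added. Note that the last character of s is the CodeKey itself, which is why no separate key appears in the code:

        function dF(s) {
            var key = s.substr(s.length - 1, 1);          // trailing char: the CodeKey
            var s1 = unescape(s.substr(0, s.length - 1)); // everything before it
            var t = '';
            for (var i = 0; i < s1.length; i++) {
                // the subtraction coerces the key string to a number
                console.log(i, s1.charCodeAt(i), '-', key, '=', s1.charCodeAt(i) - key);
                t += String.fromCharCode(s1.charCodeAt(i) - key);
            }
            console.log('decoded:', unescape(t));
        }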

    Read the article

  • Dreaded SQLs

    - by lavanyadeepak
    We used to think that only a SQL statement without a WHERE clause is dangerous, since running one on a server impacts the entire table, like waving a magic wand. For that reason we should cultivate the habit of first writing the statement as a select, and then modifying the select portion into the update or delete. Within the T-SQL window, I would normally prefer the following first:

        select * from employee where empid in (4,5)

    and then, once I am satisfied with the results, I would go ahead with the following change:

        --select *
        delete from employee where empid in (4,5)

    Today I just discovered another coding horror. This would typically be applicable in a stored procedure, with respect to variable nomenclature. It is always desirable to have a suitable nomenclature for parameters, distinct from the column names and internal variables. This helps quicker debugging of stored procedures, besides enhancing readability. Otherwise, in a quick bout of enthusiasm, a statement like

        if (@CustomerID = @CustomerID)

    [when the latter is intended to denote the column name and there is a superfluous @ prepended] is always true, and zeroing in on the problem would be a little tricky. Had there been stronger nomenclature rules, debugging would have been more straightforward and simpler, right?
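
    A sketch of the convention being suggested (the p_ prefix is one illustrative choice, not a rule):

        -- Parameters carry a prefix, so they can never shadow a column name.
        create procedure dbo.DeactivateCustomer
            @p_CustomerID int
        as
        begin
            update dbo.Customer
            set Active = 0
            where CustomerID = @p_CustomerID;
            -- Mistyping @CustomerID here now raises "must declare the scalar
            -- variable" instead of silently comparing a column to itself.
        end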

    Read the article

  • Efficient path-finding in free space

    - by DeadMG
    I've got a game situated in space, and I'd like to issue movement orders, which requires pathfinding. Now, it's my understanding that A* and such mostly apply to trees, and not to empty space which does not have pathfinding nodes. I have some obstacles, which are currently expressed as fixed AABBs -- that is, there is no unbounded "terrain" obstacle. In addition, I expect most obstacles to be reasonably approximable as cubes or spheres. So I've been thinking of applying a much simpler pathfinding algorithm: simply cast a ray from the current position to the target position, and then I can get a list of obstacles using spatial partitioning relatively quickly. What I'm not so sure about is how to determine the part where the ordered unit manoeuvres around the obstacles. What I've been thinking so far is that I will simply use potential fields -- that is, all units will feel a strong repulsive force away from each other and a moderate force towards the desired point. This also has the advantage that to issue group orders, I can simply order a mid-level force towards another entity. But this obviously won't achieve the optimal solution. Will potential fields achieve a reasonable approximation given my parameters, or do I need another solution?
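
    A minimal C++ sketch of the potential-field update being described (2D for brevity; constants and names are invented and would need tuning):

        #include <algorithm>
        #include <cmath>
        #include <vector>

        struct Vec2 { float x, y; };

        static Vec2 sub(Vec2 a, Vec2 b)    { return { a.x - b.x, a.y - b.y }; }
        static Vec2 add(Vec2 a, Vec2 b)    { return { a.x + b.x, a.y + b.y }; }
        static Vec2 scale(Vec2 a, float s) { return { a.x * s, a.y * s }; }
        static float length(Vec2 a)        { return std::sqrt(a.x * a.x + a.y * a.y); }

        // Moderate pull toward the goal plus inverse-square repulsion from
        // obstacles and other units.
        Vec2 steeringForce(Vec2 pos, Vec2 goal, const std::vector<Vec2>& repellers)
        {
            Vec2 force = scale(sub(goal, pos), 0.5f);                 // attraction
            for (Vec2 r : repellers) {
                Vec2 away = sub(pos, r);
                float d = std::max(length(away), 0.1f);               // no divide-by-zero
                force = add(force, scale(away, 50.0f / (d * d * d))); // repulsion ~ 1/d^2
            }
            return force;
        }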

    Read the article

  • Box2D `ApplyLinearImpulse` is not working whereas `SetLinearVelocity` works

    - by Narek
    I need to mimic jumping behavior for the player in my game. The player consists of two fixtures with circle and rectangle shapes. The rectangle I use to detect ground, and it is a sensor. At some point, to jump, I do this:

        float impulseY = body->GetMass() * PLAYER_JUMPING_VEOCITY / PTM_RATIO
                         * std::sin(PLAYER_JUMPING_ANGLE * PI / 180);
        body->ApplyLinearImpulse(b2Vec2(0, impulseY), body->GetWorldCenter(), true);

    and the player does not jump. But when I do this:

        body->SetLinearVelocity(b2Vec2(0, PLAYER_JUMPING_VEOCITY / PTM_RATIO
                                          * std::sin(PLAYER_JUMPING_ANGLE * PI / 180)));

    my player jumps. Also, when I change the rectangle shape to be a normal (not sensor) shape, it works again. Why? Just in case, here are the parameters of my rectangular sensor:

        b2PolygonShape boxShape;
        boxShape.SetAsBox(width * 0.5/2/PTM_RATIO, height * 0.2/2/PTM_RATIO,
                          b2Vec2(0, -height * 0.4 /PTM_RATIO), 0);

        b2FixtureDef boxFixtureDef;
        boxFixtureDef.friction = 0;
        boxFixtureDef.restitution = 0;
        boxFixtureDef.density = 1;
        boxFixtureDef.isSensor = true;
        boxFixtureDef.userData = static_cast<void*>(PLAYER_GROUP);
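
    For reference, the two calls are related by impulse = mass × Δv, so an impulse version that matches SetLinearVelocity even when the body already has vertical speed would look roughly like this (a sketch, assuming the same constants as above):

        float vy = PLAYER_JUMPING_VEOCITY / PTM_RATIO
                   * std::sin(PLAYER_JUMPING_ANGLE * PI / 180);
        // Cancel any existing vertical velocity, then add the jump velocity.
        b2Vec2 impulse(0, body->GetMass() * (vy - body->GetLinearVelocity().y));
        body->ApplyLinearImpulse(impulse, body->GetWorldCenter(), true);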

    Read the article

  • Preseeded installation keeps asking for confirmation while creating RAID partitions on a certain hardware platform

    - by Marc Shennon
    I am aware of the partman-md/confirm_nooverwrite thing that was the solution to most of these problems in the past. The thing is that the preseed file works for almost all hardware platforms I tested; only on one (Primergy MX130) does it keep asking for confirmation before writing the partition layout to the disks. All machines I tested are running with two SATA disks, nothing special. I'm not really sure what information could be needed in order to investigate the cause of this behaviour, but I would of course be willing to provide more information if someone has an idea. The relevant part of the preseed file is the following:

        d-i partman-auto/disk string /dev/sda /dev/sdb
        d-i partman-auto/method string raid
        d-i partman-md/confirm boolean true
        d-i partman-partitioning/confirm_write_new_label boolean true
        d-i partman-md/device_remove_md boolean true
        d-i partman/choose_partition select finish
        d-i partman-md/confirm_nooverwrite boolean true
        # Write the changes to disks?
        d-i partman/confirm boolean true
        d-i mdadm/boot_degraded boolean true

        # RECIPE
        # Next you need to specify the physical partitions that will be used.
        d-i partman-auto/expert_recipe string \
            multiraid :: \
                500 10000 1000000000 raid $lvmignore{ } \
                    $primary{ } \
                    method{ raid } \
                . \
                512 1000 786 raid $lvmignore{ } \
                    $primary{ } \
                    method{ raid } \
                . \
                8192 10240 10240 raid $lvmignore{ } \
                    method{ raid } \
                .

        # Parameters are:
        # <raidtype> <devcount> <sparecount> <fstype> <mountpoint> <devices> <sparedevices>
        d-i partman-auto-raid/recipe string \
            1 2 0 ext4 /     /dev/sda1#/dev/sdb1 . \
            1 2 0 ext2 /boot /dev/sda2#/dev/sdb2 . \
            1 2 0 swap -     /dev/sda5#/dev/sdb5 .

    Read the article

  • Looking for a better Factory pattern (Java)

    - by Sam Goldberg
    After doing a rough sketch of a high-level object model, I am doing iterative TDD, and letting the other objects emerge as a refactoring of the code (as it increases in complexity). (That whole approach may be a discussion/argument for another day.) In any case, I am at the point where I am looking to refactor code currently in if-else blocks into separate objects, because another value combination creates a new set of logical sub-branches.

    To be more specific, this is a trading system feature where buy orders have different behavior than sell orders. Responses to the orders have a numeric indicator field which describes some event that occurred (e.g. fill, cancel). The combination of this numeric indicator field, plus whether it is a buy or sell, requires different processing by the code. Creating a family of objects to separate the code for the unique handling of each combination of the 2 fields seems like a good choice at this point.

    The way I would normally do this is to create some Factory object which, when called with the 2 relevant parameters (indicator, buysell), would return the correct subclass of the object. Sometimes I do this pattern with a map, which allows looking up a live instance (or a constructor to use via reflection), and sometimes I just hard-code the cases in the Factory class. For some reason this feels like not-good design (e.g. one object which knows all the subclasses of an interface or parent object), and a bit clumsy. Is there a better pattern for solving this kind of problem? And if this factory method approach makes sense, can anyone suggest a nicer design?
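
    One variation that keeps the factory ignorant of its subclasses is a self-registering map (a Java sketch; the handler names and indicator values are invented):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Supplier;

        interface OrderResponseHandler { void handle(); }

        final class HandlerRegistry {
            private static final Map<String, Supplier<OrderResponseHandler>> REGISTRY =
                    new HashMap<>();

            static void register(int indicator, char buySell,
                                 Supplier<OrderResponseHandler> ctor) {
                REGISTRY.put(indicator + ":" + buySell, ctor);
            }

            static OrderResponseHandler create(int indicator, char buySell) {
                Supplier<OrderResponseHandler> ctor = REGISTRY.get(indicator + ":" + buySell);
                if (ctor == null) {
                    throw new IllegalArgumentException("no handler for " + indicator + buySell);
                }
                return ctor.get();
            }
        }

        // Each subclass registers itself; note the class must be loaded
        // (e.g. listed once at startup) before its key is first requested.
        class BuyFillHandler implements OrderResponseHandler {
            static { HandlerRegistry.register(1, 'B', BuyFillHandler::new); }
            public void handle() { /* buy-fill-specific logic */ }
        }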

    Read the article

  • Want Google to index redirect urls

    - by Dave Goten
    I'm having issues with users who think that Google Search is the address bar. Some of the sites that link to my site use user-friendly addresses with 301 redirects to pages that have less friendly URLs. So, for example, if I enter www.foo.com/bar it goes to www.bar.com/page.php?some-parameters-and-utm-codes-etc. Usually this is done by a 301 redirect in order to keep the SEO from foo.com on bar.com and so on, which I believe is standard practice. However, lately more and more people have been searching for www.foo.com/bar instead of going to www.foo.com/bar directly, and because the page /bar is nothing more than a redirect, it has no SEO that I know of. Things I've thought of but haven't been able to test (because Google takes forever to update :) and I'm lazy like that) include:

    - Using Google sitemaps and having them enter their redirects as entries there. (I could see this working if they were the top search entry all the time, and it might appear as a sitelink, but I don't know if that'll make the URL itself show up in searches.)
    - Using Canonical tags on my pages pointing to the redirects they set up. This is a nightmare in itself because of the nature of my pages: one week www.foo.com/bar might go to www.bar.com/pageA.php, the next it might go to www.bar.com/pageB.php, and having to remember to take the canonical tag off of pageA so that it doesn't get confused with pageB would be a pain.
    - Using 302 redirects -.-

    So I guess the question here is: does anyone have any experience or knowledge about this? What should I do to make www.foo.com/bar show up when someone 'searches' for this redirect URL?

    Read the article

  • Questioning one of the arguments for dependency injection: Why is creating an object graph hard?

    - by oberlies
    Dependency injection frameworks like Google Guice give the following motivation for their usage (source):

        To construct an object, you first build its dependencies. But to build each
        dependency, you need its dependencies, and so on. So when you build an
        object, you really need to build an object graph. Building object graphs by
        hand is labour intensive (...) and makes testing difficult.

    But I don't buy this argument: even without dependency injection, I can write classes which are both easy to instantiate and convenient to test. E.g. the example from the Guice motivation page could be rewritten in the following way:

        class BillingService {
            private final CreditCardProcessor processor;
            private final TransactionLog transactionLog;

            // constructor for tests, taking all collaborators as parameters
            BillingService(CreditCardProcessor processor, TransactionLog transactionLog) {
                this.processor = processor;
                this.transactionLog = transactionLog;
            }

            // constructor for production, calling the (productive) constructors
            // of the collaborators
            public BillingService() {
                this(new PaypalCreditCardProcessor(), new DatabaseTransactionLog());
            }

            public Receipt chargeOrder(PizzaOrder order, CreditCard creditCard) { ... }
        }

    So there may be other arguments for dependency injection (which are out of scope for this question!), but easy creation of testable object graphs is not one of them, is it?

    Read the article

  • Design Pattern for Skipping Steps in a Wizard

    - by Eric J.
    I'm designing a flexible Wizard system that presents a number of screens to complete a task. Some screens may need to be skipped based on answers to prompts on one or more previous screens. The conditions to skip a given screen need to be editable by a non-technical user via a UI. Multiple conditions need only be combined with "and". I have an initial design in mind, but it feels inelegant. I wonder if there's a better way to approach this class of problem.

    Initial design (UI): the first column allows the user to select a question from a previous screen; the second column allows the user to select an operator applicable to the type of question asked; the third column allows the user to enter one or more values depending on the selected operator.

    Object model:

        public enum Operations { ... }

        public class Condition
        {
            int QuestionId { get; set; }
            Operations Operation { get; set; }
            List<object> Parameters { get; private set; }
        }

        List<Condition> pageSkipConditions;

    Controller logic:

        bool allConditionsTrue = pageSkipConditions.Count > 0;
        foreach (Condition c in pageSkipConditions)
        {
            allConditionsTrue &= Evaluate(previousAnswers, c);
        }
        // ...

        private bool Evaluate(List<Answers> previousAnswers, Condition c)
        {
            switch (c.Operation)
            {
                case Operations.StartsWith:
                    // logic for this operation
                    // etc.
            }
        }
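
    One way to keep the Evaluate switch from growing with every operator is a dictionary of small strategies (a hedged C# sketch; the operator set and string-typed answers are assumptions):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public enum Operations { StartsWith, EqualTo, Contains }

        public static class OperationEvaluators
        {
            // Adding an operator means adding one entry, not another case label.
            private static readonly Dictionary<Operations, Func<string, List<object>, bool>>
                Evaluators = new Dictionary<Operations, Func<string, List<object>, bool>>
                {
                    { Operations.StartsWith, (answer, ps) => answer.StartsWith((string)ps[0]) },
                    { Operations.EqualTo,    (answer, ps) => answer == (string)ps[0] },
                    { Operations.Contains,   (answer, ps) => ps.Cast<string>().Contains(answer) },
                };

            public static bool Evaluate(Operations op, string answer, List<object> parameters)
            {
                return Evaluators[op](answer, parameters);
            }
        }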

    Read the article

  • Does Google Analytics exclude Campaign traffic from Facebook in the Social reports?

    - by user1612223
    For a while we have used campaign tags when putting posts on Facebook so that we can run campaign reports in Google Analytics on those links. However, it appears that traffic from those links is being excluded from Google's Social reports. For example, between 7/20 and 8/19 I'm seeing 123 Visits where Facebook is the source in my Campaigns report, but only 29 Visits where Facebook is the source in my Social Sources report. Main questions: Does Google exclude campaign traffic from its social reports? If it does, is there any way to reconcile that so that the traffic shows up in both reports? If it doesn't, what could be causing the vast discrepancy? One observer noted that we are setting the Medium to "Post" when passing the campaign parameters, and that Google may only allow "Referral" traffic in its social reports (just speculation). In that case we could potentially change the Medium to "Referral", but that would undermine some of our strategy in being able to set different mediums. I have also considered that maybe the campaign traffic came to the site several times, and the social report may count the same user as fewer visits; however, over 70% of the Facebook campaign traffic is new traffic, so at a minimum there would need to be over 85 Visits on the social side for that argument to be valid. I've done several searches for any information on this topic and haven't run across much of anything. I did post the same question on Google's Product Forum and have not gotten a response; the title of that question was 'Facebook Campaign Traffic Not Showing in Social Reports'. The inability to pass campaign data on Facebook posts would make evaluating the performance of those specific posts very difficult, so I'm hoping there is a solution to this.
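
    For reference, the kind of tagged link in question (parameter values invented):

        http://www.example.com/landing-page?utm_source=facebook&utm_medium=post&utm_campaign=august_promo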

    Read the article

  • How To Export/Import a Website in IIS 7.x

    - by Tray Harrison
    IIS 6 had a great feature called 'Save Configuration to a File' which would allow you to easily export a website's configuration, to be later used to import either on the same server or another box. This came in handy anytime you wanted to duplicate a site in order to do some testing without impacting the existing application. So naturally, Microsoft decided to do away with this feature in IIS 7. The process to export/import a site is still fairly simple, though not as obvious as it was in previous versions. Here are the steps:

    1. Open a command prompt, navigate to C:\Windows\System32\inetsrv, and run the following command:

        appcmd list site /name:<sitename> /config /xml > C:\output.xml

    So if you wanted to export a website named EAC, you would run:

        appcmd list site /name:EAC /config /xml > C:\output.xml

    2. If you'll be setting up another copy of the site on the same server, you'll now need to edit the output.xml file before importing it. This is necessary in order to avoid conflicts such as bindings, Site ID, etc. To do this, edit the XML and change the values. Go ahead and make a copy of the home directory, and rename it to whatever folder name you specified in the output -- /EAC2 in this example. If you decide to change the app pool, make sure you go ahead and create the new app pool as well.

    3. Once these edits have been made, we are ready to import the site. To do that, run:

        appcmd add sites /in < C:\output.xml

    That's it. You should now see your site listed when opening up Inet Manager. If for some reason the site fails to start, that's probably because you forgot to create the new app pool or there is a problem with one of the other parameters you changed. Look at the System log to identify any issues like this.

    Read the article

  • How can I design my classes to include calendar events stored in a database?

    - by Gianluca78
    I'm developing a web calendar in PHP (using Symfony2), inspired by iCal, for a project of mine. At the moment I have two classes: a class "Calendar" and a class "CalendarCell". Here are the two classes' properties and method declarations:

        class Calendar
        {
            private $month;
            private $monthName;
            private $year;
            private $calendarCellList = array();
            private $translator;

            public function __construct($month, $year, $translator) {}
            public function getCalendarCellList() {}
            public function getMonth() {}
            public function getMonthName() {}
            public function getNextMonth() {}
            public function getNextYear() {}
            public function getPreviousMonth() {}
            public function getPreviousYear() {}
            public function getYear() {}
            private function calculateDaysPreviousMonth() {}
            private function calculateNumericDayOfTheFirstDayOfTheWeek() {}
            private function isCurrentDay(\DateTime $dateTime) {}
            private function isDifferentMonth(\DateTime $dateTime) {}
        }

        class CalendarCell
        {
            private $day;
            private $month;
            private $dayNameAbbreviation;
            private $numericDayOfTheWeek;
            private $isCurrentDay;
            private $isDifferentMonth;
            private $translator;

            public function __construct(array $parameters) {}
            public function getDay() {}
            public function getMonth() {}
            public function getDayNameAbbreviation() {}
            public function isCurrentDay() {}
            public function isDifferentMonth() {}
        }

    Each calendar day can include many calendar events (such as appointments or schedules) stored in a database. My question is: what is the best way to manage these calendar events in my classes? I am thinking of adding an eventList property to CalendarCell and populating it with an array of CalendarEvent objects fetched from the database. This kind of solution doesn't allow other coders to reuse the classes without a db (because I would have to inject at least a repository service) just to create and visualize a calendar... so maybe it would be better to extend CalendarCell (for instance into CalendarCellEvent) and add the database features there? I feel like I'm missing some crucial design pattern! Any suggestion will be very appreciated!
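
    One option the question hints at, sketched as an interface so the calendar never sees the database (names are invented; a repository-backed provider would simply be a second implementation):

        interface CalendarEventProviderInterface
        {
            /** @return array of CalendarEvent objects occurring on the given day */
            public function getEventsForDay(\DateTime $day);
        }

        class InMemoryEventProvider implements CalendarEventProviderInterface
        {
            private $events = array();

            public function add(\DateTime $day, $event)
            {
                $this->events[$day->format('Y-m-d')][] = $event;
            }

            public function getEventsForDay(\DateTime $day)
            {
                $key = $day->format('Y-m-d');
                return isset($this->events[$key]) ? $this->events[$key] : array();
            }
        }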

    Read the article

  • Modular Architecture for Processing Pipeline

    - by anjruu
    I am trying to design the architecture of a system that I will be implementing in C++, and I was wondering if people could think of a good approach, or critique the approach that I have designed so far. First of all, the general problem is an image processing pipeline. It contains several stages, and the goal is to design a highly modular solution, so that any of the stages can be easily swapped out and replaced with a piece of custom code (so that the user can have a speed increase if s/he knows that a certain stage is constrained in a certain way in his or her problem). The current thinking is something like this:

        struct output; /* Contains the output values from the pipeline. */

        class input_routines {
        public:
            virtual foo stage1(...) {...}
            virtual bar stage2(...) {...}
            virtual qux stage3(...) {...}
            ...
        };

        output pipeline(input_routines stages);

    This would allow people to subclass input_routines and override whichever stage they wanted. That said, I've worked in systems like this before, and I find the subclassing and the default stuff tends to get messy and can be difficult to use, so I'm not giddy about writing one myself. I was also thinking about a more STLish approach, where the different stages (there are 6 or 7) would be defaulted template parameters. Can anyone offer a critique of the pattern above, thoughts on the template approach, or any other architecture that comes to mind?
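
    For comparison, a hedged sketch of the template approach mentioned at the end (int stands in for the real image type; the stage types are invented):

        // Each stage is a callable object; any of them can be swapped per call site.
        struct DefaultStage1 { int operator()(int img) const { return img + 1; } };
        struct DefaultStage2 { int operator()(int img) const { return img * 2; } };

        template <typename Stage1 = DefaultStage1,
                  typename Stage2 = DefaultStage2>
        int pipeline(int input, Stage1 s1 = Stage1(), Stage2 s2 = Stage2())
        {
            return s2(s1(input)); // stages compose left to right
        }

        // Usage: pipeline(img) runs the defaults;
        // pipeline<DefaultStage1, MyFastStage2>(img) swaps only stage 2.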

    Read the article

  • Do all mods simply alter game files? [on hold]

    - by Starkers
    When you install some mods, you drag certain files into your game directory and replace the existing files. Other mods, though, come with an installer where you can set parameters first. Does the installer then go and automatically replace certain files? At the end of the day, is that all the installation of any mod is? Is the installation of a mod simply the replacement of certain files inside the game's root directory? Do mods exist which don't fit the above statement -- that install outside the game's root? Why do they do this? All the mods I can think of do just replace certain files inside the game's root. However, I know Team Fortress was spawned from a multiplayer Half-Life 1 mod. Do you reckon that mod installed files outside the root to enable multiplayer via a network for a single-player game? How rare are these mods? Or do they not even exist? Do even extensive mods make all their changes inside the root?

    Read the article

  • What is a 'good number' of exceptions to implement for my library?

    - by Fuzz
    I've always wondered how many different exception classes I should implement and throw for various pieces of my software. My particular development is usually C++/C#/Java related, but I believe this is a question for all languages. I want to understand what a good number of different exceptions to throw is, and what the developer community expects of a good library. The trade-offs I see include:

    - More exception classes can allow very fine-grained levels of error handling for API users (prone to user configuration or data errors, or files not being found)
    - More exception classes allow error-specific information to be embedded in the exception, rather than just a string message or error code
    - More exception classes can mean more code maintenance
    - More exception classes can mean the API is less approachable to users

    The scenarios I wish to understand exception usage in include:

    - During a 'configuration' stage, which might include loading files or setting parameters
    - During an 'operation' type phase where the library might be running tasks and doing some work, perhaps in another thread

    Other patterns of error reporting without using exceptions, or with fewer exceptions (as a comparison), might include:

    - Fewer exceptions, but embedding an error code that can be used as a lookup
    - Returning error codes and flags directly from functions (sometimes not possible from threads)
    - An event or callback system upon error (avoids stack unwinding)

    As developers, what do you prefer to see? If there are MANY exceptions, do you bother handling them separately anyway? Do you have a preference for error handling types depending on the stage of operation?
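
    As a concrete reference for the first of those alternatives, a single exception type carrying a lookup code might look like this (a Java sketch; the code values are invented):

        public class LibraryException extends Exception {
            public enum Code { CONFIG_FILE_MISSING, PARAMETER_INVALID, TASK_FAILED }

            private final Code code;

            public LibraryException(Code code, String message) {
                super(message);
                this.code = code;
            }

            /** Callers branch on a stable code instead of one class per failure. */
            public Code getCode() { return code; }
        }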

    Read the article

  • How should I start refactoring my mostly-procedural C++ application?

    - by oob
    We have a program written in C++ that is mostly procedural, but we do use some C++ containers from the standard library (vector, map, list, etc). We are constantly making changes to this code, so I wouldn't call it a stagnant piece of legacy code that we can just wrap up. There are a lot of issues with this code making it harder and harder for us to make changes, but I see the three biggest issues being:

    1. Many of the functions do more (way more) than one thing
    2. We violate the DRY principle left and right
    3. We have global variables and global state up the wazoo

    I was thinking we should attack areas 1 and 2 first. Along the way, we can "de-globalize" our smaller functions from the bottom up, by passing information that is currently global as parameters from the higher-level functions to the lower-level functions, and then concentrate on removing the need for global variables as much as possible.

    I just finished reading Code Complete 2 and The Pragmatic Programmer, and I learned a lot, but I am feeling overwhelmed. I would like to implement unit testing, change from a procedural to an OO approach, automate testing, use a better logging system, fully validate all input, implement better error handling and many other things, but I know that if we started all this at once, we would screw ourselves. I am thinking the three I listed are the most important to start with. Any suggestions are welcome.

    We are a team of two programmers, mostly with experience with in-house scripting. It is going to be hard to justify taking the time to refactor, especially if we can't bill the time to a client. Believe it or not, this project has been successful enough to keep us busy full time and also keep several consultants busy using it for client work.
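
    The "de-globalize from the bottom up" step, as a minimal C++ sketch (names invented):

        #include <string>

        // Before: the function silently read mutable globals.
        //   std::string g_serverUrl; int g_retryCount;
        //   bool sendReport(const std::string& payload);

        struct ReportConfig {
            std::string serverUrl;
            int retryCount;
        };

        // After: dependencies are explicit parameters, so callers (and unit
        // tests) can supply any configuration without touching global state.
        bool sendReport(const ReportConfig& config, const std::string& payload)
        {
            for (int attempt = 0; attempt < config.retryCount; ++attempt) {
                // ... try delivering payload to config.serverUrl ...
            }
            return false; // placeholder: no delivery attempted in this sketch
        }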

    Read the article

  • Generating HTML Help files based on XML documentation

    - by geekrutherford
    Since discovering the XML commenting features built into .NET years ago, I have been using them to help make my code more readable and simpler for other developers to understand exactly what the code is doing. Entering /// preceding a line of code causes Visual Studio to insert "summary" tags. It also results in additional tags being generated if you are commenting a method with parameters and a return type.

    I already knew that IntelliSense would pick up these comments and display them when coding and selecting properties, methods, etc. from a class. I also knew that you could set Visual Studio to generate an XML file containing said comments. Only recently did I begin to wonder if I could generate some kind of readable help files based on these comments I so diligently added.

    After searching the web I came across NDoc, an open source project which creates documentation for you based on the XML files generated by Visual Studio. Unfortunately, NDoc has become stale and is no longer supported (the last release was back in 2005). Fortunately there is a little-known tool from Microsoft themselves called "Sandcastle Help File Builder". This nifty little tool gives you a graphical interface that allows you to specify multiple DLL and XML files from which to generate an MSDN-like HTML Help File for your own projects! You can check it out here: http://shfb.codeplex.com/

    If you are curious how to set Visual Studio to generate the above reference XML documentation files, simply go to your project's property page and enable the "XML documentation file" option on the Build tab (the original post shows a screenshot; the paths there are specific, and you can leave yours at the default values).
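
    For reference, the kind of comment block in question (a small C# sketch):

        /// <summary>
        /// Converts a temperature from Celsius to Fahrenheit.
        /// </summary>
        /// <param name="celsius">The temperature in degrees Celsius.</param>
        /// <returns>The same temperature in degrees Fahrenheit.</returns>
        public static double ToFahrenheit(double celsius)
        {
            return celsius * 9.0 / 5.0 + 32.0;
        }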

    Read the article

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world:

    - Draw all opaque objects, front-to-back.
    - Draw all alpha-blended objects, back-to-front.

    Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the above two rules, there's little to be done in regards to batching. I see one possibility to batch while still adhering to the above two rules: opaque objects can still be drawn out of depth order, because drawing them front-to-back is merely a fillrate optimization, and state changes may very well be far more expensive than the overdraw caused by drawing out of depth order. However, non-opaque objects, those that require alpha blending at least, must be drawn back-to-front in order to avoid rendering artifacts. Is the loss of the fillrate optimization for opaques worth the state-batching optimization?
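
    One common way to express that trade-off is a packed per-draw sort key, where state outranks depth for opaques and depth outranks state for blended draws (a hedged sketch; the bit split is arbitrary):

        #include <cstdint>

        // Opaque: batch first (material/texture in the high bits),
        // rough front-to-back order second.
        uint64_t opaqueKey(uint32_t materialId, uint32_t depth)
        {
            return (uint64_t(materialId) << 32) | depth;
        }

        // Blended: strict back-to-front first (inverted depth in the high
        // bits); state only batches when depth order happens to allow it.
        uint64_t blendedKey(uint32_t materialId, uint32_t depth)
        {
            return (uint64_t(0xFFFFFFFFu - depth) << 32) | materialId;
        }

        // Sorting all draws by key ascending then yields both orders at once.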

    Read the article

  • Ubuntu 12.04 Fails to Boot after Reboot

    - by Joe
    I have installed Ubuntu 12.04 on several Dell C6220 servers. The install was successful and all hardware is recognized. The problem I am running into is that when issuing the reboot command, or when pressing Ctrl-Alt-Del, the server shuts down but never comes back up. Instead, the fan revs up to full speed and stays that way until I power the server down. Once the server has been powered down via the power button, Ubuntu will boot just fine -- until the next reboot. I have found that rebooting the server via the DRAC web interface will reboot the server correctly. I have also found that this problem does not exist with CentOS -- I can press Ctrl-Alt-Del all day long and it always comes back up. I've tried several kernel parameters, such as:

        reboot=bios
        reboot=pci
        reboot=acpi
        reboot=cold
        acpi=off
        noapic

    Nothing seems to work. I have also tried upgrading to kernel 3.4, but no change there either. Has anyone run into a similar problem, or any pointers on troubleshooting? Thanks, Joe
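
    For anyone reproducing the tests, kernel parameters like these are typically added via GRUB (standard Ubuntu procedure; reboot=pci shown as an example):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash reboot=pci"

        # then apply and test:
        sudo update-grub
        sudo reboot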

    Read the article

  • Particle trajectory smoothing: where to do the simulation?

    - by nkint
    I have a particle system in which particles move toward a target, and new targets are received via the network. The list of new targets consists of noisy coordinates of a moving target stored on the server, which I want to smooth on the client. For the smoothing and the particles I wrote a simple particle engine with a standard Euler integration model. So, my pseudo code is something like this:

        # pseudo code
        class Particle:
            def update(self):
                # do euler motion model integration:
                # if the distance to the target is more than a limit,
                # add a new force to the acceleration, seeking the target;
                # then add the acceleration to velocity
                # and velocity to the position
                positionHistory.push_back(position)
                if history.length > historySize:
                    history.pop_front()

        class ParticleEngine:
            particleById = dict()  # an associative array where the keys are the id
                                   # and particle instances are stored as values

            # called each time a new tcp packet is received and parsed
            def setNetTarget(self, id, new_target):
                particleById[id].setNewTarget(new_target)

            # called each new frame
            def draw(self):
                for p in particleById.values:
                    p.update()
                    beginVertex(LINE_STRIP)
                    for v in p.positionHistory:
                        vertex(v.x, v.y)
                    endVertex()

    The new targets that arrive are noisy, but setting some acceleration/velocity parameters lets the particles have smoothed trajectories. But if a particle's trajectory is a circle, after a while the particle position converges to the center (a normal behaviour of the Euler integration model). So I decided to change the simulation and use some other interpolation (spline?) or smoothing method (Kalman filter?) between the targets. Something like:

        switch(INTERPOLATION_MODEL):
            case EULER_MOTION: ...
            case HERMITE_INTERPOLATION: ...
            case SPLINE_INTERPOLATION: ...
            case KALMAN_FILTER_SMOOTHING: ...

    Now my question: where should I put the motion simulation / trajectory interpolation? In the Particle, so that I have subclasses like ParticleEuler, ParticleSpline, ParticleKalman, etc.? Or in the particle engine?
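
    A third option, sketched below in the same pseudo-Python (names invented): keep one Particle class and inject a small motion-strategy object, so neither Particle nor ParticleEngine grows a switch or a subclass per model.

        class EulerMotion:
            def step(self, particle, target, dt):
                # accelerate toward the target, damp, integrate
                ax = (target[0] - particle.x) * 0.5
                ay = (target[1] - particle.y) * 0.5
                particle.vx = (particle.vx + ax * dt) * 0.95
                particle.vy = (particle.vy + ay * dt) * 0.95
                particle.x += particle.vx * dt
                particle.y += particle.vy * dt

        class Particle:
            def __init__(self, motion):
                self.motion = motion   # EulerMotion(), SplineMotion(), KalmanMotion()...
                self.x = self.y = self.vx = self.vy = 0.0

            def update(self, target, dt):
                self.motion.step(self, target, dt)

        # p = Particle(EulerMotion())  # swap the strategy without touching Particle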

    Read the article

  • How to indicate to a web server the language of a resource

    - by Nik M
    I'm writing an HTTP API for a publishing server, and I want resources with representations in multiple languages. A user whose client GETs a resource which has Korean, Japanese and Trad. Chinese representations, and sends Accept-Language: en, ja;q=0.7, should get the Japanese. One resource, identified by one URI, will therefore have a number of different language representations. This seems to me like a totally orthodox use of content negotiation and multiple resource representations. But when each translator comes to provide these alternate language representations to the server, what's the correct way to tell the server which language to store the representation under? I'm having the translators PUT the representation in its entirety to the same URI, but I can't find out how to do this elegantly. Content-Language is a response header, and none of the request headers seem to fit the bill. It seems my options are:

    1. Invent a new request header
    2. Supply additional metadata in a multipart/related document
    3. Provide language as a parameter to the Content-Type of the request, like Content-Type: text/html;language=en

    I don't want to get into the business of extending HTTP, and I don't feel great about bundling extra metadata into the representation. Neither approach seems friendly to HTTP caches either. So option 3 seems like the best way that I can think of, but even then it's decidedly non-standard to put my own specific parameters on a very well established content type. Is there any by-the-book way of achieving this?
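
    Worth checking against RFC 2616 §14.12: Content-Language is defined there as an entity header rather than a response-only header, so it may legitimately accompany a PUT. On the wire that would look something like this (URI and body are invented):

        PUT /articles/welcome HTTP/1.1
        Host: publishing.example.com
        Content-Type: text/html; charset=utf-8
        Content-Language: ja
        Content-Length: 29

        <p lang="ja">ようこそ</p>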

    Read the article
