Search Results

Search found 8692 results on 348 pages for 'patterns practices'.


  • Abstract Design Pattern implementation

    - by Pathachiever11
    I started learning design patterns a while ago (only covered Facade and Abstract so far, but am enjoying it). I'm looking to apply the Abstract pattern to a problem I have: supporting various database systems using one abstract class and a set of methods and properties, which the underlying concrete classes (inheriting from the abstract class) then implement. I have created a DatabaseWrapper abstract class and SqlClientData and MSAccessData concrete classes that inherit from DatabaseWrapper. However, I'm still a bit confused about how the pattern goes as far as using these classes from the Client. Would I do the following?

        DatabaseWrapper sqlClient = new SqlClientData(connectionString);

    This is what I saw in an example, but that is not what I'm looking for, because I want to encapsulate the concrete classes; I only want the Client to use the abstract class. This is so I can support more database systems in the future with minimal changes to the Client, creating only a new concrete class for each implementation. I'm still learning, so there might be a lot of things wrong here. Please tell me how I can encapsulate all the concrete classes, and whether there is anything wrong with my approach. Many thanks! PS: I'm very excited to get into software architecture, but am still a beginner, so take it easy on me. :)
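    A minimal sketch of one way to get the encapsulation being asked for - a static factory in front of the hierarchy, with hypothetical constructors and provider names - so the Client never names a concrete class:

        using System;

        public abstract class DatabaseWrapper
        {
            public abstract void Open();
            public abstract void Close();
        }

        // internal: the concrete types are invisible outside the data-access assembly
        internal class SqlClientData : DatabaseWrapper
        {
            private readonly string _connectionString;
            public SqlClientData(string connectionString) { _connectionString = connectionString; }
            public override void Open() { /* open a SQL Server connection */ }
            public override void Close() { /* close it */ }
        }

        internal class MSAccessData : DatabaseWrapper
        {
            private readonly string _connectionString;
            public MSAccessData(string connectionString) { _connectionString = connectionString; }
            public override void Open() { /* open an Access connection */ }
            public override void Close() { /* close it */ }
        }

        public static class DatabaseFactory
        {
            // The Client calls this and never references a concrete class.
            public static DatabaseWrapper Create(string provider, string connectionString)
            {
                switch (provider)
                {
                    case "SqlClient": return new SqlClientData(connectionString);
                    case "MSAccess":  return new MSAccessData(connectionString);
                    default: throw new ArgumentException("Unknown provider: " + provider);
                }
            }
        }

    Client code then reads DatabaseWrapper db = DatabaseFactory.Create("SqlClient", connectionString); and supporting a new database means adding one concrete class and one case to the factory.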


  • How to get java singleton object manager to return any type of object?

    - by Robert
    I'm writing an interactive fiction game in Java from scratch. I'm currently storing all of my game object references in a HashMap in a singleton called ObjectManager. ObjectManager has a function called get which takes an integer ID and returns the appropriate reference. The problem is that it returns a BaseObject when I need it to return subclasses of BaseObject with more functionality. So, what I've done so far is add a getEntity function which returns BaseEntity (which is a subclass of BaseObject). However, whenever I need the function to return an object that is a subclass of BaseEntity with added, required functionality, I will need to write yet another function. I know there is a better way, but I don't know what it is. I know very little of design patterns, and I'm not sure which one to use here. I tried passing 'class' as a parameter, but that didn't get me anywhere.

        public BaseObject get(int ID){
            return (BaseObject)refMap.get(ID);
        }

        public BaseEntity getEntity(int ID){
            return (BaseEntity)refMap.get(ID);
        }

    Thanks, Java ninjas!
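    A minimal sketch of the usual fix, assuming refMap is a Map<Integer, BaseObject> and BaseObject/BaseEntity are the asker's classes: accept a Class<T> type token so one generic method serves every subclass. This is the idiom the asker was reaching for when passing 'class' as a parameter.

        import java.util.HashMap;
        import java.util.Map;

        public class ObjectManager {
            private final Map<Integer, BaseObject> refMap = new HashMap<Integer, BaseObject>();

            // One generic accessor replaces get(), getEntity(), and any future variants.
            public <T extends BaseObject> T get(int id, Class<T> type) {
                return type.cast(refMap.get(id)); // ClassCastException if the ID maps to something else
            }
        }

    Usage: BaseEntity e = manager.get(42, BaseEntity.class); - no per-subclass getter needed.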


  • New Book! SQL Server 2012 Integration Services Design Patterns!

    - by andyleonard
    SQL Server 2012 Integration Services Design Patterns has been released! The book is done and available thanks to the hard work and dedication of a great crew: Michelle Ufford ( Blog | @sqlfool ) – co-author Jessica M. Moss ( Blog | @jessicammoss ) – co-author Tim Mitchell ( Blog | @tim_mitchell ) – co-author Matt Masson ( Blog | @mattmasson ) – co-author Donald Farmer ( Blog | @donalddotfarmer ) – foreword David Stein ( Blog | @made2mentor ) – technical editing Mark Powers – editing Jonathan Gennick...(read more)


  • Is it possible (and practical) to search a string for arbitrary-length repeating patterns?

    - by blz
    I've recently developed a huge interest in cryptography, and I'm exploring some of the weaknesses of ECB-mode block ciphers. A common attack scenario involves encrypted cookies, whose fields can be represented as (relatively) short hex strings. Up until now, I've relied on my eyes to pick out repeating blocks, but this is rather tedious. I'm wondering what kind of algorithms (if any) could help me automate my search for repeating patterns within a string. Can anybody point me in the right direction?
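    For the ECB case specifically, a minimal sketch of one standard approach, assuming a hex-encoded ciphertext and a 16-byte (AES) block size: split the string into block-sized chunks and count duplicates - any repeated block is a strong ECB indicator.

        from collections import Counter

        def repeated_blocks(hex_string, block_size=16):
            """Count duplicate cipher blocks in a hex-encoded ciphertext."""
            data = bytes.fromhex(hex_string)
            blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
            counts = Counter(blocks)
            # Keep only blocks that occur more than once.
            return {block.hex(): n for block, n in counts.items() if n > 1}

        # Two identical 16-byte blocks back to back, then a different one:
        ct = ("00112233445566778899aabbccddeeff" * 2) + "ffeeddccbbaa99887766554433221100"
        print(repeated_blocks(ct))  # {'00112233445566778899aabbccddeeff': 2}

    For truly arbitrary-length repeats, suffix trees or suffix arrays find all repeated substrings in linear time; but for ECB the fixed block size makes the chunk-and-count approach sufficient.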


  • What is best practice as far as using perl-isms (idiomatic expressions) in Perl?

    - by DVK
    A couple of years back I participated in writing the best-practices/coding-style guide for our (fairly large and often Perl-using) company. It was done by a committee of "senior" Perl developers. As with anything done by consensus, it had parts which everyone disagreed with. Duh. The part that rubbed people the wrong way the most was a strong recommendation NOT to use many Perlisms (loosely defined as code idioms not present in, say, C++ or Java), such as "Avoid using '... unless X;' constructs". The main rationale posited for rules like this one was that non-Perl developers would otherwise have a much harder time with the Perl code base. The assumption here, I guess, is that Perl code jockeys are a rarer breed overall - and among new hires to the company - than non-Perlers. I was wondering whether SO has any good arguments to support or reject this logic... it is mostly academic curiosity at this point, as the company's Perl coding standard is ossified and will never be revised again as far as I'm aware. P.S. Just to be clear, the question is in the context I noted - the answer for an all-Perl smaller development shop is obviously a resounding "use Perl to its maximum capability".
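    For readers who don't write Perl, a small illustration of the kind of idiom the rule bans - the two forms below are equivalent, but only the first reads like other C-family languages:

        # explicit, C-style conditional
        if ( !defined $user ) {
            die "no user";
        }

        # idiomatic Perl: statement modifier
        die "no user" unless defined $user;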


  • How to populate a generic list of objects in C# from SQL database

    - by developr
    I am just learning ASP.NET/C# and trying to incorporate best practices into my applications. Everything that I read says to layer my applications into DAL, BLL, UI, etc., based on separation of concerns. Instead of passing DataTables around, I am thinking about using custom objects so that I am loosely coupled to my data layer and can take advantage of IntelliSense in VS. I assume these objects would be considered DTOs? First, where do these objects reside in my layers? BLL, DAL, other? Second, when populating from SQL, should I loop through a data reader to populate the list, or first fill a data table and then loop through the table to populate the list? I know you should close the database connection as soon as possible, but it seems like even more overhead to populate the data table and then loop through that for the list. Third, everything I see these days says use Linq2SQL. I am planning to learn Linq2SQL, but at this time I am working with a legacy database that doesn't have foreign keys set up, and I don't have the ability to fix that at the moment. Also, I want to learn more about C# before I start getting into ORM solutions like NHibernate. At the same time, I don't want to type out all the connection and SQL plumbing for every query. Is it OK to use the Enterprise Library Data Access Application Block (DAAB) for now?
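    On the second question, a minimal sketch of the reader-loop approach - no intermediate DataTable - assuming a hypothetical Product DTO, table, and connection string:

        using System.Collections.Generic;
        using System.Data.SqlClient;

        public class Product            // simple DTO; typically lives in a shared/domain layer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class ProductRepository
        {
            public static List<Product> GetProducts(string connectionString)
            {
                var products = new List<Product>();
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("SELECT Id, Name FROM Products", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            products.Add(new Product
                            {
                                Id = reader.GetInt32(0),
                                Name = reader.GetString(1)
                            });
                        }
                    } // reader, command, and connection are all disposed here
                }
                return products;
            }
        }

    The connection is released as soon as the loop finishes, and nothing is materialized twice, which is why reader-to-list is generally preferred over filling a DataTable first.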


  • Logic inside an enum

    - by Vivin Paliath
    My colleagues and I were having a discussion regarding logic in enums. My personal preference is to not have any sort of logic in Java enums (although Java provides the ability to do that). The discussion in this case centered around having a convenience method inside the enum that returns a map:

        public enum PackageType {
            Letter("01", "Letter"),
            ..
            ..
            Tube("02", "Packaging Tube");

            private String packageCode;
            private String packageDescription;
            ..
            ..
            public static Map<String, String> toMap() {
                Map<String, String> map = new LinkedHashMap<String, String>();
                for (PackageType packageType : PackageType.values()) {
                    map.put(packageType.getPackageCode(), packageType.getPackageDescription());
                }
                return map;
            }
        }

    My personal preference is to pull this out into a service. The argument for having the method inside the enum centered around convenience: you don't have to go to a service to get it, but can query the enum directly. My argument centered around separation of concerns and abstracting any kind of logic out to a service. I didn't think "convenience" was a strong argument for putting this method inside an enum. From a best-practices perspective, which one is better? Or does it simply come down to a matter of personal preference and code style?


  • Help improving a simple assembly function

    - by MPelletier
    I just handed in this function in an assignment. It is done (hence no homework tag). But I would like to see how this can be improved. Essentially, the function sums the squares of all the integers between 1 and the given number, using the following formula:

        n(n+1)(2n+1)/6

    where n is the maximum number. The function below is made to catch any overflow and return 0 should any occur.

        UInt32 sumSquares(const UInt32 number)
        {
            int result = 0;
            __asm
            {
                mov eax, number    //move number in eax
                mov edx, 2         //move 2 in edx
                mul edx            //multiply (2n)
                jo  end            //jump to end if overflow
                add eax, 1         //addition (2n+1)
                jo  end            //jump to end if overflow
                mov ecx, eax       //move (2n+1) in ecx
                mov ebx, number    //move number in ebx
                add ebx, 1         //addition (n+1)
                jo  end            //jump to end if overflow
                mov eax, number    //move number in eax for multiplication
                mul ebx            //multiply n(n+1)
                jo  end            //jump to end if overflow
                mul ecx            //multiply n(n+1)(2n+1)
                jo  end            //jump to end if overflow
                mov ebx, 6         //move 6 in ebx
                div ebx            //divide by 6, the result will be in eax
                mov result, eax    //move eax in result
            end:
            }
            return result;
        }

    Basically, I want to know what I can improve in there, in terms of best practices mostly. One thing sounds obvious: a smarter overflow check (a single check against whatever maximum input would cause an overflow).
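    On that last point, a minimal sketch of the single-check idea, assuming UInt32 is a 32-bit unsigned typedef: precompute the largest n whose result still fits in 32 bits and do the arithmetic with a 64-bit intermediate, so one comparison replaces all five jo checks.

        typedef unsigned int UInt32;   // assumption: 32-bit unsigned

        UInt32 sumSquares(const UInt32 number)
        {
            // 2343 * 2344 * 4687 / 6 = 4,290,161,084 fits in 32 bits;
            // 2344 * 2345 * 4689 / 6 = 4,295,655,420 exceeds 2^32 - 1.
            if (number > 2343)
                return 0;
            unsigned long long n = number;   // 64-bit intermediate cannot overflow for n <= 2343
            return (UInt32)(n * (n + 1) * (2 * n + 1) / 6);
        }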


  • Database structure and source control - best practice

    - by Paddy
    Background

    I came from several years working in a company where all the database objects were stored in source control, one file per object. We had a list of all the objects that was maintained when new items were added (to allow us to run scripts in order and handle dependencies) and a VB script that ran to create one big script for running against the database. All the tables were 'create if not exists' and all the SPs etc. were drop-and-recreate. Fast-forward to the present: I am now working in a place where the database is the master and there is no source control for DB objects, but we do use Red Gate's tools (SQL Compare) for updating our production database, which is very handy and requires little work.

    Question

    How do you handle your DB objects? I like to have them under source control (and, as we're using Git, I'd like to be able to handle merge conflicts in the scripts, rather than in the DB), but I'm going to be pressed to get past the ease of using SQL Compare to update the database. I don't really want us updating scripts in Git and then using SQL Compare to update the production database from our dev DB, as I'd rather have 'one version of the truth', but I don't really want to get into re-writing a custom bit of software to bundle the whole lot of scripts together. I think that Visual Studio Database Edition may do something similar to this, but I'm not sure if we will have the budget for it. I'm sure that this has been asked to death, but I can't find anything that seems to quite have the answer I'm looking for. Similar to this, but not quite the same: http://stackoverflow.com/questions/340614/what-are-the-best-practices-for-database-scripts-under-code-control


  • Proper way to implement IXmlSerializable?

    - by Greg
    Once a programmer decides to implement IXmlSerializable, what are the rules and best practices for implementing it? I've heard that GetSchema() should return null and ReadXml should move to the next element before returning. Are these true? And what about WriteXml: should it write a root element for the object, or is it assumed that the root is already written? How should child objects be treated and written? Here's a sample of what I have now. I'll update it as I get good responses.

        public class Calendar : IEnumerable<Gvent>, IXmlSerializable
        {
            public XmlSchema GetSchema() { return null; }

            public void ReadXml(XmlReader reader)
            {
                if (reader.MoveToContent() == XmlNodeType.Element && reader.LocalName == "Calendar")
                {
                    _Name = reader["Name"];
                    _Enabled = Boolean.Parse(reader["Enabled"]);
                    _Color = Color.FromArgb(Int32.Parse(reader["Color"]));
                    if (reader.ReadToDescendant("Event"))
                    {
                        while (reader.MoveToContent() == XmlNodeType.Element && reader.LocalName == "Event")
                        {
                            var evt = new Event();
                            evt.ReadXml(reader);
                            _Events.Add(evt);
                        }
                    }
                    reader.Read();
                }
            }

            public void WriteXml(XmlWriter writer)
            {
                writer.WriteAttributeString("Name", _Name);
                writer.WriteAttributeString("Enabled", _Enabled.ToString());
                writer.WriteAttributeString("Color", _Color.ToArgb().ToString());
                foreach (var evt in _Events)
                {
                    writer.WriteStartElement("Event");
                    evt.WriteXml(writer);
                    writer.WriteEndElement();
                }
            }
        }

        public class Event : IXmlSerializable
        {
            public XmlSchema GetSchema() { return null; }

            public void ReadXml(XmlReader reader)
            {
                if (reader.MoveToContent() == XmlNodeType.Element && reader.LocalName == "Event")
                {
                    _Title = reader["Title"];
                    _Start = DateTime.FromBinary(Int64.Parse(reader["Start"]));
                    _Stop = DateTime.FromBinary(Int64.Parse(reader["Stop"]));
                    reader.Read();
                }
            }

            public void WriteXml(XmlWriter writer)
            {
                writer.WriteAttributeString("Title", _Title);
                writer.WriteAttributeString("Start", _Start.ToBinary().ToString());
                writer.WriteAttributeString("Stop", _Stop.ToBinary().ToString());
            }
        }


  • Nested dereferencing arrows in Perl: to omit or not to omit?

    - by DVK
    In Perl, when you have a nested data structure, it is permissible to omit dereferencing arrows at the 2nd and deeper levels of nesting. In other words, the following two syntaxes are identical:

        my $hash_ref = { 1 => [ 11, 12, 13 ], 3 => [31, 32] };
        my $elem1 = $hash_ref->{1}->[1];
        my $elem2 = $hash_ref->{1}[1];    # exactly the same as above

    Now, my question is, is there a good reason to choose one style over the other? It seems to be a popular bone of stylistic contention (just on SO, I accidentally bumped into this and this in the space of 5 minutes). So far, none of the usual suspects says anything definitive: perldoc merely says "you are free to omit the pointer dereferencing arrow". Conway's "Perl Best Practices" says "whenever possible, dereference with arrows", but it appears to apply only to dereferencing the main reference, not the optional arrows at the 2nd level of nested data structures. "Mastering Perl for Bioinformatics" author James Tisdall doesn't give a very solid preference either: "The sharp-witted reader may have noticed that we seem to be omitting arrow operators between array subscripts. (After all, these are anonymous arrays of anonymous arrays of anonymous arrays, etc., so shouldn't they be written $array->[$i]->[$j]->[$k]?) Perl allows this; only the arrow operator between the variable name and the first array subscript is required. It makes things easier on the eyes and helps avoid carpal tunnel syndrome. On the other hand, you may prefer to keep the dereferencing arrows in place, to make it clear you are dealing with references. Your choice." Personally, I'm on the side of "always put the arrows in, since it's more readable and obvious you're dealing with a reference".


  • Correct OOP design without getters?

    - by kane77
    I recently read that getters/setters are evil, and I have to say it makes sense. Yet when I started learning OOP, one of the first things I learned was "encapsulate your fields", so I learned to create a class, give it some fields, create getters and setters for them, and create a constructor where I initialize these fields. And every time some other class needs to manipulate this object (or, for instance, display it), I pass it the object and it manipulates it using the getters/setters. I can see problems with this approach. But how do you do it right? For instance, displaying/rendering an object that is a "data" class - let's say Person, which has a name and a date of birth. Should the class have a method for displaying the object, to which some Renderer would be passed as an argument? Wouldn't that violate the principle that a class should have only one purpose (in this case, storing state), so it should not care about the presentation of the object? Can you suggest some good resources where best practices in OOP design are presented? I'm planning to start a project in my spare time and I want it to be my learning project in correct OOP design.
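    A minimal illustration of the trade-off being asked about, with hypothetical Person and Renderer types: the first style exposes state via getters, the second keeps the fields private and pushes behavior to the object ("tell, don't ask").

        import java.time.LocalDate;

        // Style 1: the outside world pulls state out through getters.
        public final class Person {
            private final String name;
            private final LocalDate dateOfBirth;
            public Person(String name, LocalDate dateOfBirth) {
                this.name = name;
                this.dateOfBirth = dateOfBirth;
            }
            public String getName() { return name; }
            public LocalDate getDateOfBirth() { return dateOfBirth; }
        }

        // Style 2: the object renders itself onto an abstraction it is handed.
        interface Renderer {
            void text(String label, String value);
        }

        final class SelfRenderingPerson {
            private final String name;
            private final LocalDate dateOfBirth;
            SelfRenderingPerson(String name, LocalDate dateOfBirth) {
                this.name = name;
                this.dateOfBirth = dateOfBirth;
            }
            void renderOn(Renderer r) {   // no getters: state never leaves the object
                r.text("Name", name);
                r.text("Born", dateOfBirth.toString());
            }
        }

    Whether renderOn violates the single-purpose principle is exactly the contention: the class knows that it can be rendered, but the Renderer interface keeps it ignorant of how.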


  • Software development metrics and reporting

    - by David M
    I've had some interesting conversations recently about software development metrics, in particular how they can be used in a reasonably large organisation to help development teams work better. I know there have been Stack Overflow questions about which metrics are good to use - like this one, but my question is more about which metrics are useful to which stakeholders, and at what level of aggregation. As an example, my view is that code coverage is a useful metric in the following ways (and maybe others):

      - For a team's own internal use, when combined with other measurements.
      - For facilitating/enabling/mentoring teams, where it might be instructive when considered on a team-by-team basis as a trend (e.g. if teams A and B have coverage this month of 75 and 50, I'd be more concerned with team A than B if the previous month they'd had 80 and 40).
      - For senior management, when presented as an aggregated statistic across a number of teams or a whole department.

    But I don't think it's useful for senior management to see this on a team-by-team basis, as this encourages artificial attempts to bolster coverage with tests that merely exercise, rather than test, code. I'm in an organisation with a couple of levels in its management hierarchy, but where the vast majority of managers are technically minded and able (with many still getting their hands dirty). Some of the development teams are leading the way in driving towards agile development practices, but others lag, and there is now a serious mandate from the top for this to be the way the organisation works. A couple of us are starting a programme to encourage this. In this sort of an organisation, what sort of metrics do you think are useful, to whom, why, and at what level of aggregation? I don't want people to feel their performance is being assessed based on a metric that they can artificially influence; at the same time, the senior management are going to want some sort of evidence that progress is being made. What advice or caveats can you provide based on experience in your own organisations?

    EDIT: We definitely want to use metrics as a tool for organisational improvement, not as a tool for individual performance measurement.


  • REST API Best practice: How to accept as input a list of parameter values

    - by whatupwilly
    Hi All, We are launching a new REST API and I wanted some community input on best practices around how we should format input parameters. Right now, our API is very JSON-centric (it only returns JSON); whether we want/need to return XML is a separate issue. As our API output is JSON-centric, we have been going down a path where our inputs are a bit JSON-centric too, and I've been thinking that may be convenient for some but weird in general. For example, to get a few product details where multiple products can be pulled at once, we currently have:

        http://our.api.com/Product?id=["101404","7267261"]

    Should we simplify this as:

        http://our.api.com/Product?id=101404,7267261

    Or is having JSON input handy? More of a pain? We may want to accept both styles, but does that flexibility actually cause more confusion and headaches (maintainability, documentation, etc.)? A more complex case is when we want to offer more complex inputs. For example, if we want to allow multiple filters on search:

        http://our.api.com/Search?term=pumas&filters={"productType":["Clothing","Bags"],"color":["Black","Red"]}

    We don't necessarily want to put the filter types (e.g. productType and color) as request parameter names, like this:

        http://our.api.com/Search?term=pumas&productType=["Clothing","Bags"]&color=["Black","Red"]

    because we wanted to group all filter input together. In the end, does this really matter? It may well be that there are so many JSON utils out there that the input type just doesn't matter that much. I know our JavaScript clients making AJAX calls to the API may appreciate the JSON inputs to make their life easier. Thanks, Will


  • Why doesn't every class in the .Net framework have a corresponding interface?

    - by Thorsten Lorenz
    Since I started to develop in a test/behavior driven style, I have appreciated the ability to mock out every dependency. Since mocking frameworks like Moq work best when told to mock an interface, I now implement an interface for almost every class I create, because most likely I will have to mock it out in a test eventually. Well, programming to an interface is good practice anyway. At times, my classes take dependencies on .NET classes (e.g. FileSystemWatcher, DispatcherTimer). It would be great in that case to have an interface, so I could depend on an IDispatcherTimer instead, to be able to pass it a mock and simulate its behavior to see if my system under test reacts correctly. Unfortunately, both of the above-mentioned classes do not implement such interfaces, so I have to resort to creating adapters, which do nothing else but inherit from the original class and conform to an interface that I can then use. Here is such an adapter for the DispatcherTimer and the corresponding interface:

        using System;
        using System.Windows.Threading;

        public interface IDispatcherTimer
        {
            #region Events
            event EventHandler Tick;
            #endregion

            #region Properties
            Dispatcher Dispatcher { get; }
            TimeSpan Interval { get; set; }
            bool IsEnabled { get; set; }
            object Tag { get; set; }
            #endregion

            #region Public Methods
            void Start();
            void Stop();
            #endregion
        }

        /// <summary>
        /// Adapts the DispatcherTimer class to implement the <see cref="IDispatcherTimer"/> interface.
        /// </summary>
        public class DispatcherTimerAdapter : DispatcherTimer, IDispatcherTimer
        {
        }

    Although this is not the end of the world, I wonder why the .NET developers didn't take the minute to make their classes implement these interfaces from the get-go. It puzzles me especially since there is now a big push for good practices from inside Microsoft. Does anyone have any (maybe inside) information on why this contradiction exists?


  • Database design for invoices, invoice lines & revisions

    - by FreshCode
    I'm designing the 2nd major iteration of a relational database for a franchise's CRM (with lots of refactoring) and I need help on the best database design practices for storing job invoices and invoice lines with a strong audit trail of any changes made to each invoice.

    Current schema

        Invoices Table
            InvoiceId (int)              // Primary key
            JobId (int)
            StatusId (tinyint)           // Pending, Paid or Deleted
            UserId (int)                 // auditing user
            Reference (nvarchar(256))    // unique natural string key with invoice number
            Date (datetime)
            Comments (nvarchar(MAX))

        InvoiceLines Table
            LineId (int)                 // Primary key
            InvoiceId (int)              // related to Invoices above
            Quantity (decimal(9,4))
            Title (nvarchar(512))
            Comment (nvarchar(512))
            UnitPrice (smallmoney)

    Revision schema

        InvoiceRevisions Table
            RevisionId (int)             // Primary key
            InvoiceId (int)
            JobId (int)
            StatusId (tinyint)           // Pending, Paid or Deleted
            UserId (int)                 // auditing user
            Reference (nvarchar(256))    // unique natural string key with invoice number
            Date (datetime)
            Total (smallmoney)

    Schema design considerations

    1. Is it sensible to store an invoice's Paid or Pending status? All payments received for an invoice are stored in a Payments table (e.g. Cash, Credit Card, Cheque, Bank Deposit). Is it meaningful to store a "Paid" status in the Invoices table if all the income related to a given job's invoices can be inferred from the Payments table?

    2. How to keep track of invoice line item revisions? I can track revisions to an invoice by storing status changes along with the invoice total and the auditing user in an invoice revision table (see InvoiceRevisions above), but keeping track of an invoice line revision table feels hard to maintain. Thoughts?

    3. Tax: how should I incorporate sales tax (or 14% VAT in SA) when storing invoice data?
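    On consideration 2, one possible shape - sketched here as an assumption, simply mirroring the InvoiceRevisions pattern at line level, with hypothetical names:

        -- One row per change to a line; the current state stays in InvoiceLines.
        CREATE TABLE InvoiceLineRevisions (
            LineRevisionId int IDENTITY(1,1) PRIMARY KEY,
            LineId         int          NOT NULL,  -- the line being revised
            InvoiceId      int          NOT NULL,
            UserId         int          NOT NULL,  -- auditing user
            RevisionDate   datetime     NOT NULL DEFAULT GETDATE(),
            Quantity       decimal(9,4) NOT NULL,
            Title          nvarchar(512),
            Comment        nvarchar(512),
            UnitPrice      smallmoney   NOT NULL
        );

    A trigger or the saving code inserts a snapshot row here on every line update or delete, which keeps the audit trail append-only and the live tables small.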


  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules to these numbers (involving regex matching, pattern matching against data from many tables in the database, and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.

      - The Data Layer would access the database.
      - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
      - The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking about this design, I can see that most of the business rules could instead be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:

    Pros:
      - The business logic runs close to the data, meaning less network traffic.
      - All data is processed at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
      - Scattering of the business logic: some part is here, some part is there.
      - A questionable design solution; we may encounter unknown problems.
      - It is difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?


  • Reading component parameters and setting defaults

    - by donut
    I'm pulling my hair out on this because it should be simple, but I can't get it to do the right thing. What's the best-practice way of doing the following? I've created a custom component that extends <s:Label>, so the label now has the additional properties color2, color3, and value, which can be set like this and are used in the skin:

        <s:CustomLabel text="Some text" color2="0x939202" color3="0x999999" value="4.5" />

    If I don't supply these parameters, I'd like some defaults to be set. So far I've had some success, but it doesn't work 100% of the time, which leads me to think that I'm not following best practices when setting those defaults. I do it now like this:

        [Bindable] private var myColor2:uint = 0x000000;
        [Bindable] private var myColor3:uint = 0x000000;
        [Bindable] private var myValue:Number = 10.0;

    Then, in the init function, I do a variation of this to set the default:

        myValue = hostComponent.value;
        myValue = (hostComponent.value) ? hostComponent.value : 4.5;

    Sometimes it works, sometimes it doesn't, depending on the type of variable I'm trying to set. I eventually decided to read them as Strings and then convert them to the desired type, but it seems that this also works only half the time.


  • Rails: Create method available in all views and all models

    - by smotchkkiss
    I'd like to define a method that is available in both my views and my models. Say I have a view helper:

        def foo(s)
          "hello #{s}"
        end

    A view might use the helper like this:

        <div class="data"><%= foo(@user.name) %></div>

    However, this <div> will be updated with a repeating AJAX call. I'm using a to_json call in a controller that returns data like so:

        render :text => @item.to_json(:only => [...], :methods => [:foo])

    This means that I have to have foo defined in my Item model as well:

        class Item
          def foo
            "hello #{name}"
          end
        end

    It'd be nice if I could have a DRY method that could be shared by both my views and my models. Usage might look like this:

    Helper:

        def say_hello(s)
          "hello #{s}"
        end

    User.rb model:

        def foo
          say_hello(name)
        end

    Item.rb model:

        def foo
          say_hello(label)
        end

    View:

        <div class="data"><%= item.foo %></div>

    Controller:

        def observe
          @items = Item.find(...)
          render :text => @items.to_json(:only=>[...], :methods=>[:foo])
        end

    IF I'M DUMB, please let me know. I don't know the best way to handle this, but I don't want to completely go against best practices here. If you can think of a better way, I'm eager to learn!
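    A minimal sketch of one common way to share it, assuming a conventional Rails app and a hypothetical module name: put the method in a module, include it in the models, and expose it to views with the controller-level helper call.

        # lib/greetings.rb (hypothetical)
        module Greetings
          def say_hello(s)
            "hello #{s}"
          end
        end

        # app/models/item.rb
        class Item < ActiveRecord::Base
          include Greetings

          def foo
            say_hello(label)
          end
        end

        # app/controllers/application_controller.rb
        class ApplicationController < ActionController::Base
          helper Greetings   # makes say_hello available in all views
        end

    Because foo is now a real instance method on Item, to_json(:methods => [:foo]) picks it up unchanged.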


  • How can I cleanly turn a nested Perl hash into a non-nested one?

    - by knorv
    Assume a nested hash structure %old_hash ..

        my %old_hash;
        $old_hash{"foo"}{"bar"}{"zonk"} = "hello";

    .. which we want to "flatten" (sorry if that's the wrong terminology!) to a non-nested hash using the sub &flatten(...) so that ..

        my %h = &flatten(\%old_hash);
        die unless($h{"zonk"} eq "hello");

    The following definition of &flatten(...) does the trick:

        sub flatten {
            my $hashref = shift;
            my %hash;
            my %i = %{$hashref};
            foreach my $ii (keys(%i)) {
                my %j = %{$i{$ii}};
                foreach my $jj (keys(%j)) {
                    my %k = %{$j{$jj}};
                    foreach my $kk (keys(%k)) {
                        my $value = $k{$kk};
                        $hash{$kk} = $value;
                    }
                }
            }
            return %hash;
        }

    While the code given works, it is not very readable or clean. My question is two-fold: In what ways does the given code not correspond to modern Perl best practices? Be harsh! :-) How would you clean it up?
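    One possible cleanup, sketched under the same assumptions the original makes (only leaf values matter, and duplicate leaf keys may overwrite each other): recurse instead of hard-coding three levels, and work with references throughout.

        sub flatten {
            my ($href, $out) = @_;
            $out ||= {};
            for my $key (keys %$href) {
                my $value = $href->{$key};
                if (ref $value eq 'HASH') {
                    flatten($value, $out);    # descend: handles arbitrary depth
                } else {
                    $out->{$key} = $value;    # leaf: keep the last one seen
                }
            }
            return %$out;
        }

    This also removes the %{...} copy of every level that the original pays for on each pass through the loops.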


  • use of assertions for type checking in php?

    - by user151841
    I do some checking of arguments in my classes in PHP using exception-throwing functions. I have functions that do a basic check (===, in_array, etc.) and throw an exception on false. So I can do

        assertNumeric($argument, "\$argument is not numeric.");

    instead of

        if ( ! is_numeric($argument) ) {
            throw new Exception("\$argument is not numeric.");
        }

    which saves some typing. I was reading in the comments of the PHP manual page on assert() that: As noted on Wikipedia - "assertions are primarily a development tool, they are often disabled when a program is released to the public." and "Assertions should be used to document logically impossible situations and discover programming errors - if the 'impossible' occurs, then something fundamental is clearly wrong. This is distinct from error handling: most error conditions are possible, although some may be extremely unlikely to occur in practice. Using assertions as a general-purpose error handling mechanism is usually unwise: assertions do not allow for graceful recovery from errors, and an assertion failure will often halt the program's execution abruptly. Assertions also do not display a user-friendly error message." This means that the advice given by "gk at proliberty dot com" to force assertions to be enabled, even when they have been disabled manually, goes against the best practice of only using them as a development tool. So, am I 'doing it wrong'? What other/better ways of doing this are there?


  • How to target multiple versions of .NET Framework from MSBuild?

    - by McKAMEY
    I am improving the builds for an open source project which currently supports .NET Framework v2.0, v3.5, and now v4.0. Up until now, I've restricted myself to v2.0 to ensure compatibility, but with VS2010 I am interested in having real targeted builds. I'm looking for some guidance on how to edit the MSBuild csproj/sln to be able to cleanly produce builds for each target. I'm willing to have complexity in the csproj and in a batch file to control the build. My goal is to be able to have a command line script that could produce the builds without needing Visual Studio installed, but only the necessary .NET Framework(s). Ideally, I'd like to minimize dependencies on additional software. I notice that a lot of people use NAnt (e.g. Ninject builds many targets with NAnt) but I'm unsure if this is necessary or if they are just more familiar with it. I'm pretty sure this can be done but am having trouble finding a definitive guide on setting it up and best practices. Bonus: my next step after getting this set up will be to better support Mono Framework. Any help on doing this same thing for Mono would be much appreciated.
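    As a sketch of the general mechanism (TargetFrameworkVersion, OutputPath, and DefineConstants are real MSBuild properties; the project name and output layout are assumptions), the same project can be built once per framework from a plain command line, no Visual Studio required:

        msbuild MyProject.csproj /t:Rebuild /p:Configuration=Release /p:TargetFrameworkVersion=v2.0 /p:OutputPath=bin\Release\net20\
        msbuild MyProject.csproj /t:Rebuild /p:Configuration=Release /p:TargetFrameworkVersion=v3.5 /p:OutputPath=bin\Release\net35\
        msbuild MyProject.csproj /t:Rebuild /p:Configuration=Release /p:TargetFrameworkVersion=v4.0 /p:OutputPath=bin\Release\net40\

    A batch file looping over those three lines needs only the .NET 4 version of MSBuild installed, since its toolset can target the older frameworks; conditional compilation symbols (e.g. /p:DefineConstants=NET20) can paper over API differences between targets.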



  • Packaging reference documentation with jar file

    - by soren.enemaerke
    We are porting our .NET library to a Java equivalent and are now looking at how to distribute this port. Packaging the classes into a jar file seems like best practice, and we would then ship this jar file in a zip along with some license terms. But what about the documentation? In .NET land it seems like best practice to distribute the XML file that can be consumed by tooling (Visual Studio), but we can't seem to find such best practices for Java. We have javadoc comments on our public classes and interfaces, so we are just looking for a way to generate and distribute these comments in a way that is developer-friendly (we're thinking easily consumed from various IDEs). What are developers expecting, and how do you best deliver this? We would really prefer to bundle the documentation along with the jar file and not have to host the documentation on our website. EDIT: We would like our documentation to appear inside the Java IDEs, so we want to provide it in a way that integrates into the IDEs as gracefully as possible. In .NET land this is an XML file placed next to the .dll file, but is there a similar concept for jar files that enables the integration into tooling? PS: We are developing in Eclipse and have an Ant task doing the building and jar-file packaging in our automated build.
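    A sketch of the common convention - Maven popularized the *-sources.jar / *-javadoc.jar naming, which Eclipse and IntelliJ both understand - using Ant tasks to match the build described above; names and paths here are assumptions:

        <target name="dist" depends="compile">
            <jar destfile="dist/mylib.jar" basedir="build/classes"/>

            <!-- source jar: lets IDEs show the javadoc comments inline -->
            <jar destfile="dist/mylib-sources.jar" basedir="src"/>

            <!-- standalone HTML javadoc, also packaged as a jar -->
            <javadoc sourcepath="src" destdir="build/javadoc" packagenames="*"/>
            <jar destfile="dist/mylib-javadoc.jar" basedir="build/javadoc"/>
        </target>

    Shipping all three jars in the zip lets a developer attach mylib-sources.jar (or the javadoc jar) to the library in their IDE, which is what makes the comments appear in tooltips and autocompletion.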


  • Pass a data.frame column name to a function

    - by Kevin Middleton
    I'm trying to write a function that accepts a data.frame (x) and a column from it. The function performs some calculations on x and later returns another data.frame. I'm stuck on the best-practices method for passing the column name to the function. The two minimal examples fun1 and fun2 below produce the desired result, being able to perform operations on x$column, using max() as an example. However, both rely on the seemingly (at least to me) inelegant (1) call to substitute() and possibly eval() and (2) the need to pass the column name as a character vector.

        fun1 <- function(x, column){
          do.call("max", list(substitute(x[a], list(a = column))))
        }

        fun2 <- function(x, column){
          max(eval((substitute(x[a], list(a = column)))))
        }

        df <- data.frame(A = 1:20, B = rnorm(10))
        fun1(df, "B")
        fun2(df, "B")

    I would like to be able to call the function as fun(df, B), for example. Other options I have considered but have not tried:

      - Pass column as an integer of the column number. I think this would avoid substitute(). Ideally, the function could accept either.
      - with(x, get(column)), but, even if it works, I think this would still require substitute().
      - Make use of formula() and match.call(), neither of which I have much experience with.

    Subquestion: Is do.call() preferred over eval()? Thanks, Kevin
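    A sketch of the two usual answers, assuming non-standard evaluation is acceptable for the unquoted form: [[ ]] indexing already accepts either a name or a position, and deparse(substitute()) gives the fun(df, B) syntax.

        # Accepts fun3(df, "B") or fun3(df, 2): no substitute() needed.
        fun3 <- function(x, column) {
          max(x[[column]])
        }

        # Accepts the unquoted form fun4(df, B).
        fun4 <- function(x, column) {
          max(x[[deparse(substitute(column))]])
        }

        df <- data.frame(A = 1:20, B = rnorm(10))
        fun3(df, "B")
        fun3(df, 2)
        fun4(df, B)

    The [[ ]] form is generally the safer choice in package code because it evaluates its argument normally; deparse(substitute()) is a convenience best kept for interactive use.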

