Search Results

Search found 28590 results on 1144 pages for 'best practice'.

Page 85/1144 | < Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >

  • Logic inside an enum

    - by Vivin Paliath
    My colleagues and I were having a discussion regarding logic in enums. My personal preference is to not have any sort of logic in Java enums (although Java provides the ability to do that). The discussion in this case centered around having a convenience method inside the enum that returned a map:

        public enum PackageType {
            Letter("01", "Letter"),
            ..
            ..
            Tube("02", "Packaging Tube");

            private String packageCode;
            private String packageDescription;
            ..
            ..
            public static Map<String, String> toMap() {
                Map<String, String> map = new LinkedHashMap<String, String>();
                for (PackageType packageType : PackageType.values()) {
                    map.put(packageType.getPackageCode(), packageType.getPackageDescription());
                }
                return map;
            }
        }

    My personal preference is to pull this out into a service. The argument for having the method inside the enum centered around convenience. The idea was that you don't have to go to a service to get it, but can query the enum directly. My argument centered around separation of concerns and abstracting any kind of logic out to a service. I didn't think "convenience" was a strong argument for putting this method inside an enum. From a best-practices perspective, which one is better? Or does it simply come down to a matter of personal preference and code style?

    Read the article

  • Help improving a simple assembly function

    - by MPelletier
    I just handed in this function in an assignment. It is done (hence no homework tag). But I would like to see how this can be improved. Essentially, the function sums the squares of all the integers between 1 and the given number, using the following formula:

        n(n+1)(2n+1)/6

    where n is the maximum number. The function below is made to catch any overflow and return 0 should any occur.

        UInt32 sumSquares(const UInt32 number) {
            int result = 0;
            __asm {
                mov eax, number  //move number in eax
                mov edx, 2       //move 2 in edx
                mul edx          //multiply (2n)
                jo  end          //jump to end if overflow
                add eax, 1       //addition (2n+1)
                jo  end          //jump to end if overflow
                mov ecx, eax     //move (2n+1) in ecx
                mov ebx, number  //move number in ebx
                add ebx, 1       //addition (n+1)
                jo  end          //jump to end if overflow
                mov eax, number  //move number in eax for multiplication
                mul ebx          //multiply n(n+1)
                jo  end          //jump to end if overflow
                mul ecx          //multiply n(n+1)(2n+1)
                jo  end          //jump to end if overflow
                mov ebx, 6       //move 6 in ebx
                div ebx          //divide by 6, the result will be in eax
                mov result, eax  //move eax in result
            end:
            }
            return result;
        }

    Basically, I want to know what I can improve in there, mostly in terms of best practices. One thing sounds obvious: a smarter overflow check (a single check against whatever maximum input would cause an overflow).

    Read the article

  • Large Django application layout

    - by Rob Golding
    I am in a team developing a web-based university portal, which will be based on Django. We are still in the exploratory stages, and I am trying to find the best way to lay out the project/development environment.

    My initial idea is to develop the system as a Django "app", which contains sub-applications to separate out the different parts of the system. The reason I intended to make these "sub" applications is that they would not have any use outside the parent application whatsoever, so there would be little point in distributing them separately. We envisage that the portal will be installed in multiple locations (at different universities, for example), so the main app can be dropped into a number of Django projects to install it. We therefore have a different repository for each location's project, which is really just a settings.py file defining the installed portal applications, and a urls.py routing the urls to it.

    I have started to write some initial code, though, and I've come up against a problem. Some of the code that handles user authentication and profiles seems to be without a home. It doesn't conceptually belong in the portal application as it doesn't relate to the portal's functionality. It also, however, can't go in the project repository - as I would then be duplicating the code over each location's repository. If I then discovered a bug in this code, for example, I would have to manually replicate the fix over all of the locations' project files.

    My idea for a fix is to make all the project repos forks of a "master" location project, so that I can pull any changes from that master. I think this is messy though, and it means that I have one more repository to look after. I'm looking for a better way to structure this project. Can anyone recommend a solution or a similar example I can take a look at? The problem seems to be that I am developing a Django project rather than just a Django application.

    Read the article

  • Metaprogramming - self explanatory code - tutorials, articles, books

    - by elena
    Hello everybody, I am looking into improving my programming skills (actually I try to do my best to suck less each year, as our Jeff Atwood put it), so I was thinking of reading up on metaprogramming and self-explanatory code. I am looking for something like an idiot's guide to this (free books for download, online resources). I want more than your average wiki page, and something language-agnostic or, preferably, with Java examples. Do you know of resources that will let me efficiently put all of this into practice? (I know experience has a lot to say in all of this, but I kind of want to build experience while avoiding the usual flow of bad decisions - experience - good decisions.)

    EDIT: Something like this example from the Pragmatic Programmer: ...implement a mini-language to control a simple drawing package... The language consists of single-letter commands. Some commands are followed by a single number. For example, the following input would draw a rectangle:

        P 2 # select pen 2
        D   # pen down
        W 2 # draw west 2cm
        N 1 # then north 1
        E 2 # then east 2
        S 1 # then back south
        U   # pen up

    Thank you!

    Read the article

  • Cast then check or check then cast?

    - by jamesrom
    Which method is regarded as best practice? Cast first?

        public string Describe(ICola cola)
        {
            var coke = cola as CocaCola;
            if (coke != null)
            {
                string result;
                // some unique coca-cola only code here.
                return result;
            }

            var pepsi = cola as Pepsi;
            if (pepsi != null)
            {
                string result;
                // some unique pepsi only code here.
                return result;
            }
        }

    Or should I check first, cast later?

        public string Describe(ICola cola)
        {
            if (cola is CocaCola)
            {
                var coke = (CocaCola)cola;
                string result;
                // some unique coca-cola only code here.
                return result;
            }

            if (cola is Pepsi)
            {
                var pepsi = (Pepsi)cola;
                string result;
                // some unique pepsi only code here.
                return result;
            }
        }

    Can you see any other way to do this?
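
    Since the question explicitly invites other approaches, here is one hedged sketch: if you are free to add a member to ICola, ordinary polymorphism removes the casting entirely. The method bodies below are placeholders, not the asker's actual logic:

        public interface ICola
        {
            string Describe();
        }

        public class CocaCola : ICola
        {
            public string Describe()
            {
                // some unique coca-cola only code here.
                return "Coca-Cola";
            }
        }

        public class Pepsi : ICola
        {
            public string Describe()
            {
                // some unique pepsi only code here.
                return "Pepsi";
            }
        }

        // The original dispatcher then collapses to a single virtual call:
        public string Describe(ICola cola)
        {
            return cola.Describe();
        }

    The trade-off is that the per-brand logic moves into the cola classes themselves, which only works if that logic belongs there conceptually.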

    Read the article

  • When to use custom exceptions vs. existing exceptions vs. generic exceptions

    - by Ryan Elkins
    I'm trying to figure out what the correct form of exceptions to throw would be for a library I am writing. One example of what I need to handle is logging a user in to a station. They do this by scanning a badge. Possible things that could go wrong include:

    - Their badge is deactivated
    - They don't have permission to work at this station
    - The badge scanned does not exist in the system
    - They are already logged in to another station elsewhere
    - The database is down
    - Internal DB error (happens sometimes if the badge didn't get set up correctly)

    An application using this library will have to handle these exceptions one way or another. It's possible they may decide to just say "Error" or they may want to give the user more useful information. What's the best practice in this situation? Create a custom exception for each possibility? Use existing exceptions? Use Exception and pass in the reason (throw new Exception("Badge is deactivated.");)? I'm thinking it's some sort of mix of the first two, using existing exceptions where applicable, and creating new ones where needed (and grouping exceptions where it makes sense).
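
    A hedged sketch of the "mix and group" option the asker is leaning towards, with hypothetical names: domain failures share a common base class so callers can catch coarsely or finely, while infrastructure failures (database down) are left to the existing framework exceptions:

        public class BadgeAuthenticationException : Exception
        {
            public BadgeAuthenticationException(string message) : base(message) { }
        }

        public class BadgeDeactivatedException : BadgeAuthenticationException
        {
            public BadgeDeactivatedException() : base("Badge is deactivated.") { }
        }

        public class StationNotPermittedException : BadgeAuthenticationException
        {
            public StationNotPermittedException() : base("No permission to work at this station.") { }
        }

        public class BadgeNotFoundException : BadgeAuthenticationException
        {
            public BadgeNotFoundException() : base("Badge does not exist in the system.") { }
        }

    A caller that only wants "Error" catches BadgeAuthenticationException; one that wants detail catches the subclasses first.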

    Read the article

  • Deploying software on compromised machines

    - by Martin
    I've been involved in a discussion about how to build internet voting software for a general election. We've reached a general consensus that there exist plenty of secure methods for two way authentication and communication. However, someone came along and pointed out that in a general election some of the machines being used are almost certainly going to be compromised. To quote:

        Let me be an evil electoral fraudster. I want to sample peoples votes as they vote and hope I get something scandalous. I hire a bot-net from some really shady dudes who control 1000 compromised machines in the UK just for election day. I capture the voting habits of 1000 voters on election day. I notice 5 of them have voted BNP. I look these users up and check out their machines, I look through their documents on their machine and find out their names and addresses. I find out one of them is the wife of a tory MP. I leak 'wife of tory mp is a fascist!' to some blogger I know. It hits the internet and goes viral, swings an election.

    That's a serious problem! So, what are the best techniques for running software where user interactions with the software must be kept secret, on a machine which is possibly compromised?

    Read the article

  • Give WPF design mode default objects

    - by Janko R
    In my application I have

        <Rectangle.Margin>
            <MultiBinding Converter="{StaticResource XYPosToThicknessConverter}">
                <Binding Path="XPos"/>
                <Binding Path="YPos"/>
            </MultiBinding>
        </Rectangle.Margin>

    The DataContext is set during runtime. The application works, but the design window in VS does not show a preview; it shows a System.InvalidCastException instead. That's why I added a default object in XYPosToThicknessConverter, which is ugly:

        class XYPosToThicknessConverter : IMultiValueConverter
        {
            public object Convert(object[] values, Type targetType, object parameter,
                                  System.Globalization.CultureInfo culture)
            {
                // stupid check to give the design window its default object.
                if (!(values[0] is IConvertible))
                    return new System.Windows.Thickness(3, 3, 0, 0);

                // useful code and exception throwing starts here
                // ...
            }
        }

    My questions: What does VS (the process that builds the design window) pass to XYPosToThicknessConverter, and how can I find that out myself? How do I change my XAML code so that the design window gets its default object, and is this the best way to handle this problem? I'm using VS2010 RC with .NET 4.0.
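
    One hedged alternative, as a sketch rather than a definitive answer: instead of sniffing the incoming value's type, the converter can ask WPF whether it is running under a designer via System.ComponentModel.DesignerProperties. The new DependencyObject() trick assumes the converter is constructed on the UI thread:

        class XYPosToThicknessConverter : IMultiValueConverter
        {
            // True when instantiated by the VS/Blend designer.
            private static readonly bool IsInDesignMode =
                System.ComponentModel.DesignerProperties.GetIsInDesignMode(
                    new System.Windows.DependencyObject());

            public object Convert(object[] values, Type targetType, object parameter,
                                  System.Globalization.CultureInfo culture)
            {
                if (IsInDesignMode)
                    return new System.Windows.Thickness(3, 3, 0, 0); // design-time default

                // real conversion and exception throwing here
                throw new NotImplementedException(); // placeholder for the real logic
            }

            public object[] ConvertBack(object value, Type[] targetTypes, object parameter,
                                        System.Globalization.CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }

    Another route, if the tooling supports it, is supplying a design-time DataContext so the designer passes realistic values in the first place.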

    Read the article

  • Using ThreadPool.QueueUserWorkItem in ASP.NET in a high traffic scenario

    - by Michael Hart
    I've always been under the impression that using the ThreadPool for (let's say non-critical) short-lived background tasks was considered best practice, even in ASP.NET, but then I came across this article that seems to suggest otherwise - the argument being that you should leave the ThreadPool to deal with ASP.NET related requests.

    So here's how I've been doing small asynchronous tasks so far:

        ThreadPool.QueueUserWorkItem(s => PostLog(logEvent));

    And the article is suggesting instead to create a thread explicitly, similar to:

        new Thread(() => PostLog(logEvent)) { IsBackground = true }.Start();

    The first method has the advantage of being managed and bounded, but there's the potential (if the article is correct) that the background tasks are then vying for threads with ASP.NET request-handlers. The second method frees up the ThreadPool, but at the cost of being unbounded and thus potentially using up too many resources.

    So my question is, is the advice in the article correct? If your site was getting so much traffic that your ThreadPool was getting full, then is it better to go out-of-band, or would a full ThreadPool imply that you're getting to the limit of your resources anyway, in which case you shouldn't be trying to start your own threads?

    Clarification: I'm just asking in the scope of small non-critical asynchronous tasks (eg, remote logging), not expensive work items that would require a separate process (in these cases I agree you'll need a more robust solution).
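
    For what it's worth, here is a hedged sketch of a middle ground between the two snippets above: a single long-lived background thread draining a queue, so logging neither competes for ThreadPool threads nor spawns a thread per event. It assumes .NET 4's BlockingCollection is available, and LogEvent is a hypothetical type standing in for whatever logEvent is:

        // Requires System.Threading and System.Collections.Concurrent.
        public static class LogQueue
        {
            private static readonly BlockingCollection<LogEvent> Queue =
                new BlockingCollection<LogEvent>();

            static LogQueue()
            {
                var worker = new Thread(() =>
                {
                    // Blocks while the queue is empty; one thread total, bounded by design.
                    foreach (var evt in Queue.GetConsumingEnumerable())
                        PostLog(evt); // the actual remote-logging call from the question
                });
                worker.IsBackground = true;
                worker.Start();
            }

            public static void Enqueue(LogEvent evt)
            {
                Queue.Add(evt);
            }
        }

    Callers would then write LogQueue.Enqueue(logEvent) instead of queueing a work item per log entry.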

    Read the article

  • Proper way to implement IXmlSerializable?

    - by Greg
    Once a programmer decides to implement IXmlSerializable, what are the rules and best practices for implementing it? I've heard that GetSchema() should return null and ReadXml should move to the next element before returning. Are these true? And what about WriteXml - should it write a root element for the object, or is it assumed that the root is already written? How should child objects be treated and written?

    Here's a sample of what I have now. I'll update it as I get good responses.

        public class Calendar : IEnumerable<Gvent>, IXmlSerializable
        {
            public XmlSchema GetSchema() { return null; }

            public void ReadXml(XmlReader reader)
            {
                if (reader.MoveToContent() == XmlNodeType.Element && reader.LocalName == "Calendar")
                {
                    _Name = reader["Name"];
                    _Enabled = Boolean.Parse(reader["Enabled"]);
                    _Color = Color.FromArgb(Int32.Parse(reader["Color"]));
                    if (reader.ReadToDescendant("Event"))
                    {
                        while (reader.MoveToContent() == XmlNodeType.Element && reader.LocalName == "Event")
                        {
                            var evt = new Event();
                            evt.ReadXml(reader);
                            _Events.Add(evt);
                        }
                    }
                    reader.Read();
                }
            }

            public void WriteXml(XmlWriter writer)
            {
                writer.WriteAttributeString("Name", _Name);
                writer.WriteAttributeString("Enabled", _Enabled.ToString());
                writer.WriteAttributeString("Color", _Color.ToArgb().ToString());

                foreach (var evt in _Events)
                {
                    writer.WriteStartElement("Event");
                    evt.WriteXml(writer);
                    writer.WriteEndElement();
                }
            }
        }

        public class Event : IXmlSerializable
        {
            public XmlSchema GetSchema() { return null; }

            public void ReadXml(XmlReader reader)
            {
                if (reader.MoveToContent() == XmlNodeType.Element && reader.LocalName == "Event")
                {
                    _Title = reader["Title"];
                    _Start = DateTime.FromBinary(Int64.Parse(reader["Start"]));
                    _Stop = DateTime.FromBinary(Int64.Parse(reader["Stop"]));
                    reader.Read();
                }
            }

            public void WriteXml(XmlWriter writer)
            {
                writer.WriteAttributeString("Title", _Title);
                writer.WriteAttributeString("Start", _Start.ToBinary().ToString());
                writer.WriteAttributeString("Stop", _Stop.ToBinary().ToString());
            }
        }
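
    A hedged usage sketch to make the root-element question concrete (assuming Calendar has a parameterless constructor): when XmlSerializer drives an IXmlSerializable, the serializer writes the wrapping root element itself and WriteXml only supplies the content, which is consistent with the code above:

        var calendar = new Calendar(); // populated elsewhere
        var serializer = new XmlSerializer(typeof(Calendar));

        using (var writer = XmlWriter.Create("calendar.xml"))
        {
            serializer.Serialize(writer, calendar); // writes <Calendar>, then calls WriteXml for the content
        }

        using (var reader = XmlReader.Create("calendar.xml"))
        {
            var roundTripped = (Calendar)serializer.Deserialize(reader); // calls ReadXml on the wrapper element
        }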

    Read the article

  • Nested dereferencing arrows in Perl: to omit or not to omit?

    - by DVK
    In Perl, when you have a nested data structure, it is permissible to omit dereferencing arrows at the second and deeper levels of nesting. In other words, the following two syntaxes are identical:

        my $hash_ref = { 1 => [ 11, 12, 13 ], 3 => [ 31, 32 ] };
        my $elem1 = $hash_ref->{1}->[1];
        my $elem2 = $hash_ref->{1}[1];    # exactly the same as above

    Now, my question is, is there a good reason to choose one style over the other? It seems to be a popular bone of stylistic contention (just on SO, I accidentally bumped into this and this in the space of 5 minutes). So far, none of the usual suspects says anything definitive:

    perldoc merely says "you are free to omit the pointer dereferencing arrow".

    Conway's "Perl Best Practices" says "whenever possible, dereference with arrows", but it appears to only apply to the context of dereferencing the main reference, not the optional arrows at deeper levels of nested data structures.

    "Mastering Perl for Bioinformatics" author James Tisdall doesn't give a very solid preference either: "The sharp-witted reader may have noticed that we seem to be omitting arrow operators between array subscripts. (After all, these are anonymous arrays of anonymous arrays of anonymous arrays, etc., so shouldn't they be written $array->[$i]->[$j]->[$k]?) Perl allows this; only the arrow operator between the variable name and the first array subscript is required. It makes things easier on the eyes and helps avoid carpal tunnel syndrome. On the other hand, you may prefer to keep the dereferencing arrows in place, to make it clear you are dealing with references. Your choice."

    Personally, I'm on the side of "always put the arrows in, since it's more readable and obvious you're dealing with a reference".

    Read the article

  • Databinding in WinForms performing async data import

    - by burnside
    I have a scenario where I have a collection of objects bound to a datagrid in WinForms. If a user drags and drops an item on to the grid, I need to add a placeholder row into the grid and kick off a lengthy async import process. I need to communicate the status of the async import process back to the UI, updating the row in the grid, and have the UI remain responsive to allow the user to edit the other rows. What's the best practice for doing this?

    My current solution is: binding a thread-safe implementation of BindingList to the grid, filled with the objects that are displayed as rows in the grid. When a user drags and drops an item on to the grid, I create a new object containing the sparse info obtained from the dropped item and add that to the BindingList, disabling the editing of that row. I then fire off a separate thread to do the import, passing it the newly bound object I have just created to fill with data. The import process periodically sets the status of the object and fires an event which is subscribed to by the UI, telling it to refresh the grid to see the new properties on the object.

    Should I be passing the same object that is bound to the grid to the import process thread to operate on, or should I be creating a copy and merging back the changes to the object on the UI thread using BeginInvoke? Any problems or advice with this implementation? Thanks
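
    On the copy-and-merge question, a hedged sketch of what that can look like (ImportRow, ImportResult and DoLengthyImport are hypothetical names, not from the asker's code): the worker thread never touches the bound object directly; it produces a detached result and the merge happens on the UI thread via Control.BeginInvoke:

        // Runs on the worker thread. 'row' is the placeholder object bound to the grid.
        private void RunImport(ImportRow row)
        {
            ImportResult result = DoLengthyImport(); // hypothetical long-running call

            // Marshal the merge onto the UI thread so the BindingList and the
            // grid are only ever mutated from one thread.
            this.BeginInvoke((MethodInvoker)delegate
            {
                row.Status = result.Status;
                row.Description = result.Description;
                // The BindingList/INotifyPropertyChanged plumbing then refreshes the row.
            });
        }

    Passing the bound object itself to the worker can work with a thread-safe BindingList, but it widens the surface where cross-thread access can slip in; the detached-copy approach keeps all UI-visible mutation on the UI thread.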

    Read the article

  • Correct OOP design without getters?

    - by kane77
    I recently read that getters/setters are evil, and I have to say it makes sense. Yet when I started learning OOP, one of the first things I learned was "encapsulate your fields", so I learned to create a class, give it some fields, create getters and setters for them, and create a constructor where I initialize these fields. And every time some other class needs to manipulate this object (or, for instance, display it), I pass it the object and it manipulates it using getters/setters. I can see problems with this approach. But how to do it right?

    For instance, consider displaying/rendering an object that is a "data" class - let's say Person, which has a name and a date of birth. Should the class have a method for displaying the object, where some Renderer would be passed as an argument (see the sketch below)? Wouldn't that violate the principle that a class should have only one purpose (in this case storing state), since it would then also care about the presentation of the object?

    Can you suggest some good resources where best practices in OOP design are presented? I'm planning to start a project in my spare time and I want it to be my learning project in correct OOP design.
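
    A minimal "tell, don't ask" sketch of the Renderer idea (all names hypothetical): Person exposes no getters; instead it is told to describe itself to whatever renderer it is given, so the fields stay encapsulated:

        public interface IRenderer
        {
            void RenderField(string label, string value);
        }

        public class Person
        {
            private readonly string _name;
            private readonly DateTime _dateOfBirth;

            public Person(string name, DateTime dateOfBirth)
            {
                _name = name;
                _dateOfBirth = dateOfBirth;
            }

            // Presentation is delegated to the renderer; Person decides what to expose.
            public void RenderOn(IRenderer renderer)
            {
                renderer.RenderField("Name", _name);
                renderer.RenderField("Born", _dateOfBirth.ToShortDateString());
            }
        }

    The single-responsibility worry still applies: Person now knows that rendering exists, though only through an abstraction. Whether that beats exposing getters is exactly the trade-off under discussion.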

    Read the article

  • Accommodating hierarchical data in SQL Server 2005 database design

    - by Remnant
    Context: I am fairly new to database design (= know the basics) and am grappling with how best to design my database for a project I am currently working on. In short, my database will keep a log of which employees have attended certain health and safety courses throughout the year. There are multiple types of course, e.g. moving objects, fire safety, hygiene etc. In terms of my database design I need to accommodate the following:

    - Each location can have multiple divisions
    - Each division can have multiple departments
    - Each department can have multiple functions
    - Each function can have multiple job roles
    - Each job role can have different course requirements

    Also note that the structure at each location may not be the same, e.g. the departments within divisions are not the same across locations, and the functions within departments may also differ.

    Edit - updated to better articulate the problem. Let's assume I am just looking at Location, Division and Department, and I have my database as follows:

        LocationTable        DivisionTable        DepartmentTable
        LocationID (PK)      DivisionID (PK)      DepartmentID (PK)
        LocationName         DivisionName         DepartmentName

    There is a many-to-many relationship between Locations and Divisions and also between Departments and Divisions. Suppose I set up a 'Junction Table' as follows:

        Location_Division
        LocationID (FK)
        DivisionID (FK)

    Using Location_Division I could easily pull back the Divisions for any Location. However, suppose I want to pull back all departments for a given Division in a given Location. If I set up another 'Junction Table' for Division and Department, then I can't see how I would differentiate Division by Location:

        Division_Department
        DivisionID (FK)
        DepartmentID (FK)

        Location_Division        Division_Department
        LocationID  DivisionID   DivisionID  DepartmentID
        1           1            1           1
        1           2            1           2
        2           1            2           1
        2           2            2           2

    Do I need to expand the number of columns in my 'Junction Table', e.g.:

        Location_Division_Department
        LocationID (FK)
        DivisionID (FK)
        DepartmentID (FK)

        LocationID  DivisionID  DepartmentID
        1           1           1
        1           1           2
        1           1           3
        2           1           1
        2           1           2
        2           1           3

    Read the article

  • SQL Server 2005 database design - many-to-many relationships with hierarchy

    - by Remnant
    Note: I have completely re-written my original post to better explain the issue I am trying to understand. I have tried to generalise the problem as much as possible. Also, my thanks to the original people who responded. Hopefully this post makes things a little clearer.

    Context: In short, I am struggling to understand the best way to design a small-scale database to handle (what I perceive to be) multiple many-to-many relationships. Imagine the following scenario for a company organisational structure:

             Textile Division                   Marketing Division
                    |                                  |
          ----------------------             ----------------------
          |                    |             |                    |
       HR Dept          Finance Dept      HR Dept          Finance Dept
          |                    |             |                    |
      ----------          ----------     ----------          ---------
      |        |          |        |     |        |          |       |
    Payroll  Hiring     Audit    Tax   Payroll  Hiring     Audit  Accounts
      |        |          |        |     |        |          |       |
     Emps     Emps       Emps    Emps   Emps     Emps       Emps    Emps

    NB: Emps denotes a list of employees that work in that area.

    When I first started with this issue I made four separate tables:

    - Divisions - Textile, Marketing (PK = DivisionID)
    - Departments - HR, Finance (PK = DeptID)
    - Functions - Payroll, Hiring, Audit, Tax, Accounts (PK = FunctionID)
    - Employees - List of all Employees (PK = EmployeeID)

    The problem as I see it is that there are multiple many-to-many relationships, i.e. many departments have many divisions and many functions have many departments.

    Question: Given the database structure above, suppose I wanted to do the following: get all employees who work in the Payroll function of the Marketing Division. To do this I need to be able to differentiate between the two Payroll departments, but I am not sure how this can be done. I understand that I could build a 'Link / Junction' table between Departments and Functions, so that I can retrieve which Functions are in which Departments. However, I would still need to differentiate the Division they belong to.

    Research effort: As you can see, I am an abecedarian when it comes to database design. I have spent the last two days researching this issue, traversing nested set models, adjacency models, reading that this issue is known not to be NP complete, etc. I am sure there is a simple solution?

    Read the article

  • Why doesn't every class in the .Net framework have a corresponding interface?

    - by Thorsten Lorenz
    Since I started to develop in a test/behavior driven style, I have appreciated the ability to mock out every dependency. Since mocking frameworks like Moq work best when told to mock an interface, I now implement an interface for almost every class I create, because most likely I will have to mock it out in a test eventually. Well, and programming to an interface is good practice anyways.

    At times, my classes take dependencies on .NET classes (e.g. FileSystemWatcher, DispatcherTimer). It would be great in that case to have an interface, so I could depend on an IDispatcherTimer instead, to be able to pass it a mock and simulate its behavior to see if my system under test reacts correctly. Unfortunately both of the above-mentioned classes do not implement such interfaces, so I have to resort to creating adapters, which do nothing else but inherit from the original class and conform to an interface that I then can use. Here is such an adapter for the DispatcherTimer and the corresponding interface:

        using System;
        using System.Windows.Threading;

        public interface IDispatcherTimer
        {
            #region Events
            event EventHandler Tick;
            #endregion

            #region Properties
            Dispatcher Dispatcher { get; }
            TimeSpan Interval { get; set; }
            bool IsEnabled { get; set; }
            object Tag { get; set; }
            #endregion

            #region Public Methods
            void Start();
            void Stop();
            #endregion
        }

        /// <summary>
        /// Adapts the DispatcherTimer class to implement the <see cref="IDispatcherTimer"/> interface.
        /// </summary>
        public class DispatcherTimerAdapter : DispatcherTimer, IDispatcherTimer
        {
        }

    Although this is not the end of the world, I wonder why the .NET developers didn't take the minute to make their classes implement these interfaces from the get-go. It puzzles me especially since now there is a big push for good practices from inside Microsoft. Does anyone have any (maybe inside) information on why this contradiction exists?
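
    As a hedged aside, this is what the payoff can look like with Moq once the adapter exists (AutoSaver is a hypothetical consumer of IDispatcherTimer, not from the question):

        var timer = new Mock<IDispatcherTimer>();
        var sut = new AutoSaver(timer.Object); // subscribes to Tick in its constructor

        // Simulate a timer tick without waiting or touching a real Dispatcher:
        timer.Raise(t => t.Tick += null, EventArgs.Empty);

        // ...assert that sut reacted to the tick as expected.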

    Read the article

  • Reference for proper handling of PID file on Unix

    - by bignose
    Where can I find a well-respected reference that details the proper handling of PID files on Unix? On Unix operating systems, it is common practice to “lock” a program (often a daemon) by use of a special lock file: the PID file. This is a file in a predictable location, often ‘/var/run/foo.pid’. The program is supposed to check when it starts up whether the PID file exists and, if the file does exist, exit with an error. So it's a kind of advisory, collaborative locking mechanism. The file contains a single line of text, being the numeric process ID (hence the name “PID file”) of the process that currently holds the lock; this allows an easy way to automate sending a signal to the process that holds the lock. What I can't find is a good reference on expected or “best practice” behaviour for handling PID files. There are various nuances: how to actually lock the file (don't bother? use the kernel? what about platform incompatibilities?), handling stale locks (silently delete them? when to check?), when exactly to acquire and release the lock, and so forth. Where can I find a respected, most-authoritative reference (ideally on the level of W. Richard Stevens) for this small topic?

    Read the article

  • The woes of (sometimes) storing "date only" in datetimes

    - by Heinzi
    We have two fields, from and to (of type datetime), where the user can store the begin time and the end time of a business trip, e.g.:

        From: 2010-04-14 09:00
        To:   2010-04-16 16:30

    So the duration of the trip is 2 days and 7.5 hours. Often, the exact times are not known in advance, so the user enters the dates without a time:

        From: 2010-04-14
        To:   2010-04-16

    Internally, this is stored as 2010-04-14 00:00 and 2010-04-16 00:00, since that's what most modern class libraries (e.g. .NET) and databases (e.g. SQL Server) do when you store a "date only" in a datetime structure. Usually, this makes perfect sense. However, when entering 2010-04-16 as the to date, the user clearly did not mean 2010-04-16 00:00. Instead, the user meant 2010-04-16 24:00, i.e. calculating the duration of the trip should output 3 days, not 2 days.

    I can think of a few (more or less ugly) workarounds for this problem (add "23:59" in the UI layer of the to field if the user did not enter a time component; add a special "dates are full days" Boolean field; store "2010-04-17 00:00" in the DB but display "2010-04-16 24:00" to the user if the time component is "00:00"; ...), all having advantages and disadvantages. Since I assume that this is a fairly common problem, I was wondering: Is there a "standard" best-practice way of solving it? If there isn't, have you experienced a similar requirement, how did you solve it, and what were the pros/cons of that solution?
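
    A hedged sketch of the third workaround listed above (store the exclusive end, shift it back for display), since it is the one where durations fall out of plain subtraction. The assumption baked in, as the question itself notes, is that a genuine end time of exactly midnight never occurs:

        // User typed From: 2010-04-14, To: 2010-04-16 (no times).
        DateTime from = new DateTime(2010, 4, 14);
        DateTime toExclusive = new DateTime(2010, 4, 17); // stored as entered date + 1 day

        TimeSpan duration = toExclusive - from; // 3 days, matching the user's intent

        // For display: a midnight end is assumed to be a "date only" value.
        DateTime toForDisplay = toExclusive.TimeOfDay == TimeSpan.Zero
            ? toExclusive.AddDays(-1) // show 2010-04-16 again
            : toExclusive;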

    Read the article

  • Database design for invoices, invoice lines & revisions

    - by FreshCode
    I'm designing the 2nd major iteration of a relational database for a franchise's CRM (with lots of refactoring) and I need help on the best database design practices for storing job invoices and invoice lines, with a strong audit trail of any changes made to each invoice.

    Current schema:

        Invoices Table
        InvoiceId (int)            // Primary key
        JobId (int)
        StatusId (tinyint)         // Pending, Paid or Deleted
        UserId (int)               // auditing user
        Reference (nvarchar(256))  // unique natural string key with invoice number
        Date (datetime)
        Comments (nvarchar(MAX))

        InvoiceLines Table
        LineId (int)               // Primary key
        InvoiceId (int)            // related to Invoices above
        Quantity (decimal(9,4))
        Title (nvarchar(512))
        Comment (nvarchar(512))
        UnitPrice (smallmoney)

    Revision schema:

        InvoiceRevisions Table
        RevisionId (int)           // Primary key
        InvoiceId (int)
        JobId (int)
        StatusId (tinyint)         // Pending, Paid or Deleted
        UserId (int)               // auditing user
        Reference (nvarchar(256))  // unique natural string key with invoice number
        Date (datetime)
        Total (smallmoney)

    Schema design considerations:

    1. Is it sensible to store an invoice's Paid or Pending status? All payments received for an invoice are stored in a Payments table (e.g. Cash, Credit Card, Cheque, Bank Deposit). Is it meaningful to store a "Paid" status in the Invoices table if all the income related to a given job's invoices can be inferred from the Payments table?

    2. How to keep track of invoice line item revisions? I can track revisions to an invoice by storing status changes along with the invoice total and the auditing user in an invoice revision table (see InvoiceRevisions above), but keeping track of an invoice line revision table feels hard to maintain. Thoughts?

    3. Tax: How should I incorporate sales tax (or 14% VAT in SA) when storing invoice data?

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not.

    For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, according to these numbers apply some complex business rules (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc.

    This application seems to be a good candidate to implement the classic 3-tier architecture:

    - The Data Layer would access the database
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer
    - The GUI Layer would be a means of communication between the Logic Layer and the User

    Now, thinking of this design, I can see that most of the business rules may be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:

    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - Process all data at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
    - Scattering of the business logic: some part is here, some part is there.
    - Questionable design solution, may encounter unknown problems.
    - Difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?

    Read the article

  • Setting up a multi-site CMS, collecting thoughts about the DB schema

    - by Ben Fransen
    Hello all, I'm collecting some thoughts about creating a multi-site CMS. In my opinion there are two major approaches:

    1. All data is stored in one database, giving me the advantage of a single point of updates.
    2. Separated databases, so each client has its own database, giving me the advantage of measuring bandwidth.

    Option 1 gives me the disadvantage of measuring bandwidth, while option 2 gives me the disadvantage of losing the single-point-of-update structure. Are there any generic approaches for creating a sort of update system? So my clients can download a small package (maybe a zip with a conf file to tell the update script where to put all the files and how to extend the database?).

    Do you guys have some thoughts about the best solution for a situation like this? I have my own webserver, full access to all resources, and I'm developing in PHP with MySQL as the DBMS. I hope to hear from you and I surely appreciate any effort you make to help me further!

    Greets from Holland, Ben Fransen

    Read the article

  • How can I cleanly turn a nested Perl hash into a non-nested one?

    - by knorv
    Assume a nested hash structure %old_hash:

        my %old_hash;
        $old_hash{"foo"}{"bar"}{"zonk"} = "hello";

    which we want to "flatten" (sorry if that's the wrong terminology!) to a non-nested hash using the sub &flatten(...), so that:

        my %h = &flatten(\%old_hash);
        die unless ($h{"zonk"} eq "hello");

    The following definition of &flatten(...) does the trick:

        sub flatten {
            my $hashref = shift;
            my %hash;
            my %i = %{$hashref};
            foreach my $ii (keys(%i)) {
                my %j = %{$i{$ii}};
                foreach my $jj (keys(%j)) {
                    my %k = %{$j{$jj}};
                    foreach my $kk (keys(%k)) {
                        my $value = $k{$kk};
                        $hash{$kk} = $value;
                    }
                }
            }
            return %hash;
        }

    While the code given works, it is not very readable or clean. My question is two-fold:

    - In what ways does the given code not correspond to modern Perl best practices? Be harsh! :-)
    - How would you clean it up?

    Read the article

  • C# - Which is more efficient and thread safe? static or instance classes?

    - by Soni Ali
    Consider the following two scenarios:

        //Data Contract
        public class MyValue { }

    Scenario 1: Using a static helper class.

        public class Broker
        {
            private string[] _userRoles;

            public Broker(string[] userRoles)
            {
                this._userRoles = userRoles;
            }

            public MyValue[] GetValues()
            {
                return BrokerHelper.GetValues(this._userRoles);
            }
        }

        static class BrokerHelper
        {
            static Dictionary<string, MyValue> _values = new Dictionary<string, MyValue>();

            public static MyValue[] GetValues(string[] rolesAllowed)
            {
                return FilterForRoles(_values, rolesAllowed);
            }
        }

    Scenario 2: Using an instance class.

        public class Broker
        {
            private BrokerService _service;

            public Broker(params string[] userRoles)
            {
                this._service = new BrokerService(userRoles);
            }

            public MyValue[] GetValues()
            {
                return _service.GetValues();
            }
        }

        class BrokerService
        {
            private Dictionary<string, MyValue> _values;
            private string[] _userRoles;

            public BrokerService(string[] userRoles)
            {
                this._userRoles = userRoles;
                this._values = new Dictionary<string, MyValue>();
            }

            public MyValue[] GetValues()
            {
                return FilterForRoles(_values, _userRoles);
            }
        }

    Which of the [Broker] scenarios will scale best if used in a web environment with about 100 different roles and over a thousand users?

    NOTE: Feel free to suggest any alternative approach.
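
    One hedged observation, shown as a sketch: in Scenario 1 the static dictionary is shared by every request in the web application, so if anything ever mutates it, access needs synchronization (FilterForRoles is the asker's elided helper and is assumed here). A minimal pre-.NET 4 locking variant might look like:

        static class BrokerHelper
        {
            private static readonly object Sync = new object();
            private static readonly Dictionary<string, MyValue> _values =
                new Dictionary<string, MyValue>();

            public static MyValue[] GetValues(string[] rolesAllowed)
            {
                lock (Sync) // serialize readers and writers of the shared dictionary
                {
                    return FilterForRoles(_values, rolesAllowed);
                }
            }
        }

    Scenario 2 sidesteps the shared state entirely (each Broker gets its own dictionary) at the cost of rebuilding that dictionary per instance, which is the efficiency half of the question.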

    Read the article

  • use of assertions for type checking in php?

    - by user151841
    I do some checking of arguments in my classes in PHP using exception-throwing functions. I have functions that do a basic check (===, in_array, etc.) and throw an exception on false. So I can do:

        assertNumeric($argument, "\$argument is not numeric.");

    instead of:

        if (!is_numeric($argument)) {
            throw new Exception("\$argument is not numeric.");
        }

    This saves some typing. But I was reading in the comments of the PHP manual page on assert() that:

        As noted on Wikipedia - "assertions are primarily a development tool, they are often disabled when a program is released to the public." and "Assertions should be used to document logically impossible situations and discover programming errors - if the 'impossible' occurs, then something fundamental is clearly wrong. This is distinct from error handling: most error conditions are possible, although some may be extremely unlikely to occur in practice. Using assertions as a general-purpose error handling mechanism is usually unwise: assertions do not allow for graceful recovery from errors, and an assertion failure will often halt the program's execution abruptly. Assertions also do not display a user-friendly error message."

    This means that the advice given by "gk at proliberty dot com" to force assertions to be enabled, even when they have been disabled manually, goes against best practices of only using them as a development tool. So, am I 'doing it wrong'? What other/better ways of doing this are there?

    Read the article

  • C# - Removing event handlers - FormClosing event or Dispose() method

    - by Andy
    Suppose I have a form opened via the .ShowDialog() method. At some point I attach some event handlers to some controls on the form, e.g.:

        // Attach radio button event handlers.
        this.rbLevel1.Click += new EventHandler(this.RadioButton_CheckedChanged);
        this.rbLevel2.Click += new EventHandler(this.RadioButton_CheckedChanged);
        this.rbLevel3.Click += new EventHandler(this.RadioButton_CheckedChanged);

    When the form closes, I need to remove these handlers, right? At present, I am doing this when the FormClosing event is fired, e.g.:

        private void Foo_FormClosing(object sender, FormClosingEventArgs e)
        {
            // Detach radio button event handlers.
            this.rbLevel1.Click -= new EventHandler(this.RadioButton_CheckedChanged);
            this.rbLevel2.Click -= new EventHandler(this.RadioButton_CheckedChanged);
            this.rbLevel3.Click -= new EventHandler(this.RadioButton_CheckedChanged);
        }

    However, I have seen some examples where handlers are removed in the Dispose() method. Is there a 'best-practice' way of doing this? (Using C#, WinForms, .NET 2.0) Thanks.
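
    For reference, a hedged sketch of the Dispose-based variant (with designer-generated forms, Dispose(bool disposing) normally lives in the .Designer.cs partial class, so in practice this code would be merged there rather than added as a second override):

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                // Detach radio button event handlers.
                this.rbLevel1.Click -= this.RadioButton_CheckedChanged;
                this.rbLevel2.Click -= this.RadioButton_CheckedChanged;
                this.rbLevel3.Click -= this.RadioButton_CheckedChanged;
            }
            base.Dispose(disposing);
        }

    It may also be worth noting that when a form subscribes to events of its own child controls, publisher and subscriber share a lifetime, so the handlers do not by themselves keep either object alive; detaching matters most when the event source outlives the subscriber.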

    Read the article
