Search Results

Search found 36443 results on 1458 pages for 'table design'.


  • How to change the state of a singleton at runtime

    - by user34401
    Suppose I am going to write a simple file-based logger, AppLogger, to be used in my apps. Ideally it should be a singleton so I can call it from anywhere:

        public class AppLogger {
            public static String file = "..";

            public void logToFile(String s) {
                // Write to file
            }

            public static void log(String s) {
                AppLogger.getInstance().logToFile(s);
            }
        }

    And to use it:

        AppLogger.log("This is a log statement");

    The problem is: when should I provide the value of file, given that it is just a singleton? Or how should I refactor the above code (or skip the singleton entirely) so I can customize the log file path? (Assume I don't need to write to multiple files at the same time.) P.S. I know I could use a library, e.g. log4j, but treat this as a pure design question: how would you refactor the code above?
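
    A minimal sketch of one common refactor: configure the path once at startup instead of baking it into the class. The init method, the default path, and the field names are illustrative assumptions, not part of the question:

        public final class AppLogger {
            private static volatile AppLogger instance;
            private final String file;

            private AppLogger(String file) { this.file = file; }

            // Call once at application startup, before the first log() call.
            public static synchronized void init(String file) {
                if (instance == null) instance = new AppLogger(file);
            }

            public static void log(String s) {
                if (instance == null) init("app.log"); // illustrative fallback path
                instance.logToFile(s);
            }

            private void logToFile(String s) { /* append s to this.file */ }
        }

    Usage would then be AppLogger.init("/var/log/myapp.log"); once at startup, followed by AppLogger.log("started"); anywhere. Passing a plain logger instance to the classes that need it (dependency injection) avoids the singleton question altogether.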

    Read the article

  • Event Aggregator: not getting a response, so how to determine completion?

    - by Duncan_m
    I'm rewriting a vehicle tracking application, a Google Maps based thing. Users can search for a vehicle by typing a few characters of the vehicle's "callsign". My application is based around a sort of "event bus" within Backbone: when a search occurs, I send a message on the bus saying something like "does anyone match this?". If a marker matches the search term, it responds with a sort of "yes, I match!". My challenge arises when no one matches: I get no response at all, and it feels a little hacky to "wait a little while" and then check whether a response has been received. The application is based around Backbone.js and uses the Event Aggregator pattern described in the answer to this question on Stack Overflow: http://stackoverflow.com/questions/7708195/access-function-in-one-view-from-another-in-backbone-js Is there a well-defined design pattern that might assist me here, for sending a request and possibly getting no responses?
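
    One detail worth sketching: Backbone's trigger() runs its handlers synchronously, so the requester can pass a reply callback along with the event and inspect the collected replies as soon as trigger() returns, with no "wait a little while" timer. The bus, event name, and showNoResults helper below are illustrative assumptions:

        var bus = _.extend({}, Backbone.Events);

        // Each marker listens and replies only if it matches.
        bus.on('search', function (term, reply) {
            if (this.callsign.indexOf(term) !== -1) reply(this);
        }, marker);

        // Because handlers run synchronously, "nobody matched" is
        // known the moment trigger() returns.
        function search(term) {
            var matches = [];
            bus.trigger('search', term, function (m) { matches.push(m); });
            if (matches.length === 0) showNoResults(); // assumed UI helper
            return matches;
        }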

    Read the article

  • When designing an application around Model-View-Controller (MVC), what is in your toolbox?

    - by ericgorr
    There are a lot of great explanations of what the Model-View-Controller design pattern is, but I am having trouble finding good resources showing how to use it in practice. So, when you are starting a new application (it doesn't matter what kind), what is in your toolbox? For example, it was suggested that UML collaboration diagrams ( http://www.objectmentor.com/resources/articles/umlCollaborationDiagrams.pdf ) can be useful when designing an application around MVC, although I am not certain exactly how or why this might be the case. So, what is in your toolbox for MVC?

    Read the article

  • Distinguishing between sets of status reports

    - by user1769486
    I am working on an internal database monitoring system and have hit a wall in terms of application design. Basically I have an extensible plugin architecture in which a database verification run reports an OK, a warning, or an error. My first question is whether it is sufficient to report only one status with an optional status message, or whether I should allow more than one to be returned (with attached messages) and then calculate an aggregated overall status. In the latter case, my second issue is how to distinguish between two verification reports with the same status code, since the same code can come from different triggers. I need this to see whether something changed between the current verification and the last one. I could simply do string comparisons on the attached status messages mentioned above, but that does not seem very reliable.
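
    A minimal sketch of the aggregated-result option (Java, all names illustrative assumptions): comparing structured fields, a check identifier plus its status, rather than message strings is what makes "did anything change?" reliable.

        import java.util.*;

        enum Status { OK, WARNING, ERROR }

        // One structured result per triggered check; outcomes are compared
        // by check id and status, never by the human-readable message.
        record CheckResult(String checkId, Status status, String message) {
            boolean sameOutcomeAs(CheckResult other) {
                return checkId.equals(other.checkId) && status == other.status;
            }
        }

        class Report {
            final List<CheckResult> results = new ArrayList<>();

            // Aggregated status = worst individual status.
            Status overall() {
                return results.stream().map(CheckResult::status)
                        .max(Comparator.naturalOrder()).orElse(Status.OK);
            }
        }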

    Read the article

  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
    Background

    Designing stored procedures that are safe for multiple subscribers to call simultaneously can be challenging. For example, say you want multiple worker processes to poll a shared work queue that's encapsulated as a SQL table. This is a common scenario, and through experience you'll find that you want to use table hints to prevent unwanted locking when performing simultaneous queries on the same table. There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both NOLOCK and READPAST allow you to SELECT from a table without placing a lock on that table. However, a SELECT with the READPAST hint skips any records that are locked because they are being updated/inserted (or are otherwise "dirty"), whereas a SELECT with NOLOCK ignores all locks and will return dirty (uncommitted) data. For the initial update of the flag (the one that marks the record as available for subscription) I don't use the NOLOCK hint, because I want to be sensitive to the "active" records in the table and exclude them. I use an update lock (UPDLOCK) in conjunction with a WHERE clause that uses a sub-select with a READPAST hint, so that I explicitly lock the records I'm updating (UPDLOCK) without placing a lock on the table when selecting the records I'm going to update (READPAST). UPDATEs should be allowed to lock the rows affected, because we're changing a flag on each record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use UPDLOCK to guard against lock escalation. A SELECT that checks for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue, since more than one worker process (or server) may try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can, because another process holds a shared read lock. Without the UPDLOCK hint the result would be a deadlock caused by lock escalation; with it, this condition is mitigated. Note that using the READPAST hint requires the transaction's ISOLATION LEVEL to be READ COMMITTED (rather than SERIALIZABLE, which is the default in some environments, such as BizTalk transactions).

    Guidance

    In the stored procedure that returns records to the multiple subscribers:

    1. Perform the UPDATE first, changing the flag that makes the record available to subscribers. You may also want to update a LastUpdated datetime field, so you can check for records that "got stuck" in an intermediate state, and for other auditing purposes.
    2. In the UPDATE statement, use the (UPDLOCK) table hint to prevent lock escalation.
    3. Also in the UPDATE statement, use a WHERE clause containing a sub-select with a (READPAST) table hint to select the records you're going to update.
    4. In the UPDATE statement, use the OUTPUT clause in conjunction with a temporary table to isolate the record(s) you've just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and get their identifiers within the same operation.
    5. Finally, do a set-based SELECT on the main table (using the temporary table to identify the records in the set) with either a READPAST or NOLOCK table hint.
    Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data you want to return to them; use READPAST only if you're sure no other process might be updating column data in the table for other purposes (e.g. changes to a person's last name). NOLOCK is generally the better fit in this part of the scenario. See the following as an example (note the sub-select's comparison uses IN, since TOP 100 can return more than one row):

        CREATE PROCEDURE [dbo].[usp_NewCustomersSelect]
        AS
        BEGIN
            -- OVERRIDE THE DEFAULT ISOLATION LEVEL
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED

            SET NOCOUNT ON

            -- DECLARE TEMP TABLE
            -- Note that this example uses CustomerId as an identifier;
            -- you could just use the Identity column Id if that's all you need.
            DECLARE @CustomersTempTable TABLE ( CustomerId NVARCHAR(255) )

            -- PERFORM UPDATE FIRST
            -- [Customers] is the name of the table
            -- [Id] is the Identity column on the table
            -- [CustomerId] is the business document key used to identify the
            --   record globally, i.e. in other systems or across SQL tables
            -- [Status] is an INT or BIT field (BIT if the status is a binary state)
            -- [LastUpdated] is a datetime field recording the time of the last update
            UPDATE [Customers] WITH (UPDLOCK)
            SET [Status] = 1, [LastUpdated] = GETDATE()
            OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable
            WHERE [Id] IN (SELECT TOP 100 [Id]
                           FROM [Customers] WITH (READPAST)
                           WHERE [Status] = 0
                           ORDER BY [Id] ASC)

            -- PERFORM SELECT FROM ENTITY TABLE
            SELECT [C].[CustomerId], [C].[FirstName], [C].[LastName],
                   [C].[Address1], [C].[Address2], [C].[City], [C].[State],
                   [C].[Zip], [C].[ShippingMethod], [C].[Id]
            FROM [Customers] AS [C] WITH (NOLOCK)
            INNER JOIN @CustomersTempTable AS [TEMP]
                ON [C].[CustomerId] = [TEMP].[CustomerId]
        END

    In a system designed with multiple status values for records in the work queue (say, [Status] = 0 for New/Unprocessed, 1 for In Progress, 2 for Processed, and so on), it is necessary to have a "watch dog" process that detects "stale" records stuck in intermediate states such as In Progress. If you have a business rule stating that the application should only process new records once all of the old records have been processed successfully (or marked as errors), then you will need a monitoring process to detect stalled or stale records in the work queue; hence the LastUpdated column in the example above. The Status field together with the LastUpdated field provides the criteria for detecting stalled/stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records, I recommend the same lock semantics as suggested above.
    The example below looks for records that have been in the In Progress state ([Status] = 1) for more than 60 seconds:

        CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog]
        AS
        BEGIN
            -- OVERRIDE THE DEFAULT ISOLATION LEVEL
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED

            SET NOCOUNT ON

            DECLARE @MaxWait int;
            SET @MaxWait = 60

            IF EXISTS (SELECT 1 FROM [dbo].[Customers] WITH (READPAST)
                       WHERE [Status] = 1
                         AND DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait)
            BEGIN
                SELECT 1 AS [IsWatchDogError]
            END
            ELSE
            BEGIN
                SELECT 0 AS [IsWatchDogError]
            END
        END

    Downloads

    The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures, and one to populate the sample database with 10,000 sample records. I am very grateful to Red Gate Software for their excellent SQL Data Generator tool, which enabled me to create these sample records in no time at all.

    References

    http://msdn.microsoft.com/en-us/library/ms187373.aspx
    http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492
    http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx
    http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx

    Read the article

  • MySQL foreign key creation with alter table command

    - by user313338
    I created some tables using MySQL Workbench, and then used "forward engineer" to generate scripts to create these tables. But the scripts led me to a number of problems, one of which involves the foreign keys. So I tried creating separate foreign key additions using ALTER TABLE, and I am still getting problems. The code is below (I left in the SET statements and the DROP/CREATE statements, though I don't think they matter here):

        SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
        SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
        SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL';

        DROP SCHEMA IF EXISTS `mydb` ;
        CREATE SCHEMA IF NOT EXISTS `mydb` DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci ;

        -- Table `mydb`.`User`
        DROP TABLE IF EXISTS `mydb`.`User` ;
        CREATE TABLE IF NOT EXISTS `mydb`.`User` (
          `UserName` VARCHAR(35) NOT NULL ,
          `Num_Accts` INT NOT NULL ,
          `Password` VARCHAR(45) NULL ,
          `Email` VARCHAR(45) NULL ,
          `User_ID` INT NOT NULL AUTO_INCREMENT ,
          PRIMARY KEY (`User_ID`) )
        ENGINE = InnoDB;

        -- Table `mydb`.`User_Space`
        DROP TABLE IF EXISTS `mydb`.`User_Space` ;
        CREATE TABLE IF NOT EXISTS `mydb`.`User_Space` (
          `User_UserName` VARCHAR(35) NOT NULL ,
          `User_Space_ID` INT NOT NULL AUTO_INCREMENT ,
          PRIMARY KEY (`User_Space_ID`),
          FOREIGN KEY (`User_UserName`) REFERENCES `mydb`.`User` (`UserName`)
            ON UPDATE CASCADE
            ON DELETE CASCADE )
        ENGINE = InnoDB;

        SET SQL_MODE=@OLD_SQL_MODE;
        SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
        SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;

    The error this produces is:

        Error Code: 1005 Can't create table 'mydb.user_space' (errno: 150)

    Does anybody know what I'm doing wrong? And has anybody else had problems with the script generation done by MySQL Workbench? It's a nice tool, but it's annoying that it produces scripts that don't work for me. For reference, here's the script it auto-generates (note the empty column lists in the INDEX and FOREIGN KEY clauses):

        SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
        SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
        SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL';

        DROP SCHEMA IF EXISTS `mydb` ;
        CREATE SCHEMA IF NOT EXISTS `mydb` DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci ;

        -- Table `mydb`.`User`
        DROP TABLE IF EXISTS `mydb`.`User` ;
        CREATE TABLE IF NOT EXISTS `mydb`.`User` (
          `UserName` VARCHAR(35) NOT NULL ,
          `Num_Accts` INT NOT NULL ,
          `Password` VARCHAR(45) NULL ,
          `Email` VARCHAR(45) NULL ,
          `User_ID` INT NOT NULL AUTO_INCREMENT ,
          PRIMARY KEY (`User_ID`) )
        ENGINE = InnoDB;

        -- Table `mydb`.`User_Space`
        DROP TABLE IF EXISTS `mydb`.`User_Space` ;
        CREATE TABLE IF NOT EXISTS `mydb`.`User_Space` (
          `User_Space_ID` INT NOT NULL AUTO_INCREMENT ,
          PRIMARY KEY (`User_Space_ID`) ,
          INDEX `User_ID` () ,
          CONSTRAINT `User_ID`
            FOREIGN KEY ()
            REFERENCES `mydb`.`User` ()
            ON DELETE NO ACTION
            ON UPDATE NO ACTION )
        ENGINE = InnoDB;

        SET SQL_MODE=@OLD_SQL_MODE;
        SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
        SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;

    Thanks!
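
    As an aside on the hand-written script above: InnoDB requires the referenced column of a foreign key to be indexed, and UserName in User is neither the primary key nor indexed here, which is a classic cause of errno 150. A minimal sketch of one fix, assuming it is acceptable to make UserName unique (the index and constraint names are illustrative):

        -- Index the referenced column so InnoDB will accept the FK.
        ALTER TABLE `mydb`.`User`
          ADD UNIQUE INDEX `idx_UserName` (`UserName`);

        -- The foreign key can then be added separately, as intended.
        ALTER TABLE `mydb`.`User_Space`
          ADD CONSTRAINT `fk_UserSpace_User`
          FOREIGN KEY (`User_UserName`) REFERENCES `mydb`.`User` (`UserName`)
          ON UPDATE CASCADE ON DELETE CASCADE;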

    Read the article

  • Wrappers/Law of Demeter seems to be an anti-pattern...

    - by Robert Fraser
    I've been reading up on this "Law of Demeter" thing, and it (and pure "wrapper" classes in general) seems to me to generally be an anti-pattern. Consider an implementation class:

        class Foo {
            void doSomething() { /* whatever */ }
        }

    Now consider two different implementations of another class:

        class Bar1 {
            private static Foo _foo = new Foo();
            public static Foo getFoo() { return _foo; }
        }

        class Bar2 {
            private static Foo _foo = new Foo();
            public static void doSomething() { _foo.doSomething(); }
        }

    And the ways to call said methods:

        callingMethod() {
            Bar1.getFoo().doSomething(); // Version 1
            Bar2.doSomething();          // Version 2
        }

    At first blush, version 2 seems a bit simpler at the call site, and it follows the "Law of Demeter", hides Foo's implementation, etc. But it ties any change in Foo to Bar. For example, if a parameter is added to doSomething, then we have:

        class Foo {
            void doSomething(int x) { /* whatever */ }
        }

        class Bar1 {
            private static Foo _foo = new Foo();
            public static Foo getFoo() { return _foo; }
        }

        class Bar2 {
            private static Foo _foo = new Foo();
            public static void doSomething(int x) { _foo.doSomething(x); }
        }

        callingMethod() {
            Bar1.getFoo().doSomething(5); // Version 1
            Bar2.doSomething(5);          // Version 2
        }

    In both versions, Foo and callingMethod need to be changed, but in version 2, Bar also needs to be changed. Can someone explain the advantage of having a wrapper/facade (with the exception of adapters, wrapping an external API, or exposing an internal one)?

    Read the article

  • dynamic behavior of factory class

    - by manu1001
    I have a factory class that serves out a bunch of properties. The properties might come either from a database or from a properties file. This is what I've come up with:

        public class Factory {
            private static final Factory INSTANCE = new Factory(source);

            private Factory(DbSource source) {
                // read from db, save properties
            }

            private Factory(FileSource source) {
                // read from file, save properties
            }

            // getInstance() and getProperties() here
        }

    What's a clean way of switching between these behaviors based on the environment? I want to avoid having to recompile the class each time.
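
    A minimal sketch of one way to do this: select the source from a system property at startup, so switching environments requires no recompilation. The property key and helper names are illustrative assumptions:

        import java.util.Properties;

        public final class PropertyFactory {
            private static final Properties PROPS = load();

            private PropertyFactory() {}

            private static Properties load() {
                // Pass -Dconfig.source=db or -Dconfig.source=file on the command line.
                String source = System.getProperty("config.source", "file");
                return "db".equals(source) ? loadFromDb() : loadFromFile();
            }

            private static Properties loadFromDb()   { /* read from db */   return new Properties(); }
            private static Properties loadFromFile() { /* read from file */ return new Properties(); }

            public static String get(String key) { return PROPS.getProperty(key); }
        }

    The same idea works with an environment variable or an externalized config file; the point is that the choice lives outside the compiled class.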

    Read the article

  • QuestionOrAnswer model?

    - by Mark
    My site has Listings. Users can ask Questions about listings, and the author of the listing can respond with an Answer. However, the Answer might need clarification, so I've made them recursive (you can "answer" an answer). So how do I set up the database? The way I have it now looks like this (in Django-style models):

        class QuestionOrAnswer(Model):
            user = ForeignKey(User, related_name='questions')
            listing = ForeignKey(Listing, related_name='questions')
            parent = models.ForeignKey('self', null=True, blank=True, related_name='children')
            message = TextField()

    But what bugs me is that listing is now an attribute of the answers as well (it doesn't need to be). What happens if the database gets mangled and an answer belongs to a different listing than its parent question? That just doesn't make any sense. We can separate it with polymorphism:

        QuestionOrAnswer
            user
            message
            created
            updated

        Question(QuestionOrAnswer)
            shipment

        Answer(QuestionOrAnswer)
            parent = ForeignKey(QuestionOrAnswer)

    And that ought to work, but now every question and answer is split into two tables. Is it worth this overhead for clearly defined models?
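
    A hedged middle ground, sketched below: keep the single table, but have the model guard the invariant itself. Django's clean() hook (run by full_clean() and by model forms) is real; the rest mirrors the question's fields:

        from django.core.exceptions import ValidationError
        from django.db import models

        class QuestionOrAnswer(models.Model):
            user = models.ForeignKey(User, related_name='questions')
            listing = models.ForeignKey(Listing, related_name='questions')
            parent = models.ForeignKey('self', null=True, blank=True, related_name='children')
            message = models.TextField()

            def clean(self):
                # Reject an answer whose listing differs from its parent's.
                if self.parent is not None and self.parent.listing_id != self.listing_id:
                    raise ValidationError('Answer must belong to the same listing as its parent.')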

    Read the article

  • Help naming a class that has a single public method called Execute()

    - by devoured elysium
    I have designed the following class, which should work kind of like a method (usually the user will just run Execute()):

        public abstract class ??? {
            protected bool hasFailed = false;
            protected bool hasRun = false;

            public bool HasFailed { get { return hasFailed; } }
            public bool HasRun { get { return hasRun; } }

            private void Restart() {
                hasFailed = false;
                hasRun = false;
            }

            public bool Execute() {
                ExecuteImplementation();
                bool returnValue = hasFailed;
                Restart();
                return returnValue;
            }

            protected abstract void ExecuteImplementation();
        }

    My question is: how should I name this class? Runnable? Method (sounds awkward)?

    Read the article

  • "select * into table" Will it work for inserting data into existing table

    - by Shantanu Gupta
    I am trying to insert data from one of my existing tables into another existing table. Is it possible to insert data into an existing table using a select * into query? I think it can be done using UNION, but in that case I would need to copy all the data of my existing table into a temporary table, then drop that table, and finally apply the UNION to insert all records into the same table, e.g.:

        select * into #tblExisting from tblExisting
        drop table tblExisting
        select * into tblExisting from #tblExisting union select * from tblActualData

    Here tblExisting is the table where I actually want to store all the data, and tblActualData is the table whose data is to be appended to tblExisting. Is this the right method? Is there some other alternative?
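
    For reference, a minimal sketch of the usual alternative: SELECT ... INTO always creates a new table, while INSERT ... SELECT appends to an existing one, so no temporary table or UNION is needed. The column names are illustrative assumptions:

        -- Appends tblActualData's rows directly to the existing table.
        INSERT INTO tblExisting (Col1, Col2, Col3)
        SELECT Col1, Col2, Col3
        FROM tblActualData;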

    Read the article

  • Updating query results

    - by Francisco Garcia
    Within a DDD and CQRS context, a query result is displayed as table rows. Whenever new rows are inserted or deleted, their positions must be calculated by comparing the previous query result with the most recent one; this is needed to visualize new or deleted rows with an animation. My view model contains an array of the displayed query results, but I need a place to compare its contents against the latest query. Right now I consider my view model part of the application layer, yet the comparison of two query result sets seems like something that must be done within the domain layer. Which component should cache a query result, and which one should compare them? Are view models (and their cached contents) supposed to live in the application layer?

    Read the article

  • Minimize useless tweaking of a numeric app

    - by Potatoswatter
    I'm developing a numeric application (a nonlinear optimizer) with a zillion knobs to tweak, and the number keeps rising. It's not my first foray into this domain, but this time there are even more variables in the code and I'm on a tight schedule, so I don't want to waste time fiddling. Days or even months can potentially be wasted adjusting variables, recompiling, and reprocessing benchmark datasets. The resulting data is viewed and trouble spots are checked. The overall quality of the solution is reported by the program, but the meaning of that report could change over time. (Numeric units for the report are one thing I'm trying to nail down.) One main problem is organizing result files so that each can be identified with specific code changes. Note-taking can be a pain; is there software to help with this? Are there agreed best practices for making this kind of development cycle move forward reliably? The solver package converges to its optimal solution with mechanical determination, but I'm all too familiar with the way an excess of design decisions can mire development.

    Read the article

  • Prioritize compiler functionality/tasks when designing a new language

    - by Mahdi
    Well, this question is hard to ask and I expect a couple of down votes; however, I'm really interested in your ideas and recommendations. :) I've already written a very simple compiler with a few limited features. Now I'm working on making it more like a real-world compiler, and I definitely need to start over, because I have much more experience and many more ideas in this area than a few years ago. So, starting again from the very first step: which tasks/features of the new compiler should be implemented first, and which have lower priority than others? For example, I'd say the first thing to decide is the object-oriented structure of the new language, but you might say: hey, just build a compiler that can define a variable, and when you've finished that, start thinking about OOP design. I'd like to hear the pros and cons of your suggestions too. I actually prefer to work bottom-up, adding the simplest tasks first and more complex ones later, but I'm totally open to new ideas and really appreciate them. Also, please consider that I'm asking about design concepts. I expect answers like:

        Priority from highest to lowest:
        - variables, because ...
        - functions, because ...
        - loops, because ...

    Not: "define a syntax for your new language, and start parsing your source code ...".

    Read the article

  • Struggling with the Single Responsibility Principle

    - by AngryBird
    Consider this example: I have a website that allows users to make posts (which can be anything) and add tags describing each post. In the code, I have two classes that represent posts and tags; let's call them Post and Tag. Post takes care of creating, deleting, and updating posts; Tag takes care of creating, deleting, and updating tags. One operation is missing: the linking of tags to posts. I am struggling with who should perform this operation, since it could fit equally well in either class. On one hand, the Post class could have a function that takes a Tag as a parameter and stores it in a list of tags; on the other hand, the Tag class could have a function that takes a Post as a parameter and links the Tag to the Post. The above is just one example of my problem; I am actually running into this with multiple classes that are all similar, where the operation fits equally well in both. Short of putting the functionality in both classes, what conventions or design styles exist to help me solve this? I am assuming there has to be something better than just picking one. Or maybe putting it in both classes is the correct answer?
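
    One frequently suggested option, sketched here as an assumption rather than the answer: treat the link itself as a first-class concept, so neither Post nor Tag has to own the operation. All names below are illustrative (Java), and Post/Tag are assumed to expose an id() accessor:

        import java.util.HashSet;
        import java.util.Set;

        // The association is its own small domain concept.
        final class Tagging {
            // Value object identifying one post-tag link.
            record TagOnPost(long postId, long tagId) {}

            private final Set<TagOnPost> links = new HashSet<>();

            void link(Post post, Tag tag)   { links.add(new TagOnPost(post.id(), tag.id())); }
            void unlink(Post post, Tag tag) { links.remove(new TagOnPost(post.id(), tag.id())); }

            boolean isTagged(Post post, Tag tag) {
                return links.contains(new TagOnPost(post.id(), tag.id()));
            }
        }

    This mirrors how a relational schema would model it: the link lives in its own join table, not inside either entity.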

    Read the article

  • From a DDD perspective is a report generating service a domain service or an infrastructure service?

    - by Songo
    Let's assume we have the following service, whose responsibility is to generate Excel reports:

        class ExcelReportService {
            public String generateReport(String fileFormatFilePath, ResultSet data) {
                ReportFormat reportFormat = new ReportFormat(fileFormatFilePath);

                ExcelDataFormatterService excelDataFormatterService = new ExcelDataFormatterService();
                FormattedData formattedData = excelDataFormatterService.format(data);

                ExcelFileService excelFileService = new ExcelFileService();
                String reportPath = excelFileService.generateReport(reportFormat, formattedData);

                return reportPath;
            }
        }

    This is pseudocode for the service I want to design, where:

    - fileFormatFilePath: path to a configuration file in which I'll keep the format of my Excel file (headers, column widths, number of columns, etc.).
    - data: the actual records returned from the database. This data can't be used directly, because I might need to make further calculations on it before inserting it into the Excel file.
    - ReportFormat: value object holding the report format, with methods like getHeaders(), getColumnWidth(), etc.
    - ExcelDataFormatterService: a service holding any logic that needs to be applied to the data returned from the database before it is inserted into the file.
    - FormattedData: value object representing the formatted data to be inserted.
    - ExcelFileService: a wrapper on top of the 3rd-party library that generates the Excel file.

    Now, how do you determine whether a service is an infrastructure service or a domain service? I have three services here: ExcelReportService, ExcelDataFormatterService and ExcelFileService.

    Read the article

  • Implementing a ILogger interface to log data

    - by Jon
    I have a need to write data to a file in one of my classes. Obviously I will pass an interface into my class to decouple it. I'm thinking this interface will be used for testing and also in other projects. This is my interface:

        // This could be implemented by a filesystem, a web service, etc.
        public interface ILogger {
            List<string> PreviousLogRecords { get; set; }
            void Log(string data);
        }

        public interface IFileLogger : ILogger {
            string FilePath { get; set; }
            bool ValidFileName { get; }
        }

        public class MyClassUnderTest {
            public MyClassUnderTest(IFileLogger logger) { .... }
        }

        [Test]
        public void TestLogger() {
            var mock = new Mock<IFileLogger>();
            mock.Setup(x => x.Log(It.IsAny<string>())).AddsDataToList(); // Is this possible??
            var myClass = new MyClassUnderTest(mock.Object);
            myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3");
            Assert.AreEqual(3, mock.Object.PreviousLogRecords.Count);
        }

    I don't believe this will work as written, since nothing is storing the items. Is this possible using Moq? And what do you think of the design of the interface?
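
    For reference, a minimal sketch of how this is usually done with Moq's real API: Callback captures each logged string into a local list, and the assertion runs against that list rather than against the mock. DoSomethingThatWillSplitThisAndLog3Times is the question's own hypothetical method:

        using System.Collections.Generic;
        using Moq;
        using NUnit.Framework;

        [Test]
        public void TestLogger()
        {
            var logged = new List<string>();
            var mock = new Mock<IFileLogger>();

            // Callback runs on every Log(...) call and records the argument.
            mock.Setup(x => x.Log(It.IsAny<string>()))
                .Callback<string>(logged.Add);

            var myClass = new MyClassUnderTest(mock.Object);
            myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3");

            Assert.AreEqual(3, logged.Count);
        }

    With this approach the PreviousLogRecords property arguably doesn't belong on ILogger at all; the test can own the captured list.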

    Read the article

  • Handling Types for Real and Complex Matrices in a BLAS Wrapper

    - by mga
    I come from a C background and I'm now learning OOP with C++. As an exercise (so please don't just say "this already exists"), I want to implement a wrapper for BLAS that lets the user write matrix algebra in an intuitive way (similar to MATLAB), e.g.:

        A = B*C*D.Inverse() + E.Transpose();

    My problem is how to deal with real (R) and complex (C) matrices, given C++'s "curse" of letting you do the same thing in N different ways. I do have a clear idea of what it should look like to the user: s/he should be able to define the two separately, but operations would return a type depending on the types of the operands (R*R = R, C*C = C, R*C = C*R = C). Additionally, R can be cast into C and vice versa (just by setting the imaginary parts to 0). I have considered the following options:

    - As a real number is a special case of a complex number, inherit CMatrix from RMatrix. I quickly dismissed this, as the two would have to return different types from the same getter function.
    - Inherit RMatrix and CMatrix from Matrix. However, I can't really think of any common code that would go into Matrix (because of the different return types).
    - Templates. Declare Matrix<T>, declare the getter function as T Get(int i, int j) and the operator functions as Matrix operator*(Matrix RHS), then specialize Matrix<double> and Matrix<complex> and overload the functions. At that point I couldn't really see what I would gain from templates, so why not just define RMatrix and CMatrix separately from each other and overload functions as necessary?

    Although this last option makes sense to me, there's an annoying voice inside my head saying it is not elegant, because the two are clearly related. Perhaps I'm missing an appropriate design pattern? So I guess what I'm looking for is either absolution for doing this, or advice on how to do better.
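
    A hedged sketch of the template route with element-type promotion, which keeps one class template and lets the compiler compute R*C = C. std::common_type is standard (here it yields std::complex<double> because double converts implicitly to it); the class layout is an illustrative assumption and a real version would forward to BLAS rather than loop:

        #include <complex>
        #include <type_traits>
        #include <vector>

        template <typename T>
        class Matrix {
        public:
            Matrix(int rows, int cols) : rows_(rows), cols_(cols), data_(rows * cols) {}
            T Get(int i, int j) const { return data_[i * cols_ + j]; }
            void Set(int i, int j, T v) { data_[i * cols_ + j] = v; }
            int Rows() const { return rows_; }
            int Cols() const { return cols_; }
        private:
            int rows_, cols_;
            std::vector<T> data_;
        };

        // The result's element type is whatever A and B promote to, so
        // Matrix<double> * Matrix<std::complex<double>> yields a complex matrix.
        template <typename A, typename B>
        auto operator*(const Matrix<A>& lhs, const Matrix<B>& rhs)
            -> Matrix<std::common_type_t<A, B>> {
            Matrix<std::common_type_t<A, B>> out(lhs.Rows(), rhs.Cols());
            for (int i = 0; i < lhs.Rows(); ++i)
                for (int j = 0; j < rhs.Cols(); ++j) {
                    std::common_type_t<A, B> sum{};
                    for (int k = 0; k < lhs.Cols(); ++k)
                        sum += lhs.Get(i, k) * rhs.Get(k, j);
                    out.Set(i, j, sum);
                }
            return out;
        }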

    Read the article

  • Parameterized Django models

    - by mgibsonbr
    In principle, a single Django application can be reused in two or more projects, providing functionality relevant to both. That implies the same database structure (tables and relations) will be re-created identically in different databases, and most of the time this is not a problem (assuming the projects/databases are unrelated, for instance when someone downloads a complete app to use in their own projects). Sometimes, however, the models must be "tweaked" a little to better fit the problem's needs. This can be accomplished by forking the app, but I wondered whether there might be a better option in cases where the app designer can anticipate the most common customizations. For instance, if I have a model that could relate to another as one-to-one or one-to-many, I could make the unique property a parameter that can be specified in the project's settings:

        class This(models.Model):
            other = models.ForeignKey(Other, unique=settings.OTHER_TO_THIS)

    Or, if a model can relate to many others, I could create an intermediate table for each of them (thus enforcing referential integrity) instead of using generic FKs:

        for related in settings.MODELS_RELATED_TO_OTHER:
            model_name = '%s_Other' % related
            globals()[model_name] = type(model_name, (models.Model,), {
                'me': models.ForeignKey(find_model_class(related)),
                'other': models.ForeignKey(Other),
                # Some other properties all intersection tables must have
            })

    Etc. Let me stress that I'm not proposing to change the models at runtime or anything like that; once the parameters are defined and syncdb is called for the first time, they are not to be changed again (unless you're doing a schema migration). Is this a good design? Are there better ways to accomplish the same thing, or drawbacks I couldn't anticipate? This technique is meant to be used sparingly: only on apps meant to be reused in wildly different contexts, and only when a specific need for customization can be detected while the app model is being designed.

    Read the article

  • Best practice to collect information from child objects

    - by Markus
    I'm regularly seeing the following pattern:

        public abstract class BaseItem {
            BaseItem[] children;
            // ...

            public void DoSomethingWithStuff() {
                StuffCollection collection = new StuffCollection();
                foreach (BaseItem c in children)
                    c.AddRequiredStuff(collection);
                // do something with the collection ...
            }

            public abstract void AddRequiredStuff(StuffCollection collection);
        }

        public class ConcreteItem : BaseItem {
            // ...
            public override void AddRequiredStuff(StuffCollection collection) {
                Stuff stuff;
                // ...
                collection.Add(stuff);
            }
        }

    Where I would use something like this instead, for better information hiding:

        public abstract class BaseItem {
            BaseItem[] children;
            // ...

            public void DoSomethingWithStuff() {
                StuffCollection collection = new StuffCollection();
                foreach (BaseItem c in children)
                    collection.AddRange(c.RequiredStuff());
                // do something with the collection ...
            }

            public abstract StuffCollection RequiredStuff();
        }

        public class ConcreteItem : BaseItem {
            // ...
            public override StuffCollection RequiredStuff() {
                StuffCollection stuffCollection = new StuffCollection();
                Stuff stuff;
                // ...
                stuffCollection.Add(stuff);
                return stuffCollection;
            }
        }

    What are the pros and cons of each solution? For me, giving the child implementations access to the caller's collection is somehow disconcerting; on the other hand, initializing a new collection just to gather the items is useless overhead. Which is the better design, and how would it change if DoSomethingWithStuff weren't part of BaseItem but of a third class? (P.S. There might be missing semicolons or typos; sorry for that! The above code is not meant to be executed, just to illustrate.)

    Read the article

  • Classes as a compilation unit

    - by Yannbane
    If "compilation unit" is unclear, please refer to this. However, what I mean by it will be clear from the context. Edit: my language allows for multiple inheritance, unlike Java. I've started designing+developing my own programming language for educational, recreational, and potentially useful purposes. At first, I've decided to base it off Java. This implied that I would have all the code be written inside classes, and that code compiles to classes, which are loaded by the VM. However, I've excluded features such as interfaces and abstract classes, because I found no need for them. They seemed to be enforcing a paradigm, and I'd like my language not to do that. I wanted to keep the classes as the compilation unit though, because it seemed convenient to implement, familiar, and I just liked the idea. Then I noticed that I'm basically left with a glorified module system, where classes could be used either as "namespaces", providing constants and functions using the static directive, or as templates for objects that need to be instantiated ("actual" purpose of classes in other languages). Now I'm left wondering: what are the benefits of having classes as compilation units? (Also, any general commentary on my design would be much appreciated.)

    Read the article

  • DB object passing between classes: singleton, static, or other?

    - by Stephen
    So I'm designing a reporting system at work. It's my first project written in an OOP style, and I'm stuck on the design choice for the DB class. Obviously I only want to create one instance of the DB class per session/user and then pass it to each of the classes that need it. What I don't know is the best practice for implementing this. Currently I have code like the following:

        class db {
            private $user = 'USER';
            private $pass = 'PASS';
            private $tables = array('user', 'report', 'etc...');

            function __construct() {
                // SET UP CONNECTION AND TABLES
            }
        }

        class report {
            function __construct($params = array(), $db, $user) {
                // Error checking/handling trimmed
                // $db is the database object we created
                $this->db = $db;
                // $this->user is the user object for the logged-in user
                $this->user = $user;
                $this->reportCreate();
            }

            public function setPermission($permissionId = 1) {
                // Note the $this->db: is this the best-practice solution?
                $this->db->permission->find($permissionId);
                // Note the $this->user: is this the best-practice solution?
                $this->user->checkPermission(1);
                $data = array();
                $this->db->reportpermission->insert($data);
            }
        } // end report

    I've been reading about using static classes and have just come across singletons (though these appear to be passé already), so what's the current best practice for doing this?
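
    A minimal constructor-injection sketch, which is the usual answer over singletons or statics: build the one connection at bootstrap and hand it to whatever needs it. PDO is real PHP; the class layout, DSN, and $currentUser variable are illustrative assumptions:

        <?php
        // Bootstrap: one shared connection per request/session.
        $pdo = new PDO('mysql:host=localhost;dbname=reports', 'USER', 'PASS');

        class Report {
            private $db;
            private $user;

            // Dependencies arrive through the constructor, so tests can
            // pass a stub and no class reaches out to a global.
            public function __construct(PDO $db, $user) {
                $this->db = $db;
                $this->user = $user;
            }
        }

        $report = new Report($pdo, $currentUser);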

    Read the article

  • Tips for Making this Code Testable [migrated]

    - by Jesse Bunch
    So I'm writing an abstraction layer that wraps a telephony RESTful service for sending text messages and making phone calls. I should build this in such a way that the low-level provider, in this case Twilio, can be easily swapped without having to re-code the higher-level interactions. I'm using a package that is pre-built for Twilio, so I'm thinking I need to create a wrapper interface to standardize the interaction between the Twilio service package and my application. Let us pretend that I cannot modify this pre-built package. Here is what I have so far (in PHP):

        <?php

        namespace Telephony;

        class Provider_Twilio implements Provider_Interface
        {
            public function send_sms(Provider_Request_SMS $request)
            {
                if (!$request->is_valid())
                    throw new Provider_Exception_InvalidRequest();

                $sms = \Twilio\Twilio::request('SmsMessage');

                $response = $sms->create(array(
                    'To'   => $request->to,
                    'From' => $request->from,
                    'Body' => $request->body
                ));

                if ($this->_did_request_fail($response)) {
                    throw new Provider_Exception_RequestFailed($response->message);
                }

                $response = new Provider_Response_SMS(TRUE);
                return $response;
            }

            private function _did_request_fail($api_response)
            {
                return isset($api_response->status);
            }
        }

    So the idea is that I can write another file like this for any other telephony service, provided it implements Provider_Interface, making them swappable. Here are my questions: First, do you think this is a good design? How could it be improved? Second, I'm having a hard time testing this, because I need to mock out the Twilio package so that my tests don't depend on Twilio's API to pass or fail. Do you see any strategy for mocking this out? Thanks in advance for any advice!
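
    One hedged strategy for the mocking question: the static call \Twilio\Twilio::request(...) is what blocks testing, so wrap it in a thin injected gateway and stub that in tests. Everything named below (the gateway interface and its method) is an illustrative assumption, not part of the Twilio package:

        <?php

        namespace Telephony;

        // Thin seam around the static Twilio call.
        interface Provider_Gateway
        {
            public function create_message(array $params);
        }

        class Provider_Twilio_Gateway implements Provider_Gateway
        {
            public function create_message(array $params)
            {
                $sms = \Twilio\Twilio::request('SmsMessage');
                return $sms->create($params);
            }
        }

        class Provider_Twilio implements Provider_Interface
        {
            private $gateway;

            // Tests construct this with a stub gateway; production
            // code passes the real Provider_Twilio_Gateway.
            public function __construct(Provider_Gateway $gateway)
            {
                $this->gateway = $gateway;
            }

            public function send_sms(Provider_Request_SMS $request)
            {
                if (!$request->is_valid())
                    throw new Provider_Exception_InvalidRequest();

                $response = $this->gateway->create_message(array(
                    'To'   => $request->to,
                    'From' => $request->from,
                    'Body' => $request->body
                ));

                if (isset($response->status))
                    throw new Provider_Exception_RequestFailed($response->message);

                return new Provider_Response_SMS(TRUE);
            }
        }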

    Read the article

  • MySQL ALTER TABLE on very large table - is it safe to run it?

    - by Timothy Mifsud
    I have a MySQL database with one particular MyISAM table of over 4 million rows. I update this table about once a week, adding roughly 2000 new rows, and after updating I perform the following statement:

        ALTER TABLE x ORDER BY PK DESC

    i.e. I re-order the table by the primary key field, descending. This has not given me any problems on my development machine (Windows with 3GB of memory), and it ran successfully three times on the production Linux server (with 512MB of RAM), producing the sorted table in about 6 minutes each time. The last time I tried it, however, I had to kill the query after about 30 minutes and rebuild the database from a backup. I have started to wonder whether a 512MB server can cope with that statement on such a large table, as I have read that ALTER TABLE works on a temporary copy of the table. And if it can be run safely, how long should the alteration be expected to take? Thanks in advance, Tim

    Read the article

  • How to obtain a random sub-datatable from another data table

    - by developerit
    Introduction

    In this article, I'll show how to get a random subset of data from a DataTable. This is useful when you already have queries that are filtered correctly but return all the rows.

    Analysis

    I came across this situation when I wanted to display a random tag cloud. I already had the query to get the keywords ordered by number of clicks, and I wanted to create a tag cloud from it. The most popular tags should have a greater chance of being picked, and should be displayed larger than less popular ones.

    Implementation

    This code snippet contains everything you need. (Two small fixes from the original: Random.Next's upper bound is exclusive, so it must be dt.Rows.Count rather than dt.Rows.Count - 1 or the last row could never be picked; and the Randomize() call is dropped, since it seeds VB's legacy Rnd function, not the Random class used here.)

        ' Min size, in pixels, for a tag
        Private Const MIN_FONT_SIZE As Integer = 9

        ' Max size, in pixels, for a tag
        Private Const MAX_FONT_SIZE As Integer = 14

        ' Basic function that retrieves tags from a database
        Public Shared Function GetTags() As MediasTagsDataTable
            ' Simple call to the TableAdapter to get the tags, ordered by number of clicks
            Dim dt As MediasTagsDataTable = taMediasTags.GetDataValide

            ' If the query returned no result, return an empty DataTable
            If dt Is Nothing OrElse dt.Rows.Count < 1 Then
                Return New MediasTagsDataTable
            End If

            ' Set the font size of each group of data.
            ' We divide the results into subsets according to their number of clicks.
            ' Example: with 10 results, rows [0,2] get font size 9, [3,5] get 10, [6,8] get 11, ...
            ' This is the number of elements in one group:
            Dim groupLength As Integer = CType(Math.Floor(dt.Rows.Count / (MAX_FONT_SIZE - MIN_FONT_SIZE)), Integer)

            ' Counter of elements in the current group
            Dim counter As Integer = 0

            ' Counter of groups
            Dim groupCounter As Integer = 0

            ' Loop through the list
            For Each row As MediasTagsRow In dt
                ' Set the font size in a custom column
                row.c_FontSize = MIN_FONT_SIZE + groupCounter

                ' Increment the counter
                counter += 1

                ' If the current group is full, start a new group
                If groupLength <= counter Then
                    counter = 0
                    groupCounter += 1
                End If
            Next

            ' Return the new DataTable with font sizes
            Return dt
        End Function

        ' Function that generates the random subset
        Public Shared Function GetRandomSampleTags(ByVal KeyCount As Integer) As MediasTagsDataTable
            ' Get the data
            Dim dt As MediasTagsDataTable = GetTags()

            ' Create a new DataTable that will contain the random set
            Dim rep As MediasTagsDataTable = New MediasTagsDataTable

            ' Count the number of rows in the new DataTable
            Dim count As Integer = 0

            ' Random number generator
            Dim rand As New Random()

            While count < KeyCount
                ' Pick a random row (upper bound is exclusive)
                Dim r As Integer = rand.Next(0, dt.Rows.Count)
                Dim tmpRow As MediasTagsRow = dt(r)

                ' Import it into the new DataTable
                rep.ImportRow(tmpRow)

                ' Remove it from the old one, to be sure not to pick it again
                dt.Rows.RemoveAt(r)

                ' Increment the counter
                count += 1
            End While

            ' Return the new subset
            Return rep
        End Function

    Pros

    This method is good because it doesn't require much work to get running fast. It is a reasonable approach when you are working with small tables, say fewer than 100 records.

    Cons

    If you have many more records, memory pressure can become a problem, since we are copying and duplicating rows. In that case I would consider doing the sampling in a stored procedure instead.

    Read the article
