Search Results

Search found 1591 results on 64 pages for 'implementations'.

Page 56 of 64

  • J: Self-reference in bubble sort tacit implementation

    - by Yasir Arsanukaev
    Hello people! Since I'm a beginner in J I've decided to solve a simple task using this language, in particular implementing the bubble sort algorithm. I know it's not idiomatic to solve this kind of problem in functional languages, because it's naturally solved using array element transposition in imperative languages like C, rather than by constructing a modified list in declarative languages. However, this is the code I've written:

        (((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: #

    Let's apply it to an array:

        (((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: # 5 3 8 7 2
        2 3 5 7 8

    The thing that confuses me is $: referring to the statement within the outermost parentheses. Help says that $: denotes the longest verb that contains it. The other book (~300 KiB) says:

        3+4
        7
        5*20
        100

    Symbols like + and * for plus and times in the above phrases are called verbs and represent functions. You may have more than one verb in a J phrase, in which case it is constructed like a sentence in simple English by reading from left to right; that is, 4+6%2 means 4 added to whatever follows, namely 6 divided by 2. Let's rewrite my code snippet omitting the outermost ()s:

        ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) ^: # 5 3 8 7 2
        2 3 5 7 8

    Results are the same. I couldn't explain to myself why this works: why only ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) is treated as the longest verb for $:, and not the whole expression ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) ^: #, and not just (<./@(2&{.)), $:@((>./@(2&{.)),2&}.). If ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) is a verb, it should also form another verb after conjunction with #, i.e. one might treat the whole sentence (first snippet) as a verb. Probably there's some limit on verb length, bounded by one conjunction. Look at the following code (from here):

        factorial =: (* factorial@<:) ^: (1&<)
        factorial 4
        24

    factorial within the expression refers to the whole function, i.e. (* factorial@<:) ^: (1&<). Following this example I've used a function name instead of $::

        bubblesort =: (((<./@(2&{.)), bubblesort@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: #
        bubblesort 5 3 8 7 2
        2 3 5 7 8

    I expected bubblesort to refer to the whole function, but that doesn't seem to be true, since the result is correct. Also I'd like to see other implementations if you have any, even slightly refactored ones. Thanks.
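    In case it helps to see the same recursion in a scalar language, here is a rough Java sketch (all names are mine, not from any J source) of what the tacit verb computes: keep the smaller of the first two items, carry the larger into the recursive call on the remainder, and apply that pass once per element, mirroring the outer ^: #.

        import java.util.Arrays;

        public class BubblePass {
            // One pass: keep the smaller of the first two items, carry the larger
            // forward. Mirrors (<./@(2&{.)) , $:@((>./@(2&{.)) , 2&}.) under ^:(1<#).
            static int[] pass(int[] xs) {
                if (xs.length < 2) return xs;              // the ^:(1<#) guard
                int lo = Math.min(xs[0], xs[1]);
                int hi = Math.max(xs[0], xs[1]);
                int[] rest = new int[xs.length - 1];
                rest[0] = hi;
                System.arraycopy(xs, 2, rest, 1, xs.length - 2);
                int[] sortedRest = pass(rest);             // the $: recursion
                int[] out = new int[xs.length];
                out[0] = lo;
                System.arraycopy(sortedRest, 0, out, 1, sortedRest.length);
                return out;
            }

            static int[] bubbleSort(int[] xs) {
                // the outer ^: # applies the pass length-many times
                for (int i = 0; i < xs.length; i++) xs = pass(xs);
                return xs;
            }

            public static void main(String[] args) {
                System.out.println(Arrays.toString(bubbleSort(new int[]{5, 3, 8, 7, 2})));
                // prints [2, 3, 5, 7, 8]
            }
        }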


  • Convert JSON flattened for forms back to an object

    - by George Jempty
    I am required (please therefore no nit-picking the requirement, I've already nit-picked it, and this is the req) to convert certain form fields that have "object nesting" embedded in the field names back to the object(s) themselves. Below are some typical form field names:

        phones_0_patientPhoneTypeId
        phones_0_phone
        phones_1_patientPhoneTypeId
        phones_1_phone

    The form fields above were derived from an object such as the one toward the bottom (see "Data"), and that is the format of the object I need to reassemble. It can be assumed that any form field with a name that contains the underscore _ character needs to undergo this conversion, and that a segment of the form field between underscores, if numeric, signifies a Javascript array, otherwise an object. I found it easy to devise a (somewhat naive) implementation for the "flattening" of the original object for use by the form, but am struggling going in the other direction. One problem (perhaps the only one?) with my attempt below is that it does not currently properly account for array indexes, but this might be tricky because the object will subsequently be encoded as JSON, which will not account for sparse arrays. So if "phones_1" exists, but "phones_0" does not, I would nevertheless like to ensure that a slot exists for phones[0] even if that value is null. Implementations that tweak what I have begun, or are entirely different, are encouraged. If interested, let me know if you'd like to see my code for the "flattening" part that is working. Thanks in advance.

    Data:

        var obj = { phones: [{ "patientPhoneTypeId": 4, "phone": "8005551212" },
                             { "patientPhoneTypeId": 2, "phone": "8885551212" }]};

    Code to date:

        // 'values' maps submitted form field names to their values
        var unflattened = {};
        for (var prop in values) {
            if (prop.indexOf('_') > -1) {
                var lastUnderbarPos = prop.lastIndexOf('_');
                var nestedProp = prop.substr(lastUnderbarPos + 1);
                var nesting = prop.substr(0, lastUnderbarPos).split("_");
                var nestedRef, isArray, isObject;
                for (var i = 0, n = nesting.length; i < n; i++) {
                    if (i === 0) {
                        nestedRef = unflattened;
                    }
                    if (i < (n - 1)) { // not last
                        if (/^\d+$/.test(nesting[i + 1])) {
                            // numeric next segment: this level is an array
                            isArray = true;
                            isObject = false;
                        } else {
                            // non-numeric next segment: this level is an object
                            isArray = false;
                            isObject = true;
                        }
                        var currProp = nesting[i];
                        if (!nestedRef[currProp]) {
                            if (isArray) {
                                nestedRef[currProp] = [];
                            } else if (isObject) {
                                nestedRef[currProp] = {};
                            }
                        }
                        nestedRef = nestedRef[currProp];
                    } else {
                        nestedRef[nestedProp] = values[prop];
                    }
                }
            }
        }


  • NSStringWithFormat Swizzled to allow missing format numbered args

    - by coneybeare
    Based on this SO question asked a few hours ago, I have decided to implement a swizzled method that will allow me to take a formatted NSString as the format arg into stringWithFormat, and have it not break when omitting one of the numbered arg references (%1$@, %2$@). I have it working, but this is the first copy, and seeing as this method is going to be potentially called hundreds of thousands of times per app run, I need to bounce this off of some experts to see if this method has any red flags, major performance hits, or optimizations.

        #define NUMARGS(...) (sizeof((int[]){__VA_ARGS__})/sizeof(int))

        @implementation NSString (UAFormatOmissions)

        + (id)uaStringWithFormat:(NSString *)format, ... {
            if (format != nil) {
                va_list args;
                va_start(args, format);
                // $@ is an ordered variable (%1$@, %2$@...)
                if ([format rangeOfString:@"$@"].location == NSNotFound) {
                    // call Apple's method
                    NSString *s = [[[NSString alloc] initWithFormat:format arguments:args] autorelease];
                    va_end(args);
                    return s;
                }
                NSMutableArray *newArgs = (NSMutableArray *)[NSMutableArray arrayWithCapacity:NUMARGS(args)];
                id arg = nil;
                int i = 1;
                while (arg = va_arg(args, id)) {
                    NSString *f = (NSString *)[NSString stringWithFormat:@"%%%d$@", i];
                    i++;
                    if ([format rangeOfString:f].location == NSNotFound)
                        continue;
                    else
                        [newArgs addObject:arg];
                }
                va_end(args);
                char *newArgList = (char *)malloc(sizeof(id) * [newArgs count]);
                [newArgs getObjects:(id *)newArgList];
                NSString *result = [[[NSString alloc] initWithFormat:format arguments:newArgList] autorelease];
                free(newArgList);
                return result;
            }
            return nil;
        }

        @end

    The basic algorithm is:

        - search the format string for the %1$@, %2$@ variables by searching for $@
        - if not found, call the normal stringWithFormat and return
        - else, loop over the args: if the format has a position variable (%i$@) for position i, add the arg to the new arg array; else, don't add the arg
        - take the new arg array, convert it back into a va_list, and call initWithFormat:arguments: to get the correct string

    The idea is that I would run all [NSString stringWithFormat:] calls through this method instead. This might seem unnecessary to many, but click on to the referenced SO question (first line) to see examples of why I need to do this. Ideas? Thoughts? Better implementations? Better solutions?
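    As a point of comparison only: Java's String.format supports the same numbered references (%1$s, %2$s, ...), so the filter-and-renumber idea can be sketched there without any va_list tricks. The names below are invented, it only handles %n$s markers, and it is not a drop-in for the Objective-C above:

        import java.util.ArrayList;
        import java.util.List;

        public class FormatOmissions {
            // Keep only the arguments whose positional marker (%1$s, %2$s, ...)
            // appears in the format string, renumbering the format so it
            // matches the surviving arguments.
            static String format(String fmt, Object... args) {
                List<Object> kept = new ArrayList<>();
                for (int i = 1; i <= args.length; i++) {
                    String marker = "%" + i + "$s";
                    if (fmt.contains(marker)) {
                        kept.add(args[i - 1]);
                        // renumber to the marker's new (compacted) position
                        fmt = fmt.replace(marker, "%" + kept.size() + "$s");
                    }
                }
                return String.format(fmt, kept.toArray());
            }

            public static void main(String[] args) {
                // the second argument is omitted from the format; nothing breaks
                System.out.println(format("Hello %1$s, meet %3$s", "Alice", "Bob", "Carol"));
                // prints: Hello Alice, meet Carol
            }
        }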



  • Applying Domain Model on top of Linq2Sql entities

    - by Thomas
    I am trying to practice the model-first approach and I am putting together a domain model. My requirement is pretty simple: a UserSession can have multiple ShoppingCartItems. I should start off by saying that I am going to apply the domain model interfaces to Linq2Sql-generated entities (using partial classes). My requirement translates into three database tables (UserSession, Product, ShoppingCartItem, where ProductId and UserSessionId are foreign keys in the ShoppingCartItem table). Linq2Sql generates these entities for me. I know I shouldn't even be dealing with the database at this point, but I think it is important to mention. The aggregate root is UserSession, as a ShoppingCartItem cannot exist without a UserSession, but I am unclear on the rest. What about Product? It is definitely an entity, but should it be associated to ShoppingCartItem? Here are a few suggestions (they might all be incorrect implementations):

        public interface IUserSession {
            Guid Id { get; set; }
            IList<IShoppingCartItem> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItem {
            Guid UserSessionId { get; set; }
            int ProductId { get; set; }
        }

    Another one would be:

        public interface IUserSession {
            Guid Id { get; set; }
            IList<IShoppingCartItem> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItem {
            Guid UserSessionId { get; set; }
            IProduct Product { get; set; }
        }

    A third one is:

        public interface IUserSession {
            Guid Id { get; set; }
            IList<IShoppingCartItemCollection> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItemCollection {
            IUserSession UserSession { get; set; }
            IProduct Product { get; set; }
        }

        public interface IProduct {
            int ProductId { get; set; }
        }

    I have a feeling my mind is too tightly coupled with database models and tables, which is making this hard to grasp. Anyone care to decouple?
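    By way of illustration only (a Java sketch with invented names, not the Linq2Sql code in question), one common DDD-style cut is to let the root own its items and have items reference the Product aggregate by identity rather than by object:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.UUID;

        // UserSession is the aggregate root: items live and die with it,
        // and only the root may add them, so items need no back-reference.
        class UserSession {
            private final UUID id = UUID.randomUUID();
            private final List<ShoppingCartItem> items = new ArrayList<>();

            public void addToCart(int productId, int quantity) {
                items.add(new ShoppingCartItem(productId, quantity));
            }

            public List<ShoppingCartItem> getItems() {
                return Collections.unmodifiableList(items);
            }

            public UUID getId() { return id; }
        }

        // Product is its own aggregate, so the item holds only its identity.
        class ShoppingCartItem {
            private final int productId;
            private final int quantity; // invented field, purely illustrative

            ShoppingCartItem(int productId, int quantity) {
                this.productId = productId;
                this.quantity = quantity;
            }

            public int getProductId() { return productId; }
            public int getQuantity() { return quantity; }
        }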


  • Multiple views and source list in a Core Data app

    - by Ellie P.
    I'm working on my first major Cocoa app for an undergraduate research project. The application is document-based and uses Core Data. One of the entities is an abstract entity, Page. Page is the parent of several types of pages: e.g. PageWithHeaderAndFooter, PageWithTwoColumns, BasicPage, etc. Page has attributes, such as title and author, that all pages have in common. Each specific type of page has a certain number of layout blocks (PageWithHeaderAndFooter has three: header, footer, body; BasicPage has one: body; etc.). Additionally, all Page subclasses define layout-specific implementations of certain methods. The other relevant entity is Style, which defines the visual look of a Page. (Think of Pages as HTML and Style as CSS.)

    I would like my app to have an iTunes/Mail-like source list with sections. (One section would be Pages, the other would be Styles.) I have a pretty good idea how to do the sectioned source list (this was a great help). However, after hours of headbanging and fruitless googling, here's what I can't figure out: Pages and Styles are listed in the source list, and when you select one of them, all of the relevant fields for that object appear at the right (mostly NSTextViews, pop-up menus, etc.). I laid that out and did all of the bindings in Interface Builder. The problem is, if my source list contains different types of pages, how do I get a different view to display at the right depending on the type of page selected?

    For example, if a BasicPage is selected, I want just what you see above: the general page stuff and one NSTextView that corresponds to the one field, body, of BasicPage. But if I select a PageWithHeaderAndFooter, I want to display the general page stuff plus three NSTextViews (one each for header, body, and footer). If I have a Style selected, I want to display various pop-up menus, color wells, etc. For the pages at least, we're only talking about one or more NSTextViews, each of which corresponds to a String attribute of the respective entity. How would you do this? Thank you for your help!


  • Structuring Win32 GUI code

    - by kraf
    I wish to improve my code and file structure in larger Win32 projects with plenty of windows and controls. Currently, I tend to have one header and one source file for the entire implementation of a window or dialog. This works fine for small projects, but now it has come to the point where these implementations are starting to reach 1000-2000 lines, which is tedious to browse. A typical source file of mine looks like this:

        static LRESULT CALLBACK on_create(const HWND hwnd, WPARAM wp, LPARAM lp)
        {
            setup_menu(hwnd);
            setup_list(hwnd);
            setup_context_menu(hwnd);
            /* clip */
            return 0;
        }

        static LRESULT CALLBACK on_notify(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
        {
            const NMHDR* header = (const NMHDR*)lp;

            /* At this point I feel that the control's event handlers doesn't
             * necessarily belong in the same source file. Perhaps I could move
             * each control's creation code and event handlers into a separate
             * source file? Good practice or cause of confusion? */
            switch (header->idFrom)
            {
            case IDC_WINDOW_LIST:
                switch (header->code)
                {
                case NM_RCLICK:
                    return on_window_list_right_click(hwnd, wp, lp);
                /* clip */
                }
            }
        }

        static LRESULT CALLBACK wndmain_proc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
        {
            switch (msg)
            {
            case WM_CREATE: return on_create(hwnd, wp, lp);
            case WM_CLOSE:  return on_close(hwnd, wp, lp);
            case WM_NOTIFY: return on_notify(hwnd, msg, wp, lp);
            /* It doesn't matter much how the window proc looks as it just forwards
             * events to the appropriate handler. */
            /* clip */
            default: return DefWindowProc(hwnd, msg, wp, lp);
            }
        }

    But now as the window has a lot more controls, and these controls in turn have their own message handlers, and then there's the menu click handlers, and so on... I'm getting lost, and I really need advice on how to structure this mess up in a good and sensible way. I have tried to find good open source examples of structuring Win32 code, but I just get more confused since there are hundreds of files, and within each of these files that seem GUI related, the Win32 GUI code seems so far encapsulated away. And when I finally find a CreateWindowEx statement, the window proc is nowhere to be found. Any advice on how to structure all the code while remaining sane would be greatly appreciated. Thanks! I don't wish to use any libraries or frameworks as I find the Win32 API interesting and valuable for learning. Any insight into how you structure your own GUI code could perhaps serve as inspiration.


  • Binding CoreData Managed Object to NSTextFieldCell subclass

    - by ndg
    I have an NSTableView which has its first column set to contain a custom NSTextFieldCell. My custom NSTextFieldCell needs to allow the user to edit a "desc" property within my Managed Object, but also to display an "info" string that it contains (which is not editable). To achieve this, I followed this tutorial. In a nutshell, the tutorial suggests editing your Managed Object's generated subclass to create and pass a dictionary of its contents to your NSTableColumn via bindings. This works well for read-only NSCell implementations, but I'm looking to subclass NSTextFieldCell to allow the user to edit the "desc" property of my Managed Object. To do this, I followed one of the article's comments, which suggests subclassing NSFormatter to explicitly state which Managed Object property you would like the NSTextFieldCell to edit. Here's the suggested implementation:

        @implementation TRTableDescFormatter

        - (BOOL)getObjectValue:(id *)anObject forString:(NSString *)string errorDescription:(NSString **)error
        {
            if (anObject != nil) {
                *anObject = [NSDictionary dictionaryWithObject:string forKey:@"desc"];
                return YES;
            }
            return NO;
        }

        - (NSString *)stringForObjectValue:(id)anObject
        {
            if (![anObject isKindOfClass:[NSDictionary class]]) return nil;
            return [anObject valueForKey:@"desc"];
        }

        - (NSAttributedString *)attributedStringForObjectValue:(id)anObject withDefaultAttributes:(NSDictionary *)attrs
        {
            if (![anObject isKindOfClass:[NSDictionary class]]) return nil;
            NSAttributedString *anAttributedString = [[NSAttributedString alloc]
                initWithString:[anObject valueForKey:@"desc"]];
            return anAttributedString;
        }

        @end

    I assign the NSFormatter subclass to my cell in my NSTextFieldCell subclass, like so:

        - (void)awakeFromNib
        {
            TRTableDescFormatter *formatter = [[[TRTableDescFormatter alloc] init] autorelease];
            [self setFormatter:formatter];
        }

    This seems to work, but falls down when editing multiple rows. The behaviour I'm seeing is that editing a row will work as expected until you try to edit another row. Upon editing another row, all previously edited rows will have their "desc" value set to the value of the currently selected row. I've been doing a lot of reading on this subject and would really like to get to the bottom of it. What's more frustrating is that my NSTextFieldCell is rendering exactly how I would like it to. This editing issue is my last obstacle! If anyone can help, that would be greatly appreciated.


  • DriverManager always returns my custom driver regardless of the connection URL

    - by JGB146
    I am writing a driver to act as a wrapper around two separate MySQL connections (to distributed databases). Basically, the goal is to enable interaction with my driver for all applications, instead of requiring each application to sort out which database holds the desired data. Most of the code for this is in place, but I'm having a problem: when I attempt to create connections via the MySQL Driver, the DriverManager is returning an instance of my driver instead of the MySQL Driver. I'd appreciate any tips on what could be causing this and what could be done to fix it! Below are a few relevant snippets of code. I can provide more, but there's a lot, so I'd need to know what else you want to see. First, from MyDriver.java:

        public MyDriver() throws SQLException {
            DriverManager.registerDriver(this);
        }

        public Connection connect(String url, Properties info) throws SQLException {
            try {
                return new MyConnection(info);
            } catch (Exception e) {
                return null;
            }
        }

        public boolean acceptsURL(String url) throws SQLException {
            if (url.contains("jdbc:jgb://")) {
                return true;
            }
            return false;
        }

    It is my understanding that this acceptsURL function will dictate whether or not the DriverManager deems my driver a suitable fit for a given URL. Hence it should only be passing connections from my driver if the URL contains "jdbc:jgb://", right? Here's code from MyConnection.java:

        Connection c1 = null;
        Connection c2 = null;

        /**
         * Constructors
         */
        public DDBSConnection(Properties info) throws SQLException, Exception {
            info.list(System.out); // included for testing
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            String url1 = "jdbc:mysql://server1.com/jgb";
            String url2 = "jdbc:mysql://server2.com/jgb";
            this.c1 = DriverManager.getConnection(
                url1, info.getProperty("username"), info.getProperty("password"));
            this.c2 = DriverManager.getConnection(
                url2, info.getProperty("username"), info.getProperty("password"));
        }

    And this tells me two things. First, the info.list() call confirms that the correct user and password are being sent. Second, because we enter an infinite loop, we see that the DriverManager is providing new instances of my connection as matches for the MySQL URLs instead of the desired MySQL driver/connection. FWIW, I have separately tested implementations that go straight to the MySQL driver using this exact syntax (albeit only one at a time), and was able to successfully interact with each database individually from a test application outside of my driver.
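    A plausible culprit, offered as a sketch rather than a verified fix: DriverManager.getConnection does not consult acceptsURL at all. It simply calls connect(url, info) on each registered driver in turn and uses the first non-null result, and the java.sql.Driver contract asks a driver to return null for URLs it does not own. The connect above returns a MyConnection for every URL (and its catch-all also hides real failures), so it claims the jdbc:mysql:// URLs too. Guarding it would look roughly like:

        public Connection connect(String url, Properties info) throws SQLException {
            // Bow out for foreign URLs so DriverManager moves on to the
            // real MySQL driver, as the java.sql.Driver contract requires.
            if (!acceptsURL(url)) {
                return null;
            }
            return new MyConnection(info); // the wrapper connection from the question
        }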


  • In a DDD approach, is this example modelled correctly?

    - by Tag
    Just created an acc on SO to ask this :) Assume this simplified example: building a web application to manage projects... The application has the following requirements/rules:

        1) Users should be able to create projects by inserting the project name.
        2) Project names cannot be empty.
        3) Two projects can't have the same name.

    I'm using a 4-layered architecture (User Interface, Application, Domain, Infrastructure). On my Application Layer I have the following ProjectService.cs class:

        public class ProjectService {
            private IProjectRepository ProjectRepo { get; set; }

            public ProjectService(IProjectRepository projectRepo) {
                ProjectRepo = projectRepo;
            }

            public void CreateNewProject(string name) {
                IList<Project> projects = ProjectRepo.GetProjectsByName(name);
                if (projects.Count > 0)
                    throw new Exception("Project name already exists.");

                Project project = new Project(name);
                ProjectRepo.InsertProject(project);
            }
        }

    On my Domain Layer, I have the Project.cs class and the IProjectRepository.cs interface:

        public class Project {
            public int ProjectID { get; private set; }
            public string Name { get; private set; }

            public Project(string name) {
                ValidateName(name);
                Name = name;
            }

            private void ValidateName(string name) {
                if (name == null || name.Equals(string.Empty)) {
                    throw new Exception("Project name cannot be empty or null.");
                }
            }
        }

        public interface IProjectRepository {
            void InsertProject(Project project);
            IList<Project> GetProjectsByName(string projectName);
        }

    On my Infrastructure Layer, I have the implementation of IProjectRepository, which does the actual querying (the code is irrelevant). I don't like two things about this design:

    1) I've read that the repository interfaces should be a part of the domain but the implementations should not. That makes no sense to me, since I think the domain shouldn't call the repository methods (persistence ignorance); that should be a responsibility of the services in the application layer. (Something tells me I'm terribly wrong.)

    2) The process of creating a new project involves two validations (not null and not duplicate). In my design above, those two validations are scattered in two different places, making it harder (IMHO) to see what's going on.

    So, my question is, from a DDD perspective, is this modelled correctly, or would you do it in a different way?
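    One frequently suggested arrangement for concern 2), sketched here in Java with invented names (an illustration, not an authoritative answer): keep the not-empty rule inside the entity, and move the uniqueness rule, which needs the repository, into a small domain service that the application service composes.

        import java.util.List;

        interface ProjectRepository {
            List<Project> findByName(String name);
            void insert(Project project);
        }

        class Project {
            private final String name;

            Project(String name) {
                // the entity guards its own invariant
                if (name == null || name.isEmpty())
                    throw new IllegalArgumentException("Project name cannot be empty or null.");
                this.name = name;
            }
        }

        // Uniqueness needs the repository, so it lives in a domain service
        // rather than inside Project; the application service just calls
        // ensureNameIsFree(name) before constructing and inserting a Project.
        class ProjectUniquenessChecker {
            private final ProjectRepository repo;

            ProjectUniquenessChecker(ProjectRepository repo) { this.repo = repo; }

            void ensureNameIsFree(String name) {
                if (!repo.findByName(name).isEmpty())
                    throw new IllegalStateException("Project name already exists: " + name);
            }
        }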


  • bmp image header doubts

    - by vikramtheone
    Hi guys, I'm doing a project where I have to make use of the pixel information of a bmp image. So, I'm gathering the image information by reading the header information of the input .bmp image. I'm quite successful with everything, but one thing bothers me; can anyone here clarify it? The header information of my .bmp image is as follows (my test image is very tiny and grayscale):

        BMP File header
            File size             1210
            Offset information    1078

        BMP Information header
            Image Header Size     40
            Image Size            132
            Image width           9
            Image height          11
            Image bits_p_p        8

    So, from the .bmp header I see that the image size is 132 (bytes), but when I multiply the width and height it is only 99. How is such a thing possible? I'm confident about the 132 bytes because when I subtract the Offset value from the File Size value, I get 132 (1210 - 1078 = 132), and also when I manually count the number of bytes (in a hex editor) from the point 1078 or 436h (end of the offset field), there are exactly 132 bytes of pixel information. So, why is there a disparity between the size field and (width x height)? My future implementations are dependent on the image width and height information and not on the image size information, so I have to understand thoroughly what's going on here. My understanding of the header is clearly wrong somewhere... I guess!!! Help!!!

    Regards,
    Vikram

    My bmp structures are as follows:

        typedef struct bmpfile_magic {
            short magic;
        } BMP_MAGIC_NUMBER;

        typedef struct bmpfile_header {
            uint32_t filesz;
            uint16_t creator1;
            uint16_t creator2;
            uint32_t bmp_offset;
        } BMP_FILE_HEADER;

        typedef struct {
            uint32_t header_sz;
            uint32_t width;
            uint32_t height;
            uint16_t nplanes;
            uint16_t bitspp;
            uint32_t compress_type;
            uint32_t bmp_bytesz;
            uint32_t hres;
            uint32_t vres;
            uint32_t ncolors;
            uint32_t nimpcolors;
        } BMP_INFO_HEADER;
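    For what it's worth, the numbers above are consistent with the standard BMP rule that each pixel row is stored padded out to a multiple of 4 bytes; a quick check of that reading, in Java for brevity:

        public class BmpStride {
            public static void main(String[] args) {
                int width = 9, height = 11, bitsPerPixel = 8;
                // each stored row is rounded up to the next multiple of 4 bytes
                int rowSize = ((width * bitsPerPixel + 31) / 32) * 4;
                System.out.println(rowSize);          // 12 (9 pixel bytes + 3 padding bytes)
                System.out.println(rowSize * height); // 132, matching the Image Size field
            }
        }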


  • MVVM Binding Orthogonal Aspects in Views e.g. Application Settings

    - by chibacity
    I have an application which I am developing using WPF\Prism\MVVM. All is going well and I have some pleasing MVVM implementations. However, in some of my views I would like to be able to bind application settings, e.g. when a user reloads an application, the checkbox for auto-scrolling a grid should be checked in the state it was last time the user used the application. My view needs to bind to something that holds the "auto-scroll" setting state. I could put this on the view-model, but application settings are orthogonal to the purpose of the view-model. The "auto-scroll" setting is controlling an aspect of the view. This setting is just an example; there will be quite a number of them, and splattering my view-models with properties to represent application settings (so I can bind them) feels decidedly yucky. One view-model per view seems to be de rigueur... What is the best\usual practice here?

        - Splatter my view-models with application settings?
        - Have multiple view-models per view so settings can be represented in their own right?
        - Split views so that controls can bind to an ApplicationSettingsViewModel? (= too many views?)
        - Something else?

    Edit 1: To add a little more context, I am developing a UI with a tabbed interface. Each tab will host a single widget, and there are a variety of widgets. Each widget is a Prism composition of individual views. Some views are common amongst widgets, e.g. a file picker view. Whilst each widget is composed of several views, as a whole, conceptually a widget has a single set of user settings, e.g. last file selected, auto-scroll enabled, etc. These need to be persisted and retrieved\applied when the application starts again and the widget views are created. My question is focused on the fact that conceptually a widget has a single set of user settings, which is at right angles to the fact that a widget consists of many views. Each view in the widget has its own view-model (which works nicely and logically), but if I stick to one view-model per view, I would have to splatter each view-model with the user settings appropriate to it. This doesn't sound right?!?


  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema. My first version just created a single table per entry schema, with each entry spanning a single column or multiple columns (for complex types) of the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed over several databases. I'm not quite happy with this solution though; the only positive thing is the simplicity... I can only store a fixed number of columns, I need to create indexes on all columns, and I need to recreate the table on schema changes. Some of my key design criteria are:

        - Very fast querying (using a simple domain-specific query language)
        - Writes don't have to be fast
        - Many concurrent users
        - Schemas will change often
        - Schemas might contain many thousands of columns
        - The data entries might be distributed and need synchronization
        - Preferably MySQL and SQLite; databases like DB2 and Oracle are out of the question
        - Using .Net/Mono

    I've been thinking of a couple of possible designs, but none of them seems like a good choice.

    Solution 1: A union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space.

    Solution 2: A key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course, I hate having to convert everything to strings.

    Solution 3: Use an XML database or store values as XML. Without any experience, I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database, as other parts of the application fit better into a relational model, and being able to join the data is helpful.

    I cannot help thinking that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research is doing something like this for their questionnaires, but there are few open source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course... I don't need 99% of the provided functionality, but a lot of stuff not included. I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask. Thanks in advance!


  • Class hierarchy problem (with generic's variance!)

    - by devoured elysium
    The problem:

        class StatesChain : IState, IHasStatesList {
            private TasksChain _taskChain = new TasksChain();
            ...
            public IList<IState> States {
                get { return _taskChain.Tasks; }
            }

            IList<ITask> IHasTasksList.Tasks {
                get { return _taskChain.Tasks; } // <-- ERROR! You can't do this in C#!
                                                 // I want to return an IList<ITask> from an IList<IState>.
            }
        }

    Assuming the IList returned will be read-only, I know that what I'm trying to achieve is safe (or is it not?). Is there any way I can accomplish what I'm trying? I wouldn't want to implement the TasksChain algorithm myself (again!), as it would be error-prone and would lead to code duplication. Maybe I could just define an abstract Chain and then implement both TasksChain and StatesChain from there? Or maybe implement a Chain<T> class? How would you approach this situation?

    The details: I have defined an ITask interface:

        public interface ITask {
            bool Run();
            ITask FailureTask { get; }
        }

    and an IState interface that inherits from ITask:

        public interface IState : ITask {
            IState FailureState { get; }
        }

    I have also defined an IHasTasksList interface:

        interface IHasTasksList {
            IList<ITask> Tasks { get; }
        }

    and an IHasStatesList:

        interface IHasStatesList {
            IList<IState> States { get; }
        }

    Now, I have defined a TasksChain, a class that has some logic to manipulate a chain of tasks (beware that TasksChain is itself a kind of ITask!):

        class TasksChain : ITask, IHasTasksList {
            IList<ITask> _tasks = new List<ITask>();
            ...
            public IList<ITask> Tasks {
                get { return _tasks; }
            }
            ...
        }

    I am implementing a State the following way:

        public class State : IState {
            private readonly TasksChain _taskChain = new TasksChain();

            public State(Precondition precondition, Execution execution) {
                _taskChain.Tasks.Add(precondition);
                _taskChain.Tasks.Add(execution);
            }

            public bool Run() {
                return _taskChain.Run();
            }

            public IState FailureState {
                get { return (IState)_taskChain.Tasks[0].FailureTask; }
            }

            ITask ITask.FailureTask {
                get { return FailureState; }
            }
        }

    which, as you can see, makes use of explicit interface implementations to "hide" the FailureTask property and show the FailureState property instead. The problem comes from the fact that I also want to define a StatesChain that inherits both from IState and IHasStatesList (and that also implies ITask and IHasTasksList, implemented as explicit interfaces), and I want it to also hide IHasTasksList's Tasks and only show IHasStatesList's States. (What is contained in "The problem" section should really come after this, but I thought putting it first would be more reader-friendly.) (pff... long text) Thanks!
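    The question is about C#, but as a side-by-side illustration, Java's bounded wildcards express exactly this kind of read-only covariance; the sketch below (names are mine) shows why handing out the derived-type list for reading only is type-safe. (In C# 4 the analogous move is exposing the covariant IEnumerable<ITask> rather than IList<ITask>.)

        import java.util.ArrayList;
        import java.util.List;

        interface Task { boolean run(); }

        interface State extends Task { State failureState(); }

        public class Covariance {
            // A read-only covariant view: a List<State> may be handed out as
            // List<? extends Task> because callers can read Tasks from it but
            // the compiler rejects all writes, so no Task can sneak into the
            // underlying list of States.
            static List<? extends Task> tasksView(List<State> states) {
                return states;
            }

            public static void main(String[] args) {
                List<State> states = new ArrayList<>();
                List<? extends Task> tasks = tasksView(states);
                Task t = tasks.isEmpty() ? null : tasks.get(0); // reading is fine
                // tasks.add(t);  // would not compile: writes are rejected
            }
        }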


  • Constructor versus setter injection

    - by Chris
    Hi, I'm currently designing an API where I wish to allow configuration via a variety of methods. One method is via an XML configuration schema, and another is through an API that I wish to play nicely with Spring. My XML-schema parsing code was previously hidden, and therefore the only concern was for it to work, but now I wish to build a public API and I'm quite concerned about best practice. It seems that many favor javabean-type POJOs with default zero-parameter constructors and then setter injection. The problem I am trying to tackle is that some setter-method implementations depend on other setter methods being called before them in sequence. I could write anal setters that will tolerate being called in many orders, but that will not solve the problem of a user forgetting to set the appropriate setter and therefore the bean being in an incomplete state. The only solution I can think of is to forget about the objects being 'beans' and enforce the required parameters via constructor injection. An example of this is the default setting of the id of a component based on the id of the parent component.

    My interface:

        public interface IMyIdentityInterface {
            public String getId();

            /* A null value should create a unique meaningful default */
            public void setId(String id);

            public IMyIdentityInterface getParent();

            public void setParent(IMyIdentityInterface parent);
        }

    Base implementation of the interface:

        public abstract class MyIdentityBaseClass implements IMyIdentityInterface {
            private String _id;
            private IMyIdentityInterface _parent;

            public MyIdentityBaseClass() {}

            @Override
            public String getId() {
                return _id;
            }

            /**
             * If the id is null, then use the id of the parent component
             * appended with a lower-cased simple name of the current impl
             * class along with a counter suffix to enforce uniqueness
             */
            @Override
            public void setId(String id) {
                if (id == null) {
                    IMyIdentityInterface parent = getParent();
                    if (parent == null) {
                        // this may be the top-level component or it may be that
                        // the user called setId() before setParent(..)
                    } else {
                        _id = Helpers.makeIdFromParent(parent, getClass());
                    }
                } else {
                    _id = id;
                }
            }

            @Override
            public IMyIdentityInterface getParent() {
                return _parent;
            }

            @Override
            public void setParent(IMyIdentityInterface parent) {
                _parent = parent;
            }
        }

    Every component in the framework will have a parent except for the top-level component. Using the setter type of injection, the setters will have different behavior based on the order in which they are called. In this case, would you agree that a constructor taking a reference to the parent is better, and that the parent setter method should be dropped from the interface entirely? Is it considered bad practice if I wish to be able to configure these components using an IoC container?

    Chris
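    A sketch of the constructor-injection variant being weighed, with setParent dropped from the interface as proposed (Helpers.makeIdFromParent is the question's own helper; computing the default lazily in getId is my addition, labeled as such): taking the parent up front and deferring the id computation removes the call-order dependency altogether.

        public abstract class MyIdentityBaseClass implements IMyIdentityInterface {
            private final IMyIdentityInterface _parent;
            private String _id;

            // The parent is required for computing a default id, so it becomes
            // a constructor argument; only the root component passes null.
            protected MyIdentityBaseClass(IMyIdentityInterface parent) {
                _parent = parent;
            }

            @Override
            public String getId() {
                // Lazy default (my assumption): no setter ordering to get wrong,
                // since the id is only derived when first asked for.
                if (_id == null) {
                    _id = (_parent == null)
                            ? getClass().getSimpleName().toLowerCase()        // hypothetical top-level default
                            : Helpers.makeIdFromParent(_parent, getClass());  // helper from the question
                }
                return _id;
            }

            @Override
            public void setId(String id) {
                _id = id; // explicit ids are still allowed
            }

            @Override
            public IMyIdentityInterface getParent() {
                return _parent;
            }
        }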


  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an HTML input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of Wikipedia page titles. For this service, obviously speed is of utmost importance, as responsiveness of the web page is important to the user experience.

    The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It is currently running up against the VM heap-space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution, as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations?

    Edits:

    @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself.

    @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data.

    @Jason Day: My original implementation of this was to use a Trie, but the memory bloat with that was actually worse than the sorted set, due to needing a large number of object references. I'll read up on ternary search trees to see if they could be of use.
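    On the memory point, one low-tech option, offered as a sketch with invented names rather than a recommendation from the thread: a flat sorted String[] keeps the log(n) prefix lookup of the sorted set while shedding most per-entry object overhead, since there are no tree or set nodes at all.

        import java.util.Arrays;

        public class PrefixCompleter {
            private final String[] terms; // sorted, deduplicated

            PrefixCompleter(String[] sortedTerms) {
                this.terms = sortedTerms;
            }

            // Binary-search for the prefix's insertion point, then scan
            // forward while entries still start with the prefix.
            String[] complete(String prefix, int max) {
                int i = Arrays.binarySearch(terms, prefix);
                if (i < 0) i = -i - 1; // negative result encodes the insertion point
                int end = i;
                while (end < terms.length && end - i < max && terms[end].startsWith(prefix)) {
                    end++;
                }
                return Arrays.copyOfRange(terms, i, end);
            }

            public static void main(String[] args) {
                PrefixCompleter c = new PrefixCompleter(new String[]{
                    "jeff atwood", "microsoft", "stackoverflow.com"});
                System.out.println(Arrays.toString(c.complete("micro", 10)));
                // prints [microsoft]
            }
        }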


  • Parsing language for both binary and character files

    - by Thorsten S.
    The problem: you have some data and your program needs specified input, for example strings which are numbers. You are searching for a way to transform the original data into the format you need. And the problem is: the source can be anything. It can be XML, property lists, or binary which contains the needed data deeply embedded in binary junk. And your output format may vary as well: it can be number strings, floats, doubles...

    You don't want to program. You want routines which give you commands capable of transforming the data into the form you wish. Surely it contains regular expressions, but it is very well designed and it offers capabilities which are sometimes much easier to use and more powerful. Something like a super-grep which you can access (!) as program routines, not only as a tool. It allows:

        - joining/grouping/merging of results
        - inserting/deleting/finding/replacing
        - writing macros which allow a command chain to be executed repeatedly
        - meta-grouping (lists-tables-hypertables)

    Example (no, I am not looking for a solution to this, it is just an example): You want to read XML strings embedded in a binary file with variable-length records. Your tool reads the record length and deletes the junk surrounding your text. Now it splits open the XML and extracts the strings. The strings being Indian number glyphs and containing decimal commas instead of decimal points, your tool transforms them into ASCII and replaces commas with points. Now the results must be stored into matrices of variable length... etc. etc.

    I am searching for a good language / language design and, if possible, an implementation. Which design do you like, or even, if it does not fulfill the conditions, wouldn't you want to miss?

    EDIT: The question is whether a solution for the problem exists and, if yes, which implementations are available. You DO NOT implement your own sorting algorithm if Quicksort, Mergesort and Heapsort are available. You DO NOT invent your own text-parsing method if you have regular expressions. You DO NOT invent your own 3D language for graphics if OpenGL/Direct3D is available. There are existing solutions, or at least papers describing the problem and giving suggestions. And there are people who may have worked on and experienced such problems and who can give ideas and suggestions. The idea that this problem is totally new and I should work it out and implement it myself without background knowledge seems to me, I must admit, totally off the mark.


  • Hide a base class method from derived class, but still visible outside of assembly

    - by clintp
    This is a question about tidiness. The project is already working, I'm satisfied with the design, but I have a couple of loose ends that I'd like to tie up. My project has a plugin architecture. The main body of the program dispatches work to the plugins, which each reside in their own AppDomain. The plugins are described with an interface, which is used by the main program (to get the signature for invoking DispatchTaskToPlugin) and by the plugins themselves as an API contract:

        namespace AppServer.Plugin.Common
        {
            public interface IAppServerPlugin
            {
                void Register();
                void DispatchTaskToPlugin(Task t);
                // Other methods omitted
            }
        }

    In the main body of the program, Register() is called so that the plugin can register its callback methods with the base class, and then later DispatchTaskToPlugin() is called to get the plugin running. The plugins themselves are in two parts. There's a base class that implements the framework for the plugin (setup, housekeeping, teardown, etc.). This is where DispatchTaskToPlugin is actually defined:

        namespace AppServer.Plugin
        {
            abstract public class BasePlugin : MarshalByRefObject, AppServer.Plugin.Common.IAppServerPlugin
            {
                public void DispatchTaskToPlugin(Task t)
                {
                    // ...
                    // Eventual call to actual plugin code
                    // ...
                }
                // Other methods omitted
            }
        }

    The actual plugins themselves only need to implement a Register() method (to give the base class the delegates to call eventually) and then their business logic:

        namespace AppServer.Plugin
        {
            public class Plugin : BasePlugin
            {
                override public void Register()
                {
                    // Calls a method in the base class to register itself.
                }
                // Various callback methods, business logic, etc...
            }
        }

    Now, in the base class (BasePlugin) I've implemented all kinds of convenience methods, collected data, etc. for the plugins to use. Everything's kosher except for that lingering method DispatchTaskToPlugin(). It's not supposed to be callable from the Plugin class implementations; they have no use for it. It's only needed by the dispatcher in the main body of the program. How can I prevent the derived classes (Plugin) from seeing the method in the base class (BasePlugin/DispatchTaskToPlugin) but still have it visible from outside of the assembly? I can split hairs and have DispatchTaskToPlugin() throw an exception if it's called from the derived classes, but that's closing the barn door a little late. I'd like to keep it out of Intellisense or possibly have the compiler take care of this for me. Suggestions?


  • Interoperability between two AES algorithms

    - by lpfavreau
    Hello, I'm new to cryptography and I'm building some test applications to try to understand the basics of it. I'm not trying to build the algorithms from scratch, but I'm trying to make two different AES-256 implementations talk to each other. I've got a database that was populated with this Javascript implementation, stored in Base64. Now, I'm trying to get an Objective-C method to decrypt its content, but I'm a little lost as to where the differences in the implementations are. I'm able to encrypt/decrypt in Javascript and I'm able to encrypt/decrypt in Cocoa, but I cannot make a string encrypted in Javascript decrypt in Cocoa or vice versa. I'm guessing it's related to the initialization vector, nonce, counter mode of operation, or all of these, which, quite frankly, don't speak to me at the moment. Here's what I'm using in Objective-C, adapted mainly from this and this:

        @implementation NSString (Crypto)

        - (NSString *)encryptAES256:(NSString *)key {
            NSData *input = [self dataUsingEncoding:NSUTF8StringEncoding];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:TRUE];
            return [Base64 encode:output];
        }

        - (NSString *)decryptAES256:(NSString *)key {
            NSData *input = [Base64 decode:self];
            NSData *output = [NSString cryptoAES256:input key:key doEncrypt:FALSE];
            return [[[NSString alloc] initWithData:output encoding:NSUTF8StringEncoding] autorelease];
        }

        + (NSData *)cryptoAES256:(NSData *)input key:(NSString *)key doEncrypt:(BOOL)doEncrypt {
            // 'key' should be 32 bytes for AES256, will be null-padded otherwise
            char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused)
            bzero(keyPtr, sizeof(keyPtr));     // fill with zeroes (for padding)

            // fetch key data
            [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];

            NSUInteger dataLength = [input length];

            // See the doc: For block ciphers, the output size will always be less than or
            // equal to the input size plus the size of one block.
            // That's why we need to add the size of one block here
            size_t bufferSize = dataLength + kCCBlockSizeAES128;
            void *buffer = malloc(bufferSize);

            size_t numBytesCrypted = 0;
            CCCryptorStatus cryptStatus = CCCrypt(doEncrypt ? kCCEncrypt : kCCDecrypt,
                                                  kCCAlgorithmAES128,
                                                  kCCOptionECBMode | kCCOptionPKCS7Padding,
                                                  keyPtr,
                                                  kCCKeySizeAES256,
                                                  nil,          // initialization vector (optional)
                                                  [input bytes],
                                                  dataLength,   // input
                                                  buffer,
                                                  bufferSize,   // output
                                                  &numBytesCrypted);

            if (cryptStatus == kCCSuccess) {
                // the returned NSData takes ownership of the buffer and will free it on deallocation
                return [NSData dataWithBytesNoCopy:buffer length:numBytesCrypted];
            }

            free(buffer); // free the buffer
            return nil;
        }

        @end

    Of course, the input is Base64-decoded beforehand. I see that each encryption with the same key and same content in Javascript gives a different encrypted string, which is not the case with the Objective-C implementation, which always gives the same encrypted string. I've read the answers of this post and it makes me believe I'm right about something along the lines of vector initialization, but I'd need your help to pinpoint what's going on exactly. Thank you!
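    A hedged illustration of the moving parts that must match on both sides (algorithm, mode, key derivation, IV/nonce), not a verified fix for this particular pair of libraries: the Objective-C code above uses ECB (kCCOptionECBMode) with no IV, so any counterpart using CTR or CBC with a random IV will produce incompatible ciphertext, and a fresh random IV is also exactly what makes equal plaintexts encrypt differently on each run. In Java terms:

        import javax.crypto.Cipher;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.SecretKeySpec;
        import java.security.SecureRandom;
        import java.util.Base64;

        public class AesCtrDemo {
            public static void main(String[] args) throws Exception {
                byte[] key = new byte[32]; // AES-256 key; both sides must derive it identically
                byte[] iv = new byte[16];  // nonce/counter block; must travel with the ciphertext
                new SecureRandom().nextBytes(iv);

                Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
                c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
                byte[] ct = c.doFinal("hello".getBytes("UTF-8"));

                // the random IV is why the same plaintext yields a different
                // ciphertext on every run, as observed on the JavaScript side
                System.out.println(Base64.getEncoder().encodeToString(iv) + ":" +
                                   Base64.getEncoder().encodeToString(ct));
            }
        }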


  • translating specifications into query predicates

    - by Jeroen
    I'm trying to find a nice and elegant way to query database content based on DDD "specifications". In domain-driven design, a specification is used to check whether some object, also known as the candidate, complies with a (domain-specific) requirement. For example, the specification 'IsTaskDone' goes like:

        class IsTaskDone extends Specification<Task> {
            boolean isSatisfiedBy(Task candidate) {
                return candidate.isDone();
            }
        }

    The above specification can be used for many purposes, e.g. it can be used to validate whether a task has been completed, or to filter all completed tasks from a collection. However, I want to re-use this nice, domain-related specification to query a database. Of course, the easiest solution would be to retrieve all entities of our desired type from the database and filter that list in memory, looping and removing non-matching entities. But clearly that would not be optimal for performance, especially when the entity count in our db increases.

    Proposal: So my idea is to create a 'ConversionManager' that translates my specification into a persistence-technique-specific criteria; think of the JPA Predicate class. The service looks as follows:

        public interface JpaSpecificationConversionManager {

            <T> Predicate getPredicateFor(Specification<T> specification, Root<T> root,
                                          CriteriaQuery<?> cq, CriteriaBuilder cb);

            JpaSpecificationConversionManager registerConverter(JpaSpecificationConverter<?, ?> converter);

        }

    Using our manager, users can register their own conversion logic, isolating the domain-related specification from persistence-specific logic. To minimize the configuration of our manager, I want to use annotations on my converter classes, allowing the manager to automatically register those converters. JPA repository implementations could then use my manager, via dependency injection, to offer a find-by-specification method. Providing find-by-specification should drastically reduce the number of methods on our repository interface. In theory, this all sounds decent, but I feel like I'm missing something critical. What do you guys think of my proposal? Does it comply with the DDD way of thinking? Or is there already a framework that does something identical to what I just described?
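    To make the proposal concrete, a sketch of what one registered converter might look like. The JpaSpecificationConverter interface is only named in the question, so its single method here (convert) and the "done" attribute of Task are my assumptions:

        import javax.persistence.criteria.CriteriaBuilder;
        import javax.persistence.criteria.CriteriaQuery;
        import javax.persistence.criteria.Predicate;
        import javax.persistence.criteria.Root;

        // Translates the domain-level IsTaskDone specification into a JPA
        // predicate over a presumed boolean "done" attribute, keeping all
        // JPA types out of the domain layer.
        public class IsTaskDoneConverter implements JpaSpecificationConverter<IsTaskDone, Task> {

            @Override
            public Predicate convert(IsTaskDone spec, Root<Task> root,
                                     CriteriaQuery<?> cq, CriteriaBuilder cb) {
                return cb.equal(root.get("done"), Boolean.TRUE);
            }
        }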


  • polymorphism, inheritance in c# - base class calling overridden method?

    - by Andrew Johns
    This code doesn't work, but hopefully you'll get what I'm trying to achieve here. I've got a Money class, which I've taken from http://www.noticeablydifferent.com/CodeSamples/Money.aspx and extended a little to include currency conversion. The implementation of the actual conversion rate could be different in each project, so I decided to move the actual method for retrieving a conversion rate (GetCurrencyConversionRate) into a derived class, but the ConvertTo method contains code that would work for any implementation, assuming the derived class has overridden GetCurrencyConversionRate, so it made sense to me to keep it in the parent class. So what I'm trying to do is get an instance of SubMoney and be able to call the .ConvertTo() method, which would in turn use the overridden GetCurrencyConversionRate and return a new instance of SubMoney.

    The problem is, I'm not really understanding some concepts of polymorphism and inheritance yet, so I'm not quite sure what I'm trying to do is even possible in the way I think it is. What is currently happening is that I end up with an Exception, because the base GetCurrencyConversionRate method is used instead of the derived one. Something tells me I need to move the ConvertTo method down to the derived class, but this seems like I'll be duplicating code in multiple implementations, so surely there's a better way?

        public class Money
        {
            public decimal CurrencyConversionRate
            {
                get { return GetCurrencyConversionRate(_regionInfo.ISOCurrencySymbol); }
            }

            public static decimal GetCurrencyConversionRate(string isoCurrencySymbol)
            {
                throw new Exception("Must override this method if you wish to use it.");
            }

            public Money ConvertTo(string cultureName)
            {
                // convert to base USD first by dividing the current amount by its exchange rate.
                Money someMoney = this;
                decimal conversionRate = this.CurrencyConversionRate;
                decimal convertedUSDAmount = Money.Divide(someMoney, conversionRate).Amount;

                // now convert to the new currency
                CultureInfo cultureInfo = new CultureInfo(cultureName);
                RegionInfo regionInfo = new RegionInfo(cultureInfo.LCID);
                conversionRate = GetCurrencyConversionRate(regionInfo.ISOCurrencySymbol);
                decimal convertedAmount = convertedUSDAmount * conversionRate;

                Money convertedMoney = new Money(convertedAmount, cultureName);
                return convertedMoney;
            }
        }

        public class SubMoney : Money
        {
            public SubMoney(decimal amount, string cultureName) : base(amount, cultureName) {}

            public static new decimal GetCurrencyConversionRate(string isoCurrencySymbol)
            {
                // This would get the conversion rate from some web or database source
                decimal result = new Decimal(2);
                return result;
            }
        }
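    The core issue is the same in Java as in C#: static methods are bound at compile time and never take part in overriding, so the base class's static GetCurrencyConversionRate is always the one called. A minimal Java sketch of the usual cure (names are mine), making the rate lookup an abstract instance method so the call dispatches on the runtime type while the shared conversion logic stays in the base class:

        import java.math.BigDecimal;
        import java.math.MathContext;

        public abstract class Money {
            // Abstract instance method: dispatched on the runtime type,
            // which a static method never is.
            protected abstract BigDecimal conversionRate(String isoCurrencySymbol);

            // Shared template logic stays in the base class and picks up
            // whichever conversionRate the concrete subclass provides.
            public BigDecimal toUsd(BigDecimal amount, String isoCurrencySymbol) {
                return amount.divide(conversionRate(isoCurrencySymbol), MathContext.DECIMAL64);
            }
        }

        class SubMoney extends Money {
            @Override
            protected BigDecimal conversionRate(String isoCurrencySymbol) {
                return BigDecimal.valueOf(2); // e.g. fetched from a web or database source
            }
        }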


  • markdown to HTML with customised WMD editor

    - by spirytus
    For my application I customized slightly the way WMD behaves, so when a user enters empty lines, these are reflected in the HTML output as <br />'s. Now I've come to the point when I should store it somewhere on the backend, and after going through SO posts for a while I'm not sure what the best way to do it is. I have a few options, and if you could point out their pros/cons, that would be much appreciated.

    1) Send to the server and store as markdown rather than HTML. To me the obvious advantage would be keeping exactly the same formatting as the user originally entered. But then how can I convert it back to HTML for display to a client? It seems very troublesome to convert it on the client side, as even if it were possible, what would happen if JS were disabled? If I wanted to do it on the server, then standard server-side implementations of markup-to-HTML might be resource-expensive. Would that be an issue in your opinion? Even if it wouldn't be, as I mentioned my WMD implementation is customised, so those server-side solutions probably wouldn't do the right conversion anyway, and there would always be a risk that something would convert wrongly.

    2) Send to the server as converted HTML. Same as above: conversion on the client side would be difficult, and on the server side the same, with the possibility of getting it wrong.

    3) Send the original markdown and the converted HTML, and store both. No performance issues related to converting markdown to HTML on either the client side or the server side. Users would always have the same markdown they originally entered and the same HTML they originally saw in the preview (possibly sanitized in PHP, though). It would take twice as much storage space though, and that is my biggest worry.

    I tend to lean towards the 3rd solution, as it seems simplest, but there is the worry of the doubled storage space needed for it. Please bear in mind that my implementation of WMD is slightly modified, and also that I'm going with a PHP/MySQL server-side implementation. So apart from the 3 options I listed above, are there any other possible solutions to my problem? Did I miss anything important that would make one of the options above better than the rest? And what other pros/cons would apply to each solution I listed?

    Also, how is it implemented on SO? I read somewhere that they're using option 3, and so if it's good enough for SO it would be good enough for me :) but I'm not sure if that's true, so how is it done? Also please forgive me, but at least for once I've got to say that StackOverflow IS THE BEST DAMN RESOURCE ON THE WEB, and I truly appreciate all the people trying to help others here! The site and the users here are simply amazing!


  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following:

        1. You can implement "coroutines" as threads/thread pools behind the scenes.
        2. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible.
        3. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation.
        4. There are various JNI-based approaches to coroutines also possible.

    I'll address each one's deficiencies in turn.

    Thread-based coroutines. This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages.

    JVM bytecode manipulation. This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work), with the advantage that you have only one architecture to worry about and get right. It also ties you down to running your code only on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs.

    The Da Vinci Machine. The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use this to make cool experiments but not for any code I expect to release to the real world. This also has a problem similar to the JVM bytecode manipulation solution above: it won't work on alternative stacks (like Android's).

    JNI implementation. This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing, and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely, but this, too, makes the point of doing things in Java entirely moot.

    So... Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?


  • Solving a cyclical dependency in Ninject (Compact Framework)

    - by Alex
    I'm trying to use Ninject for dependency injection in my MVP application. However, I have a problem, because I have two types that depend on each other, thus creating a cyclic dependency. At first, I understood why it was a problem, because I had both types require each other in their constructors. Therefore, I moved one of the dependencies to a property injection instead, but I'm still getting the error message. What am I doing wrong? This is the presenter:

        public class LoginPresenter : Presenter<ILoginView>, ILoginPresenter
        {
            public LoginPresenter( ILoginView view ) : base( view ) { }
        }

    and this is the view:

        public partial class LoginForm : Form, ILoginView
        {
            [Inject]
            public ILoginPresenter Presenter { private get; set; }

            public LoginForm()
            {
                InitializeComponent();
            }
        }

    And here's the code that causes the exception:

        static class Program
        {
            /// <summary>
            /// The main entry point for the application.
            /// </summary>
            [MTAThread]
            static void Main()
            {
                // Show the login form
                Views.LoginForm loginForm = Kernel.Get<Views.Interfaces.ILoginView>() as Views.LoginForm;
                Application.Run( loginForm );
            }
        }

    The exception happens on the line with the Kernel.Get<>() call. Here it is:

        Error activating ILoginPresenter using binding from ILoginPresenter to LoginPresenter
        A cyclical dependency was detected between the constructors of two services.

        Activation path:
          4) Injection of dependency ILoginPresenter into property Presenter of type LoginForm
          3) Injection of dependency ILoginView into parameter view of constructor of type LoginPresenter
          2) Injection of dependency ILoginPresenter into property Presenter of type LoginForm
          1) Request for ILoginView

        Suggestions:
          1) Ensure that you have not declared a dependency for ILoginPresenter on any implementations of the service.
          2) Consider combining the services into a single one to remove the cycle.
          3) Use property injection instead of constructor injection, and implement IInitializable if you need
             initialization logic to be run after property values have been injected.

    Why doesn't Ninject understand that since one is constructor injection and the other is property injection, this can work just fine? I even read somewhere, while looking for the solution to this problem, that Ninject supposedly gets this right as long as the cyclic dependency isn't in both constructors. Apparently not, though. Any help resolving this would be much appreciated.


