Search Results

Search found 40870 results on 1635 pages for 'database design'.


  • How to fluent-map this (using fluent nhibernate)?

    - by vikasde
    I have two tables in my database, "Styles" and "BannedStyles", related via the ItemNo column. Styles can be banned per store, so if style X is banned at store Y, it may well not be banned at store Z, or vice versa. What is the best way to map this to a single entity? Should I be mapping this to a single entity at all? My Style entity looks like this:

        public class Style
        {
            public virtual int ItemNo { get; set; }
            public virtual string SKU { get; set; }
            public virtual string StyleName { get; set; }
            public virtual string Description { get; set; }
            public virtual Store Store { get; set; }
            public virtual bool IsEntireStyleBanned { get; set; }
        }
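
    A minimal Fluent NHibernate mapping for the entity above might look like the sketch below. The column names (StoreId in particular) are assumptions, and the per-store ban flag is modelled as a read-only formula property, which is only one option among several (a separate BannedStyle entity mapped to BannedStyles would be another):

        using FluentNHibernate.Mapping;

        public class StyleMap : ClassMap<Style>
        {
            public StyleMap()
            {
                Table("Styles");
                Id(x => x.ItemNo).Column("ItemNo");
                Map(x => x.SKU);
                Map(x => x.StyleName);
                Map(x => x.Description);
                References(x => x.Store).Column("StoreId");
                // Hypothetical formula: recomputes the per-store ban from the
                // BannedStyles table each time a Style row is loaded.
                Map(x => x.IsEntireStyleBanned)
                    .Formula("(CASE WHEN EXISTS (SELECT 1 FROM BannedStyles b WHERE b.ItemNo = ItemNo AND b.StoreId = StoreId) THEN 1 ELSE 0 END)")
                    .ReadOnly();
            }
        }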

    Read the article

  • Java library class to handle scheduled execution of "callbacks"?

    - by Hanno Fietz
    My program has a component, dubbed the Scheduler, that lets other components register points in time at which they want to be called back. This should work much like the Unix cron service, i.e. you tell the Scheduler "notify me at ten minutes past every full hour". I realize there are no real callbacks in Java. Here's my approach; is there a library which already does this stuff? Feel free to suggest improvements, too. A Register call to the Scheduler passes:

    - a time specification containing hour, minute, second, year, month, day of month and day of week, where each item may be unspecified, meaning "execute it every hour / minute etc." (just like crontabs)
    - an object containing data that will tell the calling object what to do when it is notified by the Scheduler. The Scheduler does not process this data; it just stores it and passes it back upon notification.
    - a reference to the calling object

    Upon startup, or after a new registration request, the Scheduler starts with a Calendar object of the current system time and checks whether any entries in the database match this point in time. If there are, they are executed and the process starts over. If there aren't, the time in the Calendar object is incremented by one second and the entries are rechecked. This repeats until one or more entries match (discrete event simulation). The Scheduler then remembers that timestamp and sleeps, waking every second to check whether that time has arrived. If it wakes up and the time has already passed, it starts over; likewise once the time has come and the jobs have been executed. Edit: Thanks for pointing me to Quartz. I'm looking for something much smaller, however.

    Read the article

  • Primary key/foreign Key naming convention

    - by Jeremy
    In our dev group we have a raging debate regarding the naming convention for primary and foreign keys. There are basically two schools of thought in our group:

    1) Primary table (Employee): primary key is called ID. Foreign table (Event): foreign key is called EmployeeID.
    2) Primary table (Employee): primary key is called EmployeeID. Foreign table (Event): foreign key is called EmployeeID.

    I prefer not to duplicate the name of the table in any of the columns (so I prefer option 1 above). Conceptually, it is consistent with a lot of the recommended practices in other languages, where you don't use the name of the object in its property names. I think that naming the foreign key EmployeeID (or Employee_ID might be better) tells the reader that it is the ID column of the Employee table. Some others prefer option 2, where you prefix the primary key with the table name so that the column name is the same throughout the database. I see that point, but you now cannot visually distinguish a primary key from a foreign key. Also, I think it's redundant to have the table name in the column name, because if you think of the table as an entity and a column as a property or attribute of that entity, it is the ID attribute of the Employee, not the EmployeeID attribute of an Employee. I don't go and ask my coworker what his PersonAge or PersonGender is. I ask him what his Age is. So like I said, it's a raging debate and we go on and on about it. I'm interested in getting some new perspective.

    Read the article

  • Decentralized synchronized secure data storage

    - by Alberich
    Introduction: I am going to ask a question which seems utopian to me, but I need to know if there is a way to achieve what I need, and if not, why not.

    The idea: Suppose I have a database structure in MySQL. I want to create a solution that allows anyone, no matter who or where, to have a synchronized copy (an updated clone) of this database, with its content. And it is not going to be just one synchronized copy; it could (and should) be replicated many times (say, ten copies all over the world). Most importantly, it must be secure. By secure I mean that only real, accepted transactions will be synchronized to all the other database copies/clones, no matter how many there are. Note: since it would be quite difficult to synchronize in real time, I will design everything so that this feature is dispensable; it is not required.

    My suggestion: this is how I am thinking of managing it. Time identifiers and update checking: every action (insert, update, delete, ...) will be stored as the action instruction itself, associated with a time identifier. (Rather than a DATETIME field, I think an INT holding the number of milliseconds since, for example, 1 January 2013 would be better.) Each copy then asks a "neighbour copy" for new actions performed since the last update, and executes them after checking that they are allowed. Problem 1: the neighbour copy could be outdated too. Solution 1: do not ask just one neighbour; build a random list of some of the copies/clones and ask them for news. (I could skip the list and ask ALL the clones for updates, but this becomes inefficient as the number of clones grows.) Problem 2: real-time global synchronization is not active. What if someone at CLONE_ENTERPRISING inserts a row into TABLE, the row propagates to every clone, someone at CLONE_FIXEMALL deletes it, and at the same time, on an outdated clone, someone at CLONE_DROPOUT edits that row (now nonexistent on the other clones)? Solution 2: force a GLOBAL synchronization before doing any action that depends on third-party data (an edit, for example). This global synchronization would be unnecessary for an INSERT, for instance. Note: someone could have some fun and make the same insert in two clones; since they are not updated in real time, the row would exist twice. But this is the same as with a single database: where needed, we check whether the same row already exists before committing the final action. Not a problem. Problem 3: it is possible to edit the code so that actions are not filtered, so someone could spread instructions to delete everything, or just engage in some trolling. This is not a problem either, since good clones will always exist somewhere; those that go bad will simply no longer be of interest. I know this is not the perfect solution, and it possibly has hundreds of holes, but it is my starting point. I will appreciate anything you can teach me. Thanks a lot. PS: it could be that what I am trying to do already exists and has its own name; sorry for asking in that case (I'd be grateful for that name, if it exists).
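
    A tiny sketch of the action-log idea in C# (purely illustrative: the names, the in-memory log and the allowed-check delegate are all assumptions, and actually executing the stored instruction is omitted):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // One logged action: the instruction itself plus the time identifier
        // (milliseconds since an agreed epoch, as suggested above).
        public record LoggedAction(long TimestampMs, string Instruction);

        public class CloneNode
        {
            private readonly List<LoggedAction> log = new();
            private long lastSyncMs;

            // Ask one neighbour for everything newer than our last sync point,
            // keeping only actions that pass the "real-accepted" check.
            public void PullFrom(CloneNode neighbour, Func<LoggedAction, bool> isAllowed)
            {
                var news = neighbour.log
                    .Where(a => a.TimestampMs > lastSyncMs)
                    .OrderBy(a => a.TimestampMs);
                foreach (var action in news)
                {
                    if (isAllowed(action))
                        log.Add(action); // executing the instruction is omitted here
                    lastSyncMs = Math.Max(lastSyncMs, action.TimestampMs);
                }
            }
        }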

    Read the article

  • Is there any way to provide a custom factory for entity creation in EF4?

    - by ILICH
    There are a lot of posts about how cool POCO objects are and how Entity Framework 4 supports them. I decided to try it out with a domain-driven-design oriented architecture and ended up with domain entities that have dependencies on services. So far so good. Imagine my Products are POCO objects. When I query for objects like this:

        NorthwindContext db = new NorthwindContext();
        var products = db.Products.ToList();

    EF creates the product instances for me. Now I want to inject dependencies into my POCO objects (products). The only way I see is to add a method to NorthwindContext that does something like the pseudo-code below:

        public List<Product> GetProducts()
        {
            var products = database.Products.ToList();
            container.BuildUp(products); // inject dependencies
            return products;
        }

    But what if I want to make my repository more flexible, like this:

        public ObjectSet<Product> GetProducts() { ... }

    So I really need a factory to make it lazier and LINQ-friendly. Please help!
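
    For the "lazy and LINQ-friendly" requirement, one hook worth knowing about: in EF4 the ObjectContext raises an ObjectMaterialized event for every entity it creates, whether from a direct query or deferred loading, so dependencies can be injected without wrapping each repository method. A sketch, with IContainer standing in for whatever IoC container is used:

        using System.Data.Objects;

        // Hypothetical container abstraction; BuildUp injects dependencies
        // into an already-constructed instance (as in Unity, for example).
        public interface IContainer
        {
            void BuildUp(object instance);
        }

        public class NorthwindContext : ObjectContext
        {
            public NorthwindContext(IContainer container)
                : base("name=NorthwindContext")
            {
                // Fires once per materialized entity, including lazily loaded
                // ones, so db.Products stays queryable as-is. Product is the
                // POCO entity type from the question.
                ObjectMaterialized += (sender, e) =>
                {
                    if (e.Entity is Product)
                        container.BuildUp(e.Entity);
                };
            }
        }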

    Read the article

  • Guid Primary/Foreign Key dilemma in SQL Server

    - by Xience
    Hi guys, I am faced with the dilemma of changing my primary keys from int identities to Guids. I'll put my problem straight up. It's a typical retail management app, with POS and back office functionality, and about 100 tables. The database synchronizes with other databases and receives/sends new data. Most tables don't have frequent inserts, updates or select statements executing on them. However, some do have frequent inserts and selects on them, e.g. the products and orders tables. Some tables have up to 4 foreign keys in them. If I changed my primary keys from int to Guid, would there be a performance issue when inserting or querying data from tables that have many foreign keys? I know people have said that indexes will be fragmented and that 16 bytes per key is an issue. Space wouldn't be a problem in my case, and apparently index fragmentation can be taken care of using the NEWSEQUENTIALID() function. Can someone tell me, from their experience, whether Guids will be problematic in tables with many foreign keys? I'll be much appreciative of your thoughts on it...

    Read the article

  • What are the responsibilities of the data layer?

    - by alimac83
    I'm working on a project where I had to add a data layer to my application. I've always thought that the data layer is purely responsible for CRUD functions, i.e. it shouldn't really contain any logic but should simply retrieve data for the business layer to manipulate. However, I'm a little confused with my project because I'm not sure whether I've structured my app correctly for this scenario. Basically, I'm trying to retrieve a list of products from the database that fall within a certain pricing threshold. At the moment I have a function in my data layer that basically returns all products where price > min threshold and price < max threshold. But it got me thinking that maybe this is incorrect. Should the data layer simply return a list of ALL products and then the business logic do the filtering? I'm pretty confused over whether the data layer should simply provide methods that allow the business layer to get raw data, or whether it should be responsible for getting filtered data too. If anyone has an article or something explaining this in detail, it'd be very helpful. Thanks
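
    For what it's worth, a range filter like this usually stays in the data layer as a thin, parameterized query method: the thresholds are just data, and no business rule is being applied. A sketch (the Product stand-in and the IQueryable source are assumptions):

        using System.Collections.Generic;
        using System.Linq;

        public class Product // minimal stand-in for the real entity
        {
            public int Id { get; set; }
            public decimal Price { get; set; }
        }

        // The predicate says *which* rows to fetch, not what they mean to
        // the business, so this still feels like CRUD rather than logic.
        public class ProductRepository
        {
            private readonly IQueryable<Product> products; // e.g. a LINQ to SQL table

            public ProductRepository(IQueryable<Product> products)
            {
                this.products = products;
            }

            public List<Product> GetProductsInPriceRange(decimal minPrice, decimal maxPrice)
            {
                return products
                    .Where(p => p.Price > minPrice && p.Price < maxPrice)
                    .ToList();
            }
        }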

    Read the article

  • Architecting an ASP.NET MVC app to use repositories and services

    - by zaladane
    Hello, I recently started reading about ASP.NET MVC and, after getting excited about the concept, I started to migrate all my WebForms projects to MVC. But I am having a hard time keeping my controllers skinny, even after following all the good advice out there (or maybe I just don't get it...). The website I deal with has Articles, Videos, Quotes, and so on, and each of these entities has categories, comments and images that can be associated with it. I am using LINQ to SQL for database operations, and for each of these entities I have a repository, and for each repository I create a service to be used in the controller. So I have ArticleRepository, ArticleCategoryRepository and ArticleCommentRepository, and the corresponding services ArticleService, ArticleCategoryService, ... you see the picture. The problem is that I have one controller for article, category and comment, because I thought that having ArticleController handle all of that might make sense; but now I have to pass all of the services needed to the controller constructor. So I would like to know what it is that I am doing wrong. Are my services not designed properly? Should I create a bigger service to encapsulate the smaller services and use it in my controller, as sketched below? Or should I have an ArticleCategory controller and an ArticleComment controller? A page viewed by the user is made up of all of that: the article to be viewed, the comments associated with it, a listing of the categories to which it applies... How can I efficiently break down the controller to keep it "skinny" and solve my headache? Thank you! I hope my question is not too long to be read...
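
    One answer to the "bigger service" question, sketched under assumed names: compose the page-sized cluster behind a single ArticleService so the controller takes one dependency. Whether this beats splitting into separate controllers depends on how independent those pages really are:

        using System.Collections.Generic;

        // Minimal stand-ins for the repositories described above.
        public interface IArticleRepository { Article GetById(int id); }
        public interface IArticleCommentRepository { IList<Comment> GetForArticle(int id); }
        public interface IArticleCategoryRepository { IList<Category> GetForArticle(int id); }
        public class Article { }
        public class Comment { }
        public class Category { }

        // One service per page-sized cluster: ArticleController now takes a
        // single ArticleService instead of three repositories or services.
        public class ArticleService
        {
            private readonly IArticleRepository articles;
            private readonly IArticleCommentRepository comments;
            private readonly IArticleCategoryRepository categories;

            public ArticleService(IArticleRepository articles,
                                  IArticleCommentRepository comments,
                                  IArticleCategoryRepository categories)
            {
                this.articles = articles;
                this.comments = comments;
                this.categories = categories;
            }

            // Everything the article page needs, assembled in one call.
            public ArticlePageModel GetArticlePage(int articleId) => new ArticlePageModel
            {
                Article = articles.GetById(articleId),
                Comments = comments.GetForArticle(articleId),
                Categories = categories.GetForArticle(articleId)
            };
        }

        public class ArticlePageModel
        {
            public Article Article { get; set; }
            public IList<Comment> Comments { get; set; }
            public IList<Category> Categories { get; set; }
        }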

    Read the article

  • How should rules for Aggregate Roots be enforced?

    - by MylesRip
    While searching the web, I came across a list of rules from Eric Evans' book that should be enforced for aggregates:

    1. The root Entity has global identity and is ultimately responsible for checking invariants.
    2. Root Entities have global identity. Entities inside the boundary have local identity, unique only within the Aggregate.
    3. Nothing outside the Aggregate boundary can hold a reference to anything inside, except to the root Entity. The root Entity can hand references to the internal Entities to other objects, but they can only use them transiently (within a single method or block).
    4. Only Aggregate Roots can be obtained directly with database queries. Everything else must be done through traversal.
    5. Objects within the Aggregate can hold references to other Aggregate roots.
    6. A delete operation must remove everything within the Aggregate boundary all at once.
    7. When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied.

    This all seems fine in theory, but I don't see how these rules would be enforced in the real world. Take rule 3, for example. Once the root entity has given an external object a reference to an internal entity, what's to keep that external object from holding on to the reference beyond the single method or block? (If the enforcement of this is platform-specific, I would be interested in knowing how this would be enforced within a C#/.NET/NHibernate environment.)
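
    In C#, nothing in the runtime stops a caller from capturing a reference, so rule 3 ends up being a convention that the root can make easy to follow and awkward to break, rather than a guarantee. A sketch of one such shape (the type names are assumptions):

        using System;
        using System.Collections.Generic;

        // Order is the aggregate root; OrderLine is internal. Outsiders are
        // never handed a stored reference, only scoped access via the root.
        public class Order
        {
            private readonly List<OrderLine> lines = new List<OrderLine>();

            // Hand out the reference transiently: it is intended to be valid
            // only for the duration of the delegate call (rule 3). A caller
            // could still capture it, which is exactly the loophole the
            // question points at.
            public void WithLine(int lineIndex, Action<OrderLine> action)
            {
                action(lines[lineIndex]);
                CheckInvariants(); // root re-validates after any touch (rules 1 and 7)
            }

            private void CheckInvariants() { /* e.g., quantities >= 0 */ }
        }

        public class OrderLine
        {
            public int Quantity { get; set; }
        }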

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi All, I have a question regarding implementation of smart-search features, for example something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. For the moment, let's assume the Subject, Verb, Object approach. For instance, say you have the following:

    SUBJECTs: message, to_address, from_address, subject, date_received
    VERBs: contains, does_not_contain, is_equal_to, greater_than, less_than
    OBJECTs: ???????

    Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store, and later retrieve and present, my criteria for smart searches/mailboxes. As an example, using SVO I could easily store and then reconstruct a query for "date between two dates": simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand; not necessarily in the query creation (which is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think on a more granular level. Perhaps I need to allow the user to select AND or OR for each entry independently instead of making it an all-or-nothing smart search (i.e. instead of MATCH ALL or MATCH ANY, simply let them choose; I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
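
    A per-criterion conjunction can be stored without much ceremony; the sketch below (all names are assumptions) keeps one row per criterion, ordered within a saved search. It still cannot express nested parentheses; a grouping column or a tree of criteria would be the next step if that is ever needed:

        // One row per criterion; Conjunction says how a criterion attaches to
        // the previous one, so a search is not locked into ALL or ANY.
        public enum Verb { Contains, DoesNotContain, IsEqualTo, GreaterThan, LessThan }
        public enum Conjunction { And, Or }

        public class SearchCriterion
        {
            public int SmartSearchId { get; set; }       // groups criteria into one saved search
            public int Position { get; set; }            // evaluation order within the search
            public Conjunction Conjunction { get; set; } // ignored for the first criterion
            public string Subject { get; set; }          // e.g. "date_received"
            public Verb Verb { get; set; }
            public string Value { get; set; }            // the OBJECT, stored as text
        }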

    Read the article

  • In M-V-VM where does my code go?

    - by Nate Bross
    So, this is a pretty basic question, I hope. I have a web service that I've added through Add Service Reference. It has some methods to get a list and to get the detail of a particular table in my database. What I'm trying to do is set up a UI as follows: on app load, load the service proxy, call the GetList() method and display the results in a ListBox control; when the user double-clicks an item in the ListBox, display a modal dialog with a "detail" view. I'm extremely new to using MVVM, so any help would be greatly appreciated. Additional information:

        // Service interface (simplification):
        interface IService
        {
            IEnumerable<MyObject> GetList();
            MyObject GetDetail(int id);
        }

        // Data object (simplification)
        class MyObject
        {
            public int ID { get; set; }
            public string Name { get; set; }
        }

    I'm thinking I should have something like this: a MainWindow containing a MyObjectViewUserControl that displays the list and opens a modal window on double-click. Specific questions: What would my ViewModel class look like? Where does the code to handle the double-click go? Inside the UserControl? Sorry for the long details, but I'm very new to the whole thing and I'm not educated enough to ask the right questions. I checked out the MVVM sample from wpf.codeplex.com and something isn't quite clicking for me yet, because it seems very confusing.
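
    A sketch of one possible ViewModel for the list half, reusing the IService and MyObject types from the question. The dialog service is an assumption; the double-click itself is typically forwarded from a thin code-behind handler or an attached behavior in the UserControl, which then calls ShowDetail() so the actual behavior lives in the ViewModel:

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        class MyObjectListViewModel : INotifyPropertyChanged
        {
            private readonly IService service;
            private MyObject selected;

            public MyObjectListViewModel(IService service)
            {
                this.service = service;
                MyObjects = new ObservableCollection<MyObject>(service.GetList());
            }

            // The ListBox binds ItemsSource to this and SelectedItem to SelectedObject.
            public ObservableCollection<MyObject> MyObjects { get; }

            public MyObject SelectedObject
            {
                get { return selected; }
                set { selected = value; OnPropertyChanged("SelectedObject"); }
            }

            // Called by the view's double-click handler: fetch the detail and
            // let a dialog service (an assumption) show the modal window.
            public void ShowDetail()
            {
                if (selected == null) return;
                MyObject detail = service.GetDetail(selected.ID);
                // dialogService.ShowModal(new MyObjectDetailViewModel(detail));
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name) =>
                PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
        }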

    Read the article

  • Most watched videos this week

    - by Jan Hancic
    I have a YouTube-like web page where users upload and watch videos. I would like to add a "most watched videos this week" list to my page, but this list should not contain just the videos that were uploaded in the previous week; it should cover all videos. I'm currently recording views in a single column, so I have no information on when a video was watched, and I'm now looking for a way to record this data. The first solution is the most obvious (and the correct one, as far as I know): have a separate table in which you insert a new line every time you want to record a new view (storing the ID of the video and the timestamp). I'm worried that I would quickly get huge amounts of data in this table, and queries using it would be extremely slow (we get about 3 million views a month). The second solution isn't as flexible but is easier on the database: add 7 columns to the "videos" table, one for each day of the week (views_monday, views_tuesday, views_wednesday, ...), increment the value in the correct column based on the current day, and reset the current day's column to 0 at midnight. I could then easily get the most watched videos of the week by summing these 7 columns. What do you think: should I bother with the first solution, or will the second one suffice for my case? If you have a better solution, please share! Oh, I'm using MySQL.
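
    A middle ground worth considering (a sketch only; table and column names are assumptions) is a daily rollup: one row per video per day instead of one row per view. At about 3 million views a month that keeps the table small while preserving full per-day history, and the weekly list becomes a cheap aggregate. The MySQL statements are shown embedded as C# constants purely for illustration:

        // Assumes video_views_daily has a composite primary key (video_id, day).
        static class VideoViewsSql
        {
            // Record one view: insert or bump today's counter (MySQL syntax).
            public const string LogView = @"
                INSERT INTO video_views_daily (video_id, day, views)
                VALUES (@videoId, CURRENT_DATE, 1)
                ON DUPLICATE KEY UPDATE views = views + 1;";

            // Most watched this week, over all videos regardless of upload date.
            public const string TopThisWeek = @"
                SELECT video_id, SUM(views) AS weekly_views
                FROM video_views_daily
                WHERE day >= CURRENT_DATE - INTERVAL 6 DAY
                GROUP BY video_id
                ORDER BY weekly_views DESC
                LIMIT 10;";
        }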

    Read the article

  • JavaScript libraries + jQuery plugins conflict? How to debug?

    - by Metafaniel
    This is somewhat of a newbie question... I make an effort every day to learn, so please understand ;) I'm not the very best expert, but I can do a decent job building good-looking and functional websites or web applications. My main tools are PHP5, HTML5, CSS 2 and 3, a database (SQLite, MySQL), and JavaScript with jQuery. I'm not an expert at all in JavaScript. I often find interesting jQuery plugins or tutorials and try to mix them up to build the functionality I need. This time I'm mixing maybe too many plugins and JS files from different sources. In fact, my app does what I want except for certain behaviors... There are no errors, everything looks fine, but the misbehavior persists. So maybe I need to specify a class I don't know about, or one plugin contradicts another and I just can't understand, for example, why a <button type="button">DON'T submit</button> just submits the form... Anyway, my point is: do you know a way to debug these situations? Is there a generic tool, suggestion, workflow or something to help me understand conflicts or omissions between libraries or plugins (JavaScript libraries, my own scripts and jQuery plugins)? I hope there is a way! THANKS A LOT FOR YOUR HELP AND COMPREHENSION! =)

    Read the article

  • Pattern for version-specific implementations of a Java class

    - by Mike Monkiewicz
    So here's my conundrum. I am writing a tool that needs to work on old versions of our application. I have the application's code, but cannot alter any of its classes. To pull information out of our database, I have a DTO of sorts that is populated by Hibernate. It consumes a data object for version 1.0 of our app, cleverly named DataObject. Below is the DTO class:

        public class MyDTO {
            private MyWrapperClass wrapper;
            public MyDTO(DataObject data) {
                wrapper = new MyWrapperClass(data);
            }
        }

    The DTO is instantiated through a Hibernate query as follows:

        select new com.foo.bar.MyDTO(t1.data) from mytable t1

    Now, a little logic is needed on top of the data object, so I made a wrapper class for it. Note that the DTO stores an instance of the wrapper class, not the original data object.

        public class MyWrapperClass {
            private DataObject data;
            public MyWrapperClass(DataObject data) {
                this.data = data;
            }
            public String doSomethingImportant() {
                ... version-specific logic ...
            }
        }

    This works well until I need to work on version 2.0 of our application. The DataObject classes in the two versions are very similar, but not the same. This resulted in different subclasses of MyWrapperClass, which implement their own version-specific doSomethingImportant(). Still doing okay. But how does MyDTO instantiate the appropriate version-specific MyWrapperClass? Hibernate is in turn instantiating MyDTO, so it's not as if I can @Autowire a dependency in Spring. I would love to reuse MyDTO (and my dozens of other DTOs) for both versions of the tool without having to duplicate the class. Don't repeat yourself, and all that. I'm sure there's a very simple pattern I'm missing that would help. Any suggestions?

    Read the article

  • SELECT product from subclass: How many queries do I need?

    - by Stefano
    I am building a database similar to the one described here, where I have products of different types, each type with its own attributes. I report a short version for convenience:

        product_type
        ============
        product_type_id   INT
        product_type_name VARCHAR

        product
        =======
        product_id      INT
        product_name    VARCHAR
        product_type_id INT -> foreign key to product_type.product_type_id
        ... (attributes common to all products)

        magazine
        ========
        magazine_id INT
        title       VARCHAR
        product_id  INT -> foreign key to product.product_id
        ... (magazine-specific attributes)

        web_site
        ========
        web_site_id INT
        name        VARCHAR
        product_id  INT -> foreign key to product.product_id
        ... (web-site specific attributes)

    This way I do not need one huge table with a column for each attribute of every product type (most of which would then be NULL). How do I SELECT a product by product.product_id and see all its attributes? Do I have to make a query first to find out what type of product I am dealing with and then, through some logic, make another query to JOIN the right tables? Or is there a way to join everything together? (If, when I retrieve the information about a product_id, there are a lot of NULLs, that would be fine at this point.) Thank you
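
    A single query that LEFT JOINs every subtype table is one way to do it in one round trip, accepting NULLs in the columns that don't apply; a sketch against the schema above (embedded as a string purely for illustration):

        // Exactly one subtype join will match; the rest come back as NULLs,
        // and product_type_name tells the caller which block of columns to read.
        static class ProductSql
        {
            public const string SelectProductById = @"
                SELECT p.*, pt.product_type_name,
                       m.title AS magazine_title,
                       w.name  AS web_site_name
                FROM product p
                JOIN product_type pt ON pt.product_type_id = p.product_type_id
                LEFT JOIN magazine m ON m.product_id = p.product_id
                LEFT JOIN web_site w ON w.product_id = p.product_id
                WHERE p.product_id = @productId;";
        }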

    Read the article

  • Can Drupal Taxonomy module be used to categorize court records and briefs?

    - by DKinzer
    I'm currently working on a project that involves moving a database of documents for court records and briefs over to a Drupal environment. One of the problems we are faced with is how to index these documents. In our court district, records and briefs all have a docket number which is assigned to a case. The interesting thing is that when multiple cases merge, the docket numbers associated with the cases become synonymous: documents in Case 1 have Docket No. A, documents in Case 2 have Docket No. B, and if Case 1 and Case 2 merge, then Docket No. A = Docket No. B. My first inclination is to create a Docket vocabulary whose terms are the docket numbers, hoping to take advantage of the fact that terms can be synonymous. I understand that there are several functions in the Taxonomy module that I may be able to take advantage of, including taxonomy_get_synonyms and taxonomy_get_related. But I'm having trouble convincing my colleagues that this is the way to go, and frankly I'm not certain it's the right solution either. If anyone has had a similar issue and can offer some guidance on how to move forward, I would greatly appreciate it. Thanks! D. I've asked a related question (which I would also need to answer in order to move forward with this solution): http://stackoverflow.com/questions/2656247/can-drupal-terms-in-different-taxonomies-be-synonymous

    Read the article

  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy: the 'application' team developed the application-specific logic (often requiring deep insight into specific industry problems), and the 'generic' team developed the parts that were common to all applications (user-interface-related code, database access, low-level Windows code, ...). Over the years the boundaries between the teams have become fuzzy. The 'application' teams often write application-specific functionality with a 'generic' part; instead of asking the 'generic' team to write that part for them, they write it themselves to speed up development, then donate it to the 'generic' team. Meanwhile, the 'generic' team's focus has become more maintenance-oriented: all of the 'very generic' code has already been written, so no new development is needed, but they continuously have to support all the functionality donated by the application teams. All this seems to indicate that this split into teams is no longer a good idea. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good-quality software), or into a 'software deployment' team (defining how software should be deployed and installed). How do you split up the work between teams when you have different applications? Everybody can write generic code and donates it to a central 'generic' team? Everybody can write generic code, but nobody 'manages' it (everybody is the owner)? Generic code is written only by a 'generic' team, and the applications have to wait until the 'generic' team delivers the generic part (via a library or a DLL)? There is no overlap in code between the different applications? Some other way? Notice that the advantage of the mixed approach (allowing everybody to write anywhere in the code) is that code is written in a more flexible way, and it's easier to debug, since you can easily step into the 'generic' code in the debugger. But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if no team clearly manages it anymore. What is your vision?

    Read the article

  • Separation of business logic

    - by bruno
    While optimizing the architecture of the applications on our website, I ran into a problem I don't know the best solution for. At the moment we have a small DLL based on this structure: Database <-> DAL <-> BLL. The DAL uses business objects to pass data to the BLL, which passes it on to the applications that use this DLL. Only the BLL is public, so any application that includes the DLL can see only the BLL. In the beginning, this was a good solution for our company. But as we add more and more applications to that DLL, the BLL keeps growing, and now we don't want applications to be able to see BLL logic that belongs to other applications. I'm not sure what the best solution is. My first thought was to move and separate the BLL into other DLLs which I can include per application, but then the DAL must be public so those other DLLs can get at the data; that still seems like a reasonable solution. My other idea is simply to separate the BLL into different namespaces and include only the namespaces you need in each application. But with that solution, you can still get direct access to the other BLLs if you want. So I'm asking for your opinions.

    Read the article

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: does the number of columns in an SQL table affect the speed with which it can be queried? I am not a newbie when it comes to PHP or MySQL. I used to develop things with the sole goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable. Anyway, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized. Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables: one for the user's general information and one for the user's game statistics? I know I could measure the query time myself, but I haven't actually created the tables yet, and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)

    Read the article

  • Updating a composite primary key

    - by VBCSharp
    I am struggling with the philosophical discussions about whether or not to use composite primary keys in my SQL Server database. I have always used surrogate keys in the past, and I am challenging myself by leaving my comfort zone to try something different. I have read many discussions but can't come to any kind of conclusion yet. My struggle is with updating a record that has a composite PK. For example, the record in question looks like this: ContactID, RoleID, EffectiveDate, TerminationDT. The PK in this case is ContactID, RoleID and EffectiveDate; TerminationDT can be null. If, in my UI, the user changes the RoleID, I then need to update the record. With a surrogate key I can do UPDATE Table SET RoleID = 1 WHERE SurrogateID = Z. With the composite key, however, once one of the fields in the key changes, I have no way to reference the old record to update it unless I maintain a reference to the old values somewhere in the UI. I do not bind data sources in my UI; I open a connection, get the data, store it in a bucket, then close the connection. What are everyone's opinions? Thanks.
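
    With a composite key, the usual pattern is that the original key values travel with the record (in the "bucket", since nothing is data-bound) and go into the WHERE clause while the SET clause writes the new value. A sketch with SqlClient (column names come from the question; the table name and method shape are assumptions):

        using System;
        using System.Data.SqlClient;

        static class ContactRoleData
        {
            // The UI bucket keeps the original key triplet alongside the edited
            // values; the UPDATE matches on the old key and writes the new RoleID.
            public static void UpdateRole(SqlConnection conn,
                int contactId, int oldRoleId, DateTime effectiveDate, int newRoleId)
            {
                const string sql = @"
                    UPDATE ContactRole
                    SET RoleID = @newRoleId
                    WHERE ContactID = @contactId
                      AND RoleID = @oldRoleId
                      AND EffectiveDate = @effectiveDate;";

                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@newRoleId", newRoleId);
                    cmd.Parameters.AddWithValue("@contactId", contactId);
                    cmd.Parameters.AddWithValue("@oldRoleId", oldRoleId);
                    cmd.Parameters.AddWithValue("@effectiveDate", effectiveDate);
                    cmd.ExecuteNonQuery();
                }
            }
        }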

    Read the article

  • What are provenly scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores (ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.) must be fast to query if they are to be used in customizing websites, consumer communications, and so on. If you have a large number of consumers and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be), you are in trouble. If you really have a physical relational database, where a query makes a physical disk start rotating someplace to give you an individual profile or a set of profiles, the profile consumer (a website customizing a page, a recommendation engine making a recommendation, ...) has died of boredom before getting any observable results. There is the possibility of keeping the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?

    Read the article

  • What considerations should be made when creating a reporting framework for a business?

    - by Andrew Dunaway
    It's a pretty classic problem. The company I work for has numerous business reports that are used to track sales, data feeds, and various other metrics. Of course this also means that there is a conglomerate of disparate frameworks, ASP.NET pages, and places where these reports can be found. There have been some attempts at consolidating these into a single entity, but nothing has stuck yet. Since this is a common problem, surely solved innumerable times, I wanted to see what others have done. For the most part these reports can be boiled down to the following pieces: a SQL query against our database to gather the data; a presentation of the data, generally in a data grid; filtering that can vary based on data types and business needs; some way to organize the reports (a single drop-down gets long and unmanageable quickly); and a method to download the data for further manipulation, perhaps as a CSV file. My first thought was to create a framework in Silverlight with LINQ to SQL, mainly just because I like it and want to play with it, which is probably not the best reason. I also thought the controls provide a lot of functionality, like sorting and dragging columns, and I was curious about the printing support in Silverlight 4. Which brings me around to my original question: what is the best way to do this? Is there a package out there I can just buy that will do it for me? The Silverlight approach seems pretty easy once it's set up and templated, but maybe it's a bad idea and I can learn from someone else?

    Read the article

  • What is the most challenging part of a project like MyLifeBits

    - by kennyzx
    I have made a small utility to keep track of my daily expenses and my coffee consumption (I always want to quit coffee but never succeed), and of course this kind of utility is quite simple, involving just an XML file and a few hundred lines of code to manipulate it. Then I found the MyLifeBits project. It is very interesting, but I think it must require a lot of effort to achieve its goal: to record everything about a person that can be digitally recorded. So I wonder whether it is possible to write an advanced version of my own utility, a tiny version of MyLifeBits, that can capture: every web page I've read, no matter which browser I am using, downloading its contents for offline reading; auto-archived emails, documents and notes that I edited; auto-archived code that I ran or wrote. These are basically all the things I do on my PC, and the captured records should be easily searchable. My question is: what do you think is the most challenging part? Interoperating with Visual Studio, Office, Lotus Notes and web browsers is one thing; the database is another, given that "everything" is recorded; advanced programming patterns, since it is not a "toy project"? Or something else that I have overlooked but that could be very difficult to handle?

    Read the article

  • OOP beginner: ClassB extends ClassA, ClassA is already an object, a method in ClassB is needed, etc.

    - by Yvo
    Hey guys, I'm teaching myself to move from function-based PHP coding to OOP, and this is the situation: ClassA holds many basic tool methods (functions), and its __construct makes a DB connection. ClassB holds specific methods based on a certain activity (extracting widgets). ClassB extends ClassA because it uses some of the basic tools in there, e.g. a database call. In a PHP file I create a ClassA object, $a_class = new ClassA (and thus a new DB connection). Now I need a method from ClassB, so I do $b_class = new ClassB; and call a method, which in turn uses a method from its parent, ClassA. In this example I'm having ClassA 'used' twice: once as an object, and once via a parent:: call, so ClassA creates another DB connection (or does it?). So what is the best setup for this basic parent/child (extends) situation? I only want to make one connection, of course. I don't like forwarding the object to ClassB like this: $b_class = new ClassB($a_object); or is that the best way? Thanks for thinking along with me, and for helping :D

    Read the article

  • Cannot add SourceSafe Database as Visual Studio 2010 source control.

    - by CletusLoomis
    My issue is that I cannot add a SourceSafe database for source control within Visual Studio 2010. Our team was initially using VSS for source control in Visual Studio 2010. During an evaluation of TFS, I switched my source control to TFS. It will be a few weeks before a decision is made on TFS, so I needed to switch my source control back to VSS. However, I'm now unable to add a SourceSafe database in Visual Studio. Steps to reproduce in Visual Studio 2010:

    1) Access the 'Open SourceSafe Database' form via Tools-Options-Source Control-Plug-in Settings-Advanced, or via File-Source Control.
    2) The list of available databases is blank, so I choose 'Browse'.
    3) I browse to the srcsafe.ini file for my VSS database and select it.
    4) I'm prompted to confirm the database name and click OK.
    5) The database does not appear in the 'Open SourceSafe Database' form. The list of available databases is still blank.

    Note that I can add the database fine outside of Visual Studio, using VSS directly. However, the databases I add via VSS do not appear in the Visual Studio forms. I suspect this is related to "down-grading" from TFS to VSS, which may not have been heavily tested at MS. Any assistance is appreciated.

    Read the article
