Search Results

Search found 10101 results on 405 pages for 'temporary tables'.


  • ASP.NET MVC2 and MemberShipProvider: How well do they go together?

    - by Sparhawk
    I have an existing ASP.NET application with lots of users and a large database. Now I want to have it in MVC 2. I do not want to migrate; I am doing it more or less from scratch. The database I want to keep and not touch too much. I already have my database tables and I also want to keep my LINQ to SQL layer. I didn't use a MembershipProvider in my current implementation (in ASP.NET 1.0 that wasn't strongly supported). So, either I write my own MembershipProvider to meet the needs of my database and app, or I don't use the MembershipProvider at all. I'd like to understand the consequences if I don't use the membership provider. What is linked to that? I understand that in ASP.NET the Login controls are linked to the provider. The AccountModel which is automatically generated with MVC2 could easily be changed to support my existing logic. What happens when a user is identified by an AuthCookie? Does MVC use the MembershipProvider then? Am I overlooking something? I have the same questions regarding RoleProvider. Input is greatly appreciated.
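    For what it's worth, the forms authentication cookie itself does not depend on a MembershipProvider; only the Login controls and the default AccountController wiring do. A minimal sketch of a login action without any provider is below, where MyUserRepository is a hypothetical wrapper over the existing LINQ to SQL layer, not anything from the question.

        // Hedged sketch, assuming forms authentication in web.config and a custom user store.
        // MyUserRepository.ValidateCredentials is a made-up helper over the existing tables.
        public class AccountController : Controller
        {
            private readonly MyUserRepository users = new MyUserRepository();

            [HttpPost]
            public ActionResult LogOn(string userName, string password, bool rememberMe)
            {
                if (users.ValidateCredentials(userName, password))
                {
                    // Issues the standard forms auth cookie; no MembershipProvider involved.
                    FormsAuthentication.SetAuthCookie(userName, rememberMe);
                    return RedirectToAction("Index", "Home");
                }
                ModelState.AddModelError("", "Incorrect user name or password.");
                return View();
            }
        }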

    Read the article

  • Database Change Management - Setup for Initial Create Scripts, Subsequent Migration Scripts

    - by Martin Aatmaa
    I've got a database change management workflow in place. It's based on SQL scripts (so, it's not a managed code-based solution). The basic setup looks like this:

        Initial/
            Generate Initial Schema.sql
            Generate Initial Required Data.sql
            Generate Initial Test Data.sql
        Migration/
            0001_MigrationScriptForChangeOne.sql
            0002_MigrationScriptForChangeTwo.sql
            ...

    The process to spin up a database is then to run all the Initial scripts, and then run the sequential Migration scripts. A tool takes care of the versioning requirements, etc. My question is, in this kind of setup, is it useful to also maintain this:

        Current/
            Stored Procedures/
                dbo.MyStoredProcedureCreateScript.sql
                ...
            Tables/
                dbo.MyTableCreateScript.sql
                ...
            ...

    By "this" I mean a directory of scripts (separated by object type) that represents the create scripts for spinning up the current/latest version of the database. For some reason, I really like the idea, but I can't concretely justify its need. Am I missing something? The advantages would be:

    - For dev and source control, we would have the same object-per-file setup that we're used to
    - For deployment, we can spin up a new DB instance to the latest version either by running Initial + Migration, or by running the scripts from Current/
    - For dev, we do not need a DB instance running in order to do development. We can do "offline" development on the Current/ folder.

    The disadvantages would be:

    - For each change, we need to update the scripts in the Current/ folder, as well as create a Migration script (in the Migration/ folder)

    Thanks in advance for any input!
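    As a rough illustration of how little tooling the Initial + Migration convention needs, here is a minimal C# runner sketch; the connection string, the reliance on file-name ordering, and the single-batch scripts (no GO separators) are all assumptions rather than part of the workflow described above.

        // Minimal sketch: apply Initial scripts, then Migration scripts in file-name order.
        // Assumes each .sql file is a single batch (no GO separators) and that the Initial
        // scripts are named so that alphabetical order is the correct run order.
        using System;
        using System.Data.SqlClient;
        using System.IO;
        using System.Linq;

        class ScriptRunner
        {
            static void Main()
            {
                const string connectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True"; // placeholder
                var scripts = Directory.GetFiles("Initial", "*.sql").OrderBy(Path.GetFileName)
                    .Concat(Directory.GetFiles("Migration", "*.sql").OrderBy(Path.GetFileName));

                using (var con = new SqlConnection(connectionString))
                {
                    con.Open();
                    foreach (var script in scripts)
                    {
                        using (var cmd = new SqlCommand(File.ReadAllText(script), con))
                            cmd.ExecuteNonQuery();
                        Console.WriteLine("Applied " + Path.GetFileName(script));
                    }
                }
            }
        }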

    Read the article

  • How to override ATTR_DEFAULT_IDENTIFIER_OPTIONS in Models in Doctrine?

    - by user309083
    Here someone explained that setting a 'primary' attribute for any row in your Model will override Doctrine_Manager's ATTR_DEFAULT_IDENTIFIER_OPTIONS attribute: http://stackoverflow.com/questions/2040675/how-do-you-override-a-constant-in-doctrines-models This works, however if you have a many to many relation whereby the intermediate table is created, even if you have set both columns in the intermediate to primary, an error still results when Doctrine tries to place an index on the nonexistent 'id' column upon table creation. Here's my code:

        // Bootstrap
        // set the default primary key to be named 'id', integer, 4 bytes
        Doctrine_Manager::getInstance()->setAttribute(
            Doctrine_Core::ATTR_DEFAULT_IDENTIFIER_OPTIONS,
            array('name' => 'id', 'type' => 'integer', 'length' => 4));

        // User Model
        class User extends Doctrine_Record {
            public function setTableDefinition() {
                $this->setTableName('users');
            }
            public function setUp() {
                $this->hasMany('Role as roles', array(
                    'local' => 'id',
                    'foreign' => 'user_id',
                    'refClass' => 'UserRole',
                    'onDelete' => 'CASCADE'
                ));
            }
        }

        // Role Model
        class Role extends Doctrine_Record {
            public function setTableDefinition() {
                $this->setTableName('roles');
            }
            public function setUp() {
                $this->hasMany('User as users', array(
                    'local' => 'id',
                    'foreign' => 'role_id',
                    'refClass' => 'UserRole'
                ));
            }
        }

        // UserRole Model
        class UserRole extends Doctrine_Record {
            public function setTableDefinition() {
                $this->setTableName('roles_users');
                $this->hasColumn('user_id', 'integer', 4, array('primary' => true));
                $this->hasColumn('role_id', 'integer', 4, array('primary' => true));
            }
        }

    Resulting error:

        SQLSTATE[42000]: Syntax error or access violation: 1072 Key column 'id' doesn't exist in table.
        Failing Query: CREATE TABLE roles_users (user_id INT UNSIGNED NOT NULL, role_id INT UNSIGNED NOT NULL, INDEX id_idx (id), PRIMARY KEY(user_id, role_id)) ENGINE = INNODB

    I'm creating my tables using Doctrine::createTablesFromModels();

    Read the article

  • What is the scope of CONTEXT_INFO in SQL Server?

    - by JasonS
    I am using CONTEXT_INFO to pass a username to a delete trigger for the purposes of an audit/history table. I'm trying to understand the scope of CONTEXT_INFO and if I am creating a potential race condition. Each of my database tables has a stored proc to handle deletes. The delete stored proc takes userId as a parameter, and sets CONTEXT_INFO to the userId. My delete trigger then grabs the CONTEXT_INFO and uses that to update an audit table that indicates who deleted the row(s). The question is, if two delete sprocs from different users are executing at the same time, can CONTEXT_INFO set in one of the sprocs be consumed by the trigger fired by the other sproc? I've seen this article http://msdn.microsoft.com/en-us/library/ms189252.aspx but I'm not clear on the scope of sessions and batches in SQL Server, which is key to the article being helpful! I'd post code, but short on time at the moment. I'll edit later if this isn't clear enough. Thanks in advance for any help.
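    CONTEXT_INFO is kept per session, so two connections each see only the value they set themselves. One throwaway way to convince yourself is a check like the hedged C# sketch below, where a second, separate connection reads back NULL; the connection string, the parameterised SET CONTEXT_INFO, and the byte encoding are all assumptions for illustration.

        // Hedged sketch: set CONTEXT_INFO on one connection, read it back on two connections.
        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Text;

        class ContextInfoScopeCheck
        {
            static void Main()
            {
                const string cs = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True"; // placeholder

                using (var first = new SqlConnection(cs))
                using (var second = new SqlConnection(cs))
                {
                    first.Open();
                    second.Open();   // opened while 'first' is in use, so it is a separate session

                    using (var set = new SqlCommand("SET CONTEXT_INFO @ctx", first))
                    {
                        set.Parameters.Add("@ctx", SqlDbType.VarBinary, 128).Value =
                            Encoding.Unicode.GetBytes("user 42");
                        set.ExecuteNonQuery();
                    }

                    object sameSession = new SqlCommand("SELECT CONTEXT_INFO()", first).ExecuteScalar();
                    object otherSession = new SqlCommand("SELECT CONTEXT_INFO()", second).ExecuteScalar();

                    Console.WriteLine(sameSession is DBNull ? "first: NULL" : "first: set");    // expected: set
                    Console.WriteLine(otherSession is DBNull ? "second: NULL" : "second: set"); // expected: NULL
                }
            }
        }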

    Read the article

  • Force float left with no line break no matter what

    - by Tesserex
    I'm guessing this isn't possible, but here goes. I have two tables, and I'm trying to get them to sit side-by-side so that they look like one table. The reason for this, instead of using one larger table, is that the data in the second table needs to be handled on a column basis, not row basis, for performance reasons like caching and AJAX-fetching data. So rather than have to reload the whole table for a single column, I decided to break the column out into a separate table, but have it visually seem like a single table. I can't find a way to forcibly put the second table next to the first. I can float them, but when the first table is too wide, the second one breaks to the next line. Here's the kicker: the width of the first table is dynamic. So I can't just set a huge width to their container. Well, I could set a huge width, like 1000%, but then I have a huge ugly horizontal scroll bar. So is there any way to tell the second table "Stay on that same line, no matter what! And line up right next to the previous element please!"

    Read the article

  • WinForms DataGridView - update database

    - by Geo Ego
    I know this is a basic function of the DataGridView, but for some reason, I just can't get it to work. I just want the DataGridView on my Windows form to submit any changes made to it to the database when the user clicks the "Save" button. I populate the DataGridView according to a function triggered by a user selection in a DropDownList as follows:

        using (SqlConnection con = new SqlConnection(conString))
        {
            con.Open();
            SqlDataAdapter ruleTableDA = new SqlDataAdapter("SELECT rule.fldFluteType AS [Flute], rule.fldKnife AS [Knife], rule.fldScore AS [Score], rule.fldLowKnife AS [Low Knife], rule.fldMatrixScore AS [Matrix Score], rule.fldMatrix AS [Matrix] FROM dbo.tblRuleTypes rule WHERE rule.fldMachine_ID = '1003'", con);
            DataSet ruleTableDS = new DataSet();
            ruleTableDA.Fill(ruleTableDS);
            RuleTable.DataSource = ruleTableDS.Tables[0];
        }

    In my save function, I basically have the following (I've trimmed out some of the code around it to get to the point):

        using (SqlDataAdapter ruleTableDA = new SqlDataAdapter("SELECT rule.fldFluteType AS [Flute], rule.fldKnife AS [Knife], rule.fldScore AS [Score], rule.fldLowKnife AS [Low Knife], rule.fldMatrixScore AS [Matrix Score], rule.fldMatrix AS [Matrix] FROM dbo.tblRuleTypes rule WHERE rule.fldMachine_ID = '1003'", con))
        {
            SqlCommandBuilder commandBuilder = new SqlCommandBuilder(ruleTableDA);
            DataTable dt = RuleTable.DataSource as DataTable;
            ruleTableDA.Fill(dt);
            ruleTableDA.Update(dt);
        }

    Okay, so I edited the code to do the following: build the commands, create a DataTable based on the DataGridView (RuleTable), fill the DataAdapter with the DataTable, and update the database. Now ruleTableDA.Update(dt) is throwing the exception "Concurrency violation: the UpdateCommand affected 0 of the expected 1 records."
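    One likely culprit is the second Fill into a table that already holds the user's pending edits. A common arrangement, sketched below with no guarantee it matches the rest of the form, is to keep one adapter and one DataTable alive for the form, fill once when the DropDownList changes, and call Update on the grid's own table without re-filling it; the key column fldRuleType_ID and the field names are assumptions.

        // Hedged sketch: one adapter + one DataTable per form, filled once, updated on Save.
        private SqlDataAdapter ruleTableDA;
        private DataTable ruleTableData;

        private void LoadRules(string conString)
        {
            ruleTableDA = new SqlDataAdapter(
                // Same SELECT as the load query, plus the table's primary key column
                // (assumed here to be fldRuleType_ID) so updates can target rows reliably.
                "SELECT rule.fldRuleType_ID, rule.fldFluteType AS [Flute], rule.fldKnife AS [Knife], " +
                "rule.fldScore AS [Score], rule.fldLowKnife AS [Low Knife], " +
                "rule.fldMatrixScore AS [Matrix Score], rule.fldMatrix AS [Matrix] " +
                "FROM dbo.tblRuleTypes rule WHERE rule.fldMachine_ID = '1003'",
                new SqlConnection(conString));
            ruleTableDA.MissingSchemaAction = MissingSchemaAction.AddWithKey; // bring back key info
            new SqlCommandBuilder(ruleTableDA);   // derives UPDATE/INSERT/DELETE from the SELECT

            ruleTableData = new DataTable();
            ruleTableDA.Fill(ruleTableData);      // the adapter opens/closes the connection itself
            RuleTable.DataSource = ruleTableData;
        }

        private void SaveButton_Click(object sender, EventArgs e)
        {
            // No second Fill here: re-filling a table that already holds pending edits is a
            // common source of the "affected 0 of the expected 1 records" concurrency error.
            ruleTableDA.Update(ruleTableData);
        }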

    Read the article

  • Can Sql Server BULK INSERT read from a named pipe/fifo?

    - by Peter
    Is it possible for BULK INSERT/bcp to read from a named pipe, fifo-style? That is, rather than reading from a real text file, can BULK INSERT/bcp be made to read from a named pipe which is on the write end of another process? For example:

    - create named pipe
    - unzip file to named pipe
    - read from named pipe with bcp or BULK INSERT

    or:

    - create 4 named pipes
    - split 1 file into 4 streams, writing each stream to a separate named pipe
    - read from 4 named pipes into 4 tables w/ bcp or BULK INSERT

    The closest I've found was this fellow (site now unreachable), who managed to write to a named pipe w/ bcp, with his own utility and usage like so:

        start /MIN ZipPipe authors_pipe authors.txt.gz 9
        bcp pubs..authors out \\.\pipe\authors_pipe -T -n

    But he couldn't get the reverse to work. So before I head off on a fool's errand, I'm wondering whether it's fundamentally possible to read from a named pipe w/ BULK INSERT or bcp. And if it is possible, how would one set it up? Would NamedPipeServerStream or something else in the .NET System.IO.Pipes namespace be adequate? E.g., an example using PowerShell:

        [reflection.Assembly]::LoadWithPartialName("system.core")
        $pipe = New-Object system.IO.Pipes.NamedPipeServerStream("Bob")

    And then....what?
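    For the producer side of that experiment, a minimal C# sketch of the pipe server is below. It only shows how unzipped rows could be pushed into \\.\pipe\authors_pipe; whether bcp or BULK INSERT will actually accept that path as its data file is exactly the open question here, so treat this as a test harness rather than a working answer.

        // Hedged sketch: serve gunzipped data over a named pipe for a reader such as bcp.
        // Pipe name and file name are taken from the question; everything else is assumption.
        using System.IO;
        using System.IO.Compression;
        using System.IO.Pipes;

        class PipeProducer
        {
            static void Main()
            {
                using (var pipe = new NamedPipeServerStream("authors_pipe", PipeDirection.Out))
                {
                    pipe.WaitForConnection();   // blocks until something opens \\.\pipe\authors_pipe for reading

                    using (var gz = new GZipStream(File.OpenRead("authors.txt.gz"), CompressionMode.Decompress))
                    {
                        var buffer = new byte[64 * 1024];
                        int read;
                        while ((read = gz.Read(buffer, 0, buffer.Length)) > 0)
                            pipe.Write(buffer, 0, read);
                    }
                }
            }
        }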

    Read the article

  • LINQ-to-SQL: Could not find key member 'x' of key 'x' on type 'y'

    - by Austin Hyde
    I am trying to connect my application to a SQLite database with LINQ-to-SQL, and so far everything has worked fine. The only hitch was that the SQLite provider I am using does not support code generation (unless I was doing something wrong), so I manually coded the 4 tables in the DB. The solution builds properly, but will not run, giving me the error message

        Could not find key member 'ItemType_Id' of key 'ItemType_Id' on type 'Item'. The key may be wrong or the field or property on 'Item' has changed names.

    I have checked and double checked spellings and field names on the database and in the attribute mappings, but could not find any problems. The SQL for the table looks like this:

        CREATE TABLE [Items] (
            [Id] integer PRIMARY KEY AUTOINCREMENT NOT NULL,
            [Name] text NOT NULL,
            [ItemType_Id] integer NOT NULL
        );

    And my mapping code:

        [Table(Name="Items")]
        class Item
        {
            // [snip]

            [Column(Name = "Id", IsPrimaryKey=true, IsDbGenerated=true)]
            public int Id { get; set; }

            // [snip]

            [Column(Name="ItemType_Id")]
            public int ItemTypeId { get; set; }

            [Association(Storage = "_itemType", ThisKey = "ItemType_Id")]
            public ItemType ItemType
            {
                get { return _itemType.Entity; }
                set { _itemType.Entity = value; }
            }
            private EntityRef<ItemType> _itemType;

            // [snip]
        }

    This is really my first excursion into LINQ-to-SQL, and am learning as I go, but I cannot seem to get past this seemingly simple problem. Why can't LINQ see my association?
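    The error text reads like LINQ-to-SQL is resolving ThisKey against property names rather than database column names, so one thing worth trying is pointing the association at the CLR property (ItemTypeId) instead of the column (ItemType_Id). A sketch of that mapping follows, with OtherKey = "Id" assuming the ItemType class exposes an Id property.

        // Sketch: ThisKey/OtherKey name CLR members of the entity classes, not DB columns.
        [Column(Name = "ItemType_Id")]
        public int ItemTypeId { get; set; }

        [Association(Storage = "_itemType", ThisKey = "ItemTypeId", OtherKey = "Id", IsForeignKey = true)]
        public ItemType ItemType
        {
            get { return _itemType.Entity; }
            set { _itemType.Entity = value; }
        }
        private EntityRef<ItemType> _itemType;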

    Read the article

  • Concatenate row values T-SQL

    - by Robert
    I am trying to pull together some data for a report and need to concatenate the row values of one of the tables. Here is the basic table structure:

        Reviews
            ReviewID
            ReviewDate

        Reviewers
            ReviewerID
            ReviewID
            UserID

        Users
            UserID
            FName
            LName

    This is a M:M relationship. Each Review can have many Reviewers; each User can be associated with many Reviews. Basically, all I want to see is Reviews.ReviewID, Reviews.ReviewDate, and a concatenated string of the FName's of all the associated Users for that Review (comma delimited). Instead of:

        ReviewID---ReviewDate---User
        1----------12/1/2009----Bob
        1----------12/1/2009----Joe
        1----------12/1/2009----Frank
        2----------12/9/2009----Sue
        2----------12/9/2009----Alice

    Display this:

        ReviewID---ReviewDate----Users
        1----------12/1/2009-----Bob, Joe, Frank
        2----------12/9/2009-----Sue, Alice

    I have found this article describing some ways to do this, but most of these seem to only deal with one table, not multiple; unfortunately, my SQL-fu is not strong enough to adapt these to my circumstances. I am particularly interested in the example on that site which utilizes FOR XML PATH() as that looks the cleanest and most straightforward.

        SELECT p1.CategoryId,
            ( SELECT ProductName + ', '
              FROM Northwind.dbo.Products p2
              WHERE p2.CategoryId = p1.CategoryId
              ORDER BY ProductName
              FOR XML PATH('') ) AS Products
        FROM Northwind.dbo.Products p1
        GROUP BY CategoryId;

    Can anyone give me a hand with this? Any help would be greatly appreciated!

    Read the article

  • Database with "Open Schema" - Good or Bad Idea?

    - by Claudiu
    The co-founder of Reddit gave a presentation on issues they had while scaling to millions of users. A summary is available here. What surprised me is point 3: Instead, they keep a Thing Table and a Data Table. Everything in Reddit is a Thing: users, links, comments, subreddits, awards, etc. Things keep common attributes like up/down votes, a type, and creation date. The Data table has three columns: thing id, key, value. There’s a row for every attribute. There’s a row for title, url, author, spam votes, etc. When they add new features they didn’t have to worry about the database anymore. They didn’t have to add new tables for new things or worry about upgrades. This seems like a terrible idea to me, but it seems to have worked out for Reddit. Is it a good idea in general, though? Or is it a peculiarity of Reddit that happened to work out for them?

    Read the article

  • Legit? Two foreign keys referencing the same primary key.

    - by Ryan
    Hi All, I'm a web developer and have recently started a project with a company. Currently, I'm working with their DBA on getting the schema laid out for the site, and we've come to a disagreement regarding the design of a couple of tables, and I'd like some opinions on the matter. Basically, we are working on a site that will implement a "friends" network. All users of the site will be contained in a table tblUsers with (PersonID int identity PK, etc). What I am wanting to do is to create a second table, tblNetwork, that will hold all of the relationships between users, with (NetworkID int identity PK, Owners_PersonID int FK, Friends_PersonID int FK, etc). Or conversely, remove the NetworkID, and have both the Owners_PersonID and Friends_PersonID shared as the primary key. This is where the DBA has his problem, saying that "he would only implement this kind of architecture in a data warehousing schema, and not for a website, and this is just another example of web developers trying to take the easy way out." Now obviously, his remark was a bit inflammatory, and that has helped motivate me to find a suitable answer, but more so, I'd just like to know how to do it right. I've been developing databases and programming for over 10 years, have worked with some top-notch minds, and have never heard this kind of argument. What the DBA wants to do, instead of storing both the Owners_PersonId and Friends_PersonId in the same table, is to create a third table tblFriends to store the Friends_PersonId, and have the tblNetwork have (NetworkID int identity PK, Owner_PersonID int FK, FriendsID int FK (from tblFriends)). All that tblFriends would house would be (FriendsID int identity PK, Friends_PersonID (related back to Persons)). To me, creating the third table is just excessive in nature, and does nothing but create an alias for the Friends_PersonID, and cause me to have to add (what I view as unneeded) joins to all my queries, not to mention the extra cycles that will be necessary to perform the join on every query. Thanks for reading, appreciate comments. Ryan

    Read the article

  • ASP MVC C#: LINQ Foreign Key Constraint conflicts

    - by wh0emPah
    I'm having a problem with LINQ. I have 2 tables (parent-child relation):

        Table1: Events (EventID, Description)
        Table2: Groups (GroupID, EventID (FK), Description)

    Now I want to create an Event and a child:

        Event e = new Event();
        e.Description = "test";
        Datacontext.Events.InsertOnSubmit(e);

        Group g = new Group();
        g.Description = "test2";
        g.EventID = e.EventID;
        Datacontext.Groups.InsertOnSubmit(g);

        Datacontext.SubmitChanges();

    When I debug, I can see that after inserting the event, the EventID has gotten a new value (auto increment). But when Datacontext.SubmitChanges() gets called, I get the following exception: "The INSERT statement conflicted with the FOREIGN KEY constraint ..." I know this can be solved by creating a relation in the LINQ diagram between Events and Groups, and then setting the entity itself. But I don't want to load the events every time I ask for a list of groups. All I need is some way that when inserting the group fails, the event insert won't be committed in the database. Sorry if this is a bit unclear; my English isn't really good. Thanks in advance!
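    If the association stays out of the diagram, one hedged option is to wrap the two inserts in a System.Transactions.TransactionScope and submit in two steps, so the generated EventID exists in the database before the Group row is written and nothing commits unless both succeed. A sketch, assuming the property names above and a reference to System.Transactions, follows.

        // Hedged sketch: two SubmitChanges calls inside one ambient transaction.
        using (var scope = new System.Transactions.TransactionScope())
        {
            Event e = new Event { Description = "test" };
            Datacontext.Events.InsertOnSubmit(e);
            Datacontext.SubmitChanges();          // EventID is generated and populated here

            Group g = new Group { Description = "test2", EventID = e.EventID };
            Datacontext.Groups.InsertOnSubmit(g);
            Datacontext.SubmitChanges();          // the FK now points at an existing row

            scope.Complete();                     // neither insert commits unless we reach this line
        }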

    Read the article

  • Is it possible to make a non-nullable column nullable when used in a view? (sql server)

    - by Matt
    Hi, to start off I have two tables, PersonNames and PersonNameVariations. When a name is searched, it finds the closest name to one of the ones available in PersonNames and records it in the PersonNameVariations table if it's not already in there. I am using a stored proc to search the PersonNames for a passed-in PersonNameVariation and return the information on both the PersonName found and the PersonNameVariation that was compared to it. Since I am using the Entity Framework, I needed to return a complex type in the Function Import, but for some reason it says my current framework doesn't support it. My last option was to use an Entity to return in my stored proc instead. The result that I needed back is the information on both the PersonName that was found and the PersonNameVariation that was recorded. Since I cannot return both entities, I created a view PersonSearchVariationInfo and added it into my Entity Framework in order to use it as the entity to return. The problem is that the search will not always return a PersonName match. It needs to be able to return only the PersonNameVariation data in some cases, meaning that all the fields in the PersonSearchVariationInfo pertaining to PersonName need to be nullable. How can I take my view and make some of the fields nullable? When I do it directly in the Entity Framework I get a mapping error:

        Error 4 Error 3031: Problem in mapping fragments starting at line 1202: Non-nullable column myproject_vw_PersonSearchVariationInfo.DateAdded in table myproject_vw_PersonSearchVariationInfo is mapped to a nullable entity property. C:\Users\Administrator\Documents\Visual Studio 2010\Projects\MyProject\MyProject.Domain\EntityFramework\MyProjectDBEntities.edmx 1203 15 MyProject.Domain

    Anyone have any ideas? Thanks, Matt

    Read the article

  • Linq, Left Join and Dates...

    - by BitFiddler
    So my situation is that I have a LINQ-to-SQL model that does not allow dates to be null in one of my tables. This is intended, because the database does not allow nulls in that field. My problem is that when I try to write a LINQ query with this model, I cannot do a left join with that table anymore because the date is not a nullable field and so I can't compare it to Nothing. Example: there is a Movie table, {ID, MovieTitle}, and a Showings table, {ID, MovieID, ShowingTime, Location}. Now I am trying to write a statement that will return all those movies that have no showings. In T-SQL this would look like:

        Select m.*
        From Movies m
        Left Join Showings s On m.ID = s.MovieID
        Where s.ShowingTime is Null

    Now in this situation I could test for Null on the Location field, but this is not what I have in reality (just a simplified example). All I have are non-null dates. I am trying to write in LINQ:

        From m In dbContext.Movies _
        Group Join s In Showings on m.ID Equals s.MovieID into MovieShowings = Group _
        From ms In MovieShowings.DefaultIfEmpty _
        Where ms.ShowingTime is Nothing _
        Select ms

    However I am getting an error saying

        'Is' operator does not accept operands of type 'Date'. Operands must be reference or nullable types.

    Is there any way around this? The model is correct; there should never be a null in Showings.ShowingTime. But if you do a left join, and there are no showings for a particular movie, then ShowingTime SHOULD be Nothing for that movie... Thanks everyone for your help.
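    One way around the non-nullable date, sketched below in C# (the VB equivalent of the Where clause would be Where ms Is Nothing): with DefaultIfEmpty the joined entity itself is Nothing/null for movies that have no showings, so you can test the entity rather than its date column. Names follow the question and are otherwise assumptions.

        // Sketch: test the outer-joined entity for null instead of its non-nullable date.
        var moviesWithNoShowings =
            from m in dbContext.Movies
            join s in dbContext.Showings on m.ID equals s.MovieID into movieShowings
            from ms in movieShowings.DefaultIfEmpty()
            where ms == null
            select m;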

    Read the article

  • Deleting a user > need to also delete their project, and then activities for that project? (PHP, MySQL)

    - by Jamie
    Hi guys, really stuck with this... basically my system has 4 tables: users, projects, user_projects and activities. The users table has a usertype field which defines whether they are admin or user (by an integer)... An admin can create a project, create an activity for the project and assign a user (limited access user) an activity. Therefore, this setup means that an admin is never directly associated with an activity (instead a project). When my head admin user deletes an admin, I need all projects and activities (for their projects) to be deleted also. My delete script for a user is simple so far and works, but I'm having trouble on how to gain the project ID in order to know which activities to remove (associated with the projects which are about to be deleted):

        $userid = $_GET['userid'];

        $query = "DELETE FROM users WHERE userid=".$userid;
        $result = mysql_query($query, $connection) or die("Error: ".mysql_error());

        $query = "DELETE FROM projects WHERE userid=".$userid;
        $result = mysql_query($query, $connection) or die("Error: ".mysql_error());

        $query = "DELETE FROM userprojects WHERE userid=".$userid;
        $result = mysql_query($query, $connection) or die("Error: ".mysql_error());

        $query = "DELETE FROM activities WHERE projectid=".$projectid;
        $result = mysql_query($query, $connection) or die("Error: ".mysql_error());

    Now the first three queries execute fine, obviously because the userid is being retrieved successfully. However the 4th and final query I know is wrong, because there is no projectid to be gained from anywhere; I put it there to help understand what I am trying to get!! :D I'm guessing that I would need something like 'WHERE projectid=' then something to gather the removed projects from the userid which can be related to the activities for that project(s)!! It's a simple concept but I'm having trouble... please excuse any bad code as I am learning also. Thanks for any help!

    Read the article

  • JVM CMS Garbage Collecting Issues

    - by jlintz
    I'm seeing the following symptoms on an application's GC log file with the Concurrent Mark-Sweep collector:

        4031.248: [CMS-concurrent-preclean-start]
        4031.250: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
        4031.250: [CMS-concurrent-abortable-preclean-start]
         CMS: abort preclean due to time
        4036.346: [CMS-concurrent-abortable-preclean: 0.159/5.096 secs] [Times: user=0.00 sys=0.01, real=5.09 secs]
        4036.346: [GC[YG occupancy: 55964 K (118016 K)]4036.347: [Rescan (parallel) , 0.0641200 secs]4036.411: [weak refs processing, 0.0001300 secs]4036.411: [class unloading, 0.0041590 secs]4036.415: [scrub symbol & string tables, 0.0053220 secs] [1 CMS-remark: 16015K(393216K)] 71979K(511232K), 0.0746640 secs] [Times: user=0.08 sys=0.00, real=0.08 secs]

    The preclean process keeps aborting continuously. I've tried adjusting CMSMaxAbortablePrecleanTime to 15 seconds, from the default of 5, but that has not helped. The current JVM options are as follows:

        -Djava.awt.headless=true
        -Xms512m -Xmx512m -Xmn128m -XX:MaxPermSize=128m
        -XX:+HeapDumpOnOutOfMemoryError
        -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
        -XX:BiasedLockingStartupDelay=0 -XX:+DoEscapeAnalysis -XX:+UseBiasedLocking -XX:+EliminateLocks
        -XX:+CMSParallelRemarkEnabled
        -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC -Xloggc:gc.log
        -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenPrecleaningEnabled
        -XX:CMSInitiatingOccupancyFraction=50
        -XX:ReservedCodeCacheSize=64m
        -Dnetworkaddress.cache.ttl=30
        -Xss128k

    It appears the concurrent-abortable-preclean never gets a chance to run. I read through http://blogs.sun.com/jonthecollector/entry/did_you_know which had a suggestion of enabling CMSScavengeBeforeRemark, but the side effects of pausing did not seem ideal. Could anyone offer up any suggestions? Also I was wondering if anyone had a good reference for grokking the CMS GC logs, in particular this line:

        [1 CMS-remark: 16015K(393216K)] 71979K(511232K), 0.0746640 secs]

    Not clear on what memory regions those numbers are referring to.

    Read the article

  • VS2008 EF and non-CRUD SP usage.

    - by SteveO
    Using an edmx version of EF. My returned data is a join between tables that has a COMPOUND filter on the primary table. In essence this query is going to return a SEGMENT of Law codes and descriptions that a user can tie to a Sex Offender report. I have a complex SP because Linq2SQL cannot pass in a BETWEEN statement, or at least that is how I understand the error. The code itself is broken up by '-' marks, e.g. 39-13-504 "Aggravated Sexual Battery". The user wants to have a query with 4 params (39, 13, 500, 599): get all codes from Title 39 and Chapter 13 with parts between 500 and 599. I have the SP in place to do the work; is there a way to consume the SP within the EF? I find many blogs about SPs with CRUD operations as their use of an SP. That doesn't fit this need at all. I do not have a single table but a join to the "prior selections" table that maps the key for the code. Any pointers on how to get a READ with an SP? TIA
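    If the function-import route stays blocked, one hedged fallback is to call the stored procedure with plain ADO.NET and map the rows into a read-only object for the lookup screen, bypassing the EDMX for this one query; the procedure name, parameter names, and column names in the sketch below are all assumptions.

        // Hedged sketch: read-only call to the range-filtering stored procedure.
        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetLawCodeSegment", con))   // hypothetical name
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Title", 39);
            cmd.Parameters.AddWithValue("@Chapter", 13);
            cmd.Parameters.AddWithValue("@PartFrom", 500);
            cmd.Parameters.AddWithValue("@PartTo", 599);

            con.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    string code = reader.GetString(reader.GetOrdinal("Code"));               // assumed column
                    string description = reader.GetString(reader.GetOrdinal("Description")); // assumed column
                    // populate whatever lightweight object the report binds to
                }
            }
        }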

    Read the article

  • Linq - how does it work??

    - by clarkeyboy
    Hey, I have just been looking into Linq with ASP.Net. It is very neat indeed. I was just wondering - how do all the classes get populated? I mean in ASP.Net, suppose you have a Linq file called Catalogue, and you then use a For loop to loop through Catalogue.Products and print each Product name. How do the details get stored? Does it just go through the Products table on page load and create another instance of class Product for each row, effectively copying an entire table into an array of class Product? If so, I think I have created a system very much like this, in the sense that there is a SiteContent module with an instance of each Manager class - for example there is UserManager, ProductManager, SettingManager and the like. UserManager contains an instance of the User class for each row in the Users table. They also contain methods such as Create, Update and Remove. These Managers and their "Items" are created on every page load. This just makes it nice and easy to access users, products, settings etc in every page as far as I, the developer, am concerned. In any subsequent pages I need to create, I just need to reference SiteContent.UserManager to access a list of users, rather than executing a query from within that page (i.e. this method separates out data access from the workings of the page, in the same way as using code behind separates out the workings of the page from how the page is laid out). However the problem is that this technique seems rather slow. I mean it is effectively creating a database on every page load, taking data from another database. I have taken measures such as preventing, for example, the ProductManager from being created if it is not referenced on page load. Therefore it does not load data into storage when it is not needed. My question is basically whether my technique does the exact same thing as Linq, in the sense of duplicating data from tables into properties of classes. Thanks in advance for any advice or answers about this. Regards, Richard Clarke
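    For the core question, LINQ to SQL does not copy tables into arrays of objects up front: a query is an expression that is translated to SQL and executed only when it is enumerated, and only the rows (and columns) the query asks for are materialised as objects. A small hedged C# illustration, with made-up property names, is below.

        // Sketch: nothing hits the database until the query is enumerated.
        var expensive = db.Products.Where(p => p.Price > 100m);   // no SQL has run yet

        foreach (var p in expensive)               // SELECT ... WHERE Price > 100 runs here;
            Console.WriteLine(p.Name);             // only matching rows become Product objects

        int count = db.Products.Count();           // translated to SELECT COUNT(*); no objects materialised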

    Read the article

  • Opening a View with a Table without a NavigationController

    - by Ken
    Hiya, I'm pretty new to developing with Cocoa Touch/Xcode and I came across a problem. I'm making a sort of RSS reader for a news site and I have 5 views of tables navigated with 5 tabs in a TabBarController. If someone selects a news item I want another view to open showing the complete news item. My problem is that it won't work. This is my code:

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return [[[self rssParser] rssItems] count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"rssItemCell"];
            if (nil == cell) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:@"rssItemCell"] autorelease];
            }
            cell.textLabel.text = [[[[self rssParser] rssItems] objectAtIndex:indexPath.row] title];
            cell.detailTextLabel.text = [[[[self rssParser] rssItems] objectAtIndex:indexPath.row] description];
            cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;
            return cell;
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            [[self appDelegate] setCurrentlySelectedBlogItem:[[[self rssParser] rssItems] objectAtIndex:indexPath.row]];
            [self.appDelegate loadNewsDetails];
        }

    And it calls this method in my delegate:

        - (void)loadNewsDetails {
            [[self rootController] pushViewController:detailController animated:YES];
        }

    Please tell me what I'm doing wrong. BTW I do not want to use a NavigationController, just the tab bar I'm using. Thanks in advance, Ken

    Read the article

  • LINQ to DataSet Dataclass assignment question

    - by Overhed
    Hi all, I'm working on a Silverlight project trying to access a database using LINQ To DataSet and then sending data over to Silverlight via .ASMX web service. I've defined my DataSet using the Server Explorer tool (dragging and dropping all the different tables that I'm interested in). The DataSet is able to access the server and database with no issues. Below is code from one of my Web Methods:

        public List<ClassSpecification> getSpecifications()
        {
            DataSet2TableAdapters.SpecificationTableAdapter Sta = new DataSet2TableAdapters.SpecificationTableAdapter();

            return (from Spec in Sta.GetData().AsEnumerable()
                    select new ClassSpecification()
                    {
                        Specification = Spec.Field<String>("Specification"),
                        SpecificationType = Spec.Field<string>("SpecificationType"),
                        StatusChange = Spec.Field<DateTime>("StatusChange"),
                        Spec = Spec.Field<int>("Spec")
                    }).ToList<ClassSpecification>();
        }

    I created a "ClassSpecification" data class which is going to contain my data and it has all the table fields as properties. My question is, is there a quicker way of doing the assignment than what is shown here? There are actually about 10 more fields, and I would imagine that since my DataSet knows my table definition, that I would have a quicker way of doing the assignment than going field by field. I tried just "select new ClassSpecification()).ToList Any help would be greatly appreciated.
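    One common shortcut, offered below as a hedged sketch rather than a drop-in replacement, is a small reflection-based mapper that copies DataRow columns into properties with matching names, so adding a field only means adding a property. The helper name and the assumption that property names match column names are mine.

        // Hedged sketch: generic DataRow-to-object mapper keyed on matching names.
        static T MapRow<T>(DataRow row) where T : new()
        {
            var item = new T();
            foreach (var prop in typeof(T).GetProperties())
            {
                if (!row.Table.Columns.Contains(prop.Name) || row[prop.Name] == DBNull.Value)
                    continue;   // skip missing columns and database nulls
                prop.SetValue(item, row[prop.Name], null);
            }
            return item;
        }

        // usage inside the web method
        public List<ClassSpecification> getSpecifications()
        {
            var Sta = new DataSet2TableAdapters.SpecificationTableAdapter();
            return Sta.GetData().AsEnumerable()
                      .Select(row => MapRow<ClassSpecification>(row))
                      .ToList();
        }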

    Read the article

  • Cannot update a single field using Linq to Sql

    - by KallDrexx
    I am having a hard time attempting to update a single field without having to retrieve the whole record prior to saving. For example, in my web application I have an in place editor for the Name and Description fields of an object. Once you edit either field, it sends the new field (with the object's ID value) to the web server. What I want is the webserver to take that value and ID and only update the one field. There are only two ways google tells me to do this: 1) When I get the value I want to change, the value and the ID, retrieve the record from the database, update the field in the c# object, and then send it back to the server. I don't like this method because not only does it include a completely unnecessary database read call (which includes two tables due to the way my schema is). 2) Set UpdateCheck for all the fields (but the primary keys) to UpdateCheck.Never. This doesn't work for me (I think) due to my mapping layer between the Linq to Sql and my Entity/ViewModel layer. When I convert my entity into the linq to sql db object it seems to be updating those fields regardless of the UpdateCheck setting. This might be just because of integers, since not setting an int means it is a zero (and no, I can't use int? instead). Are there any other options that I have?

    Read the article

  • MySQL use certain columns, based on other columns

    - by Rabbott
    I have this query:

        SELECT COUNT(articles.id) AS count
        FROM articles, xml_documents, streams
        WHERE articles.xml_document_id = xml_documents.id
            AND xml_documents.stream_id = streams.id
            AND articles.published_at BETWEEN '2010-01-01' AND '2010-04-01'
            AND streams.brand_id = 7

    which just uses the default equijoin by specifying three tables in csv format in the FROM clause. What I need to do is group this by a value found within articles.source (raw XML), so it could turn into this:

        SELECT COUNT(articles.id) AS count,
            ExtractValue(articles.source, "/article/media_type") AS media_type
        FROM articles, xml_documents, streams
        WHERE articles.xml_document_id = xml_documents.id
            AND xml_documents.stream_id = streams.id
            AND articles.published_at BETWEEN '2010-01-01' AND '2010-04-01'
            AND streams.brand_id = 7
        GROUP BY media_type

    which works fine. The problem is, I'm using Rails, and using STI for the xml_documents table. The articles.source that is provided to the ExtractValue method will be of a couple different formats. So what I need to be able to do is use "/article/media_type" if xml_documents.type = 'source one' and use "/article/source" if xml_documents.type = 'source two'. This is just because the two document types format their XML differently, but I don't want to have to run multiple queries to retrieve this information. It would be nice if one could use a ternary operator, but I don't think this is possible.

    EDIT: At this point I am looking at making a temp table, or simply using UNION to place multiple result sets together.

    Read the article

  • How do you use asynchronous ORMs without huge callback chains?

    - by hornairs
    I'm using the relatively immature Joose Javascript ORM plugin (project page) to persist objects in an Appcelerator Titanium (company page) mobile project. Since it's client side storage, the application has to check to see if the database is initialized before starting up the ORM since it inspects the DB tables to construct the classes. My problem is that this sequence of operations (and if this one is like this, other things down the road) takes a lot of callbacks to complete. I have a lot of jumping around in the code that isn't apparent to a maintainer and results in some complex call graphs and whatnot. So, I ask these questions:

    - How would you asynchronously initialize a database and populate it with seed data using an ORM that needs the schema to be correct to function?
    - Do you have any general strategies or links for async/event driven programming and keeping the call graph simple and understandable?
    - Do you have any suggestions for Javascript ORMs/meta object systems that work with HTML 5 as a storage engine and are hopefully framework agnostic?
    - Am I just a big newb and should be able to work this out with ease?

    Thanks folks!

    Read the article

  • Bulk Copy from one server to another

    - by Joseph
    Hi All, I have a situation where I need to copy part of the data from one server to another. The table schemas are exactly the same. I need to move partial data from the source, which may or may not be available in the destination table. The solution I'm thinking of is to use bcp to export data to a text (or .dat) file and then take that file to the destination, as both are not accessible at the same time (different networks), then import the data to the destination. There are some conditions I need to satisfy:

    1. I need to export only a list of data from the table, not the whole table. My client is going to give me IDs which need to be moved from source to destination. I've around 3000 records in the master table, and the same in the child tables too. What I expect is only 300 records to be moved.

    2. If the record exists in the destination, the client is going to instruct us whether to ignore or overwrite, case by case. 90% of the time, we need to ignore the records without overwriting, but log the records in a log file.

    Please help me with the best approach. I thought of using BCP with the query option to filter the data, but while importing, how do I bypass inserting the existing records? If I need to overwrite, how do I do it? Thanks a lot in advance. ~Joseph

    Read the article

  • Cannot turn off autocommit in a script using the Django ORM

    - by Wes
    I have a command line script that uses the Django ORM and MySQL backend. I want to turn off autocommit and commit manually. For the life of me, I cannot get this to work. Here is a pared down version of the script. A row is inserted into testtable every time I run this and I get this warning from MySQL: "Some non-transactional changed tables couldn't be rolled back".

        #!/usr/bin/python
        import os
        import sys

        django_dir = os.path.abspath(os.path.normpath(os.path.join(os.path.dirname(__file__), '..')))
        sys.path.append(django_dir)
        os.environ['DJANGO_DIR'] = django_dir
        os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

        from django.core.management import setup_environ
        from myproject import settings
        setup_environ(settings)

        from django.db import transaction, connection

        cursor = connection.cursor()
        cursor.execute('SET autocommit = 0')
        cursor.execute('insert into testtable values (\'X\')')
        cursor.execute('rollback')

    I also tried placing the insert in a function and adding Django's commit_manually wrapper, like so:

        @transaction.commit_manually
        def myfunction():
            cursor = connection.cursor()
            cursor.execute('SET autocommit = 0')
            cursor.execute('insert into westest values (\'X\')')
            cursor.execute('rollback')

        myfunction()

    I also tried setting DISABLE_TRANSACTION_MANAGEMENT = True in settings.py, with no further luck. I feel like I am missing something obvious. Any help you can give me is greatly appreciated. Thanks!

    Read the article
