Search Results

Search found 23955 results on 959 pages for 'insert query'.

Page 412/959 | < Previous Page | 408 409 410 411 412 413 414 415 416 417 418 419  | Next Page >

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a dataflow task, I can slip a rowcount into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the rowcount was 0. This works well for me, but I have no corresponding strategy for SQL tasks expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect of that: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a count(*) aggregate, test whether that returns a number greater than 0, and if so run the query again without the count. That's ugly because I have the same query in two places that I need to keep in sync. Is there a better way?
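    One hedged sketch of a workaround (the table and column names below are placeholders, not from the question): wrap the lookup in aggregates so that it always returns exactly one row, and branch on a RowsFound column inside the package instead of relying on the task failing.

        -- dbo.Lookup, SomeValue and LookupKey are hypothetical names.
        -- Aggregating guarantees exactly one result row even when nothing matches,
        -- so the Execute SQL Task can always assign its output variables.
        SELECT ISNULL(MAX(SomeValue), -1) AS SomeValue,
               COUNT(*)                   AS RowsFound
        FROM   dbo.Lookup
        WHERE  LookupKey = ?;

    The package can then test the RowsFound variable in a precedence constraint, keeping a single copy of the query.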

    Read the article

  • Inserting into a bitstream

    - by evilertoaster
    I'm looking for a way to efficiently insert bits into a bitstream and have it 'overflow', padding with 0's. So for example if you had a byte array with 2 bytes: 231 and 109 (11100111 01101101), and did BitInsert(byteArray,4,00) it would insert two bits at bit offset 4 making 11100001 11011011 01000000 (225,219,64). It would be OK even if the method only allowed 1-bit insertions e.g. BitInsert(byteArray,4,true) or BitInsert(byteArray,4,false). I have one method of doing it, but it has to walk the stream with a bitmask bit by bit, so I'm wondering if there's a simpler approach... Answers in assembly or a C derivative would be appreciated.
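    A hedged C sketch of a byte-at-a-time alternative (the function and variable names are made up, and the caller is assumed to supply a trailing spare byte initialised to 0 to absorb the overflow):

        #include <stdint.h>
        #include <stddef.h>

        /* Insert a single bit at bitOffset, shifting every later bit one position
         * toward the end of the buffer. nbytes includes the trailing spare byte. */
        static void bit_insert(uint8_t *buf, size_t nbytes, size_t bitOffset, int bit)
        {
            size_t byteIdx = bitOffset / 8;
            unsigned shift = bitOffset % 8;

            /* Shift the bytes after the target byte right by one bit, carrying
             * the low bit of the preceding (still unmodified) byte. */
            for (size_t i = nbytes - 1; i > byteIdx; --i)
                buf[i] = (uint8_t)((buf[i] >> 1) | (buf[i - 1] << 7));

            /* Split the target byte: bits above the offset stay put, the new bit
             * drops in at the offset, and the remaining bits slide down by one. */
            uint8_t hiMask = (uint8_t)(0xFF << (8 - shift));
            uint8_t hi = buf[byteIdx] & hiMask;
            uint8_t lo = (uint8_t)((buf[byteIdx] & (uint8_t)~hiMask) >> 1);
            buf[byteIdx] = (uint8_t)(hi | (uint8_t)((bit & 1) << (7 - shift)) | lo);
        }

    Calling this with a 0 bit at offset 4 and then again at offset 5 on {231, 109, 0} reproduces the example above (225, 219, 64); multi-bit insertion is just repeated calls or a wider shift of the same idea.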

    Read the article

  • How do I make OneNote 2007 images searchable when inserted via code?

    - by Scott Bruns
    When I insert an image into OneNote 2007 using C#, my images have 'Make Text in Image Searchable' set to Disabled. How do I insert an image with 'Make Text in Image Searchable' enabled, or how do I enable this property after the image is imported? I have already imported a lot of images. How do I make the existing imported images searchable? I already know how to do this manually by right-clicking the image and setting the language. The OCR works fine, I just need to do it automatically.

    Read the article

  • mysql_connect() access denied for system@localhost (using password: NO)

    - by user309381
    class MySQLDatabase {
        public $connection;

        function __construct() {
            $this->open_connection();
        }

        public function open_connection() {
            $this->connection = mysql_connect(DB_SERVER, DB_USER, DB_PASS);
            if (!$this->connection) {
                die("Database Connection Failed" . mysql_error());
            } else {
                $db_select = mysql_select_db(DB_NAME, $this->connection);
                if (!$db_select) {
                    die("Database Selection Failed" . mysql_error());
                }
            }
        }

        function mysql_prep($value) {
            if (get_magic_quotes_gpc()) {
                $value = stripslashes($value);
            }
            // Quote if not a number
            if (!is_numeric($value)) {
                $value = "'" . mysql_real_escape_string($value) . "'";
            }
            return $value;
        }

        public function close_connection() {
            if (isset($this->connection)) {
                mysql_close($this->connection);
                unset($this->connection);
            }
        }

        public function query(/* $sql */) {
            $sql = "SELECT * FROM users WHERE id = 1";
            $result = mysql_query($sql);
            $this->confirm_query($result);
            // return $result;
            while ($found_user = mysql_fetch_assoc($result)) {
                echo $found_user['username'];
            }
        }

        private function confirm_query($result) {
            if (!$result) {
                die("The Query has problem" . mysql_error());
            }
        }
    }

    $database = new MySQLDatabase();
    $database->open_connection();
    $database->query();
    $database->close_connection();
    ?>

    Read the article

  • Queue-like data structure with fast search and insertion

    - by Max
    I need a data structure with the following properties: It contains integer numbers, no duplicates. After it reaches the maximum size the first element is removed. So if the capacity is 3, then this is how it would look when putting sequential numbers into it: {}, {1}, {1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4, 5} etc. Only two operations are needed: inserting a number into this container (INSERT) and checking if the number is already in the container (EXISTS). The number of EXISTS operations is expected to be approximately 2 * the number of INSERT operations. I need these operations to be as fast as possible. What would be the fastest data structure or combination of data structures for this scenario?
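    A hedged sketch of the usual combination for this kind of workload: a FIFO queue to remember insertion order plus a hash set for constant-time EXISTS checks (the class and method names are illustrative only).

        import java.util.ArrayDeque;
        import java.util.HashSet;

        // Bounded container: no duplicates, oldest element evicted once full.
        class BoundedUniqueQueue {
            private final int capacity;
            private final ArrayDeque<Integer> order = new ArrayDeque<>(); // insertion order
            private final HashSet<Integer> members = new HashSet<>();     // fast membership

            BoundedUniqueQueue(int capacity) {
                this.capacity = capacity;
            }

            boolean exists(int value) {              // EXISTS: expected O(1)
                return members.contains(value);
            }

            void insert(int value) {                 // INSERT: amortised O(1)
                if (members.contains(value)) return; // keep it duplicate-free
                if (order.size() == capacity) {
                    members.remove(order.removeFirst()); // drop the oldest value
                }
                order.addLast(value);
                members.add(value);
            }
        }

    With capacity 3, inserting 1 through 5 walks through exactly the states listed in the question.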

    Read the article

  • g_tree_insert overwrites all data

    - by pechenie
    I wonder how I should use the GTree (from GLib) to store data? Every new value I insert into the GTree with the g_tree_insert routine overwrites the previous one!

        GTree *tree;
        //init
        tree = g_tree_new( g_str_equal );  //"g_str_equal" is a GLib default compare func
        //...
        for( i = 0; i < 100; ++i )
            g_tree_insert( tree, random_key(), random_value() );  //insert some random vals
        //
        printf( "%d", g_tree_nnodes( tree ) );  //should be 100? NO! Prints "1"!!!

    What am I doing wrong? Thank you.
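    A hedged note on the GLib API in play here: g_tree_new() expects a GCompareFunc that returns negative/zero/positive like strcmp(), while g_str_equal() is a boolean GEqualFunc intended for GHashTable; handed to a GTree, it makes every distinct key compare as equal, which would match the overwriting symptom. A minimal sketch of the strcmp-style setup:

        #include <glib.h>

        /* g_strcmp0() has strcmp() semantics (and tolerates NULL), so it can act
         * as the ordering function a GTree needs for string keys. */
        GTree *tree = g_tree_new((GCompareFunc) g_strcmp0);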

    Read the article

  • How can I detect 'any' ajax request being completed using jQuery?

    - by Brian Scott
    I have a page where I can insert some javascript / jquery to manipulate the output. I don't have any other control over the page markup etc. I need to add an extra element via jquery after each present on the page. The issue is that the elements are generated via an asynchronous call on the existing page which occurs after $(document).ready is complete. Essentially, I need a way of calling my jquery after the page has loaded and the subsequent ajax calls have completed. Is there a way to detect the completion of any ajax call on the page and then call my own custom function to insert the additional elements after the newly created s ?
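    Assuming the page's asynchronous calls go through jQuery (an assumption worth checking, since jQuery's global events only see jQuery-initiated requests), a minimal sketch of hooking completions looks like this:

        // Runs after each individual jQuery ajax request completes.
        $(document).ajaxComplete(function (event, xhr, settings) {
            // insert the extra element after each newly created one
        });

        // Or run once, only after all outstanding jQuery requests have finished.
        $(document).ajaxStop(function () {
            // the asynchronously generated markup should be present by now
        });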

    Read the article

  • copy rows before updating them to preserve archive in Postgres

    - by punkish
    I am experimenting with creating a table that keeps a version of every row. The idea is to be able to query for how the rows were at any point in time, even if the query has JOINs. Consider a system where the primary resource is books, that is, books are queried for, and author info comes along for the ride:

        CREATE TABLE authors (
            author_id   INTEGER NOT NULL,
            version     INTEGER NOT NULL CHECK (version > 0),
            author_name TEXT,
            is_active   BOOLEAN DEFAULT '1',
            modified_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (author_id, version)
        );

        INSERT INTO authors (author_id, version, author_name)
        VALUES (1, 1, 'John'), (2, 1, 'Jack'), (3, 1, 'Ernest');

    I would like to be able to update the above like so

        UPDATE authors SET author_name = 'Jack K' WHERE author_id = 2;

    and end up with

        2, 1, Jack, t, 2012-03-29 21:35:00
        2, 2, Jack K, t, 2012-03-29 21:37:40

    which I can then query with

        SELECT author_name, modified_on
        FROM authors
        WHERE author_id = 2 AND modified_on < '2012-03-29 21:37:00'
        ORDER BY version DESC LIMIT 1;

    to get

        2, 1, Jack, t, 2012-03-29 21:35:00

    Something like the following doesn't really work

        CREATE OR REPLACE FUNCTION archive_authors() RETURNS TRIGGER AS $archive_author$
        BEGIN
            IF (TG_OP = 'UPDATE') THEN
                -- The following fails because the author_id,version PK already exists
                INSERT INTO authors (author_id, version, author_name)
                VALUES (OLD.author_id, OLD.version, OLD.author_name);
                UPDATE authors SET version = OLD.version + 1
                WHERE author_id = OLD.author_id AND version = OLD.version;
                RETURN NEW;
            END IF;
            RETURN NULL; -- result is ignored since this is an AFTER trigger
        END;
        $archive_author$ LANGUAGE plpgsql;

        CREATE TRIGGER archive_author
        AFTER UPDATE OR DELETE ON authors
        FOR EACH ROW EXECUTE PROCEDURE archive_authors();

    How can I achieve the above? Or, is there a better way to accomplish this? Ideally, I would prefer not to create a shadow table to store the archived rows.
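    One hedged variation that sidesteps the primary-key collision (a sketch only, and it assumes updates are issued against the latest version of a row): leave the old row untouched, write the incoming values as a brand-new version from a BEFORE trigger, and cancel the original UPDATE by returning NULL.

        CREATE OR REPLACE FUNCTION archive_authors() RETURNS TRIGGER AS $archive_author$
        BEGIN
            -- Insert the incoming values as the next version; the INSERT does not
            -- re-fire this UPDATE trigger, and the old row stays behind as the archive.
            INSERT INTO authors (author_id, version, author_name, is_active)
            VALUES (OLD.author_id, OLD.version + 1, NEW.author_name, NEW.is_active);
            RETURN NULL; -- returning NULL from a BEFORE trigger skips the UPDATE itself
        END;
        $archive_author$ LANGUAGE plpgsql;

        CREATE TRIGGER archive_author
        BEFORE UPDATE ON authors
        FOR EACH ROW EXECUTE PROCEDURE archive_authors();

    The new row then picks up its modified_on default automatically. The trade-off is that the UPDATE reports zero affected rows, which some client code may not expect.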

    Read the article

  • Create a CalendarDateSelect from JavaScript

    - by 99miles
    Not sure why I can't figure this out. When a user makes a selection, a javascript method is called, at which point I want to add a calendardateselect to an existing div tag from the javascript. According to the docs, it seems like this should work: var a = new CalendarDateSelect( document.getElementById("date_area"), {embedded:true, year_range:10} ); $('date_area').insert(a); ... or maybe this.... var newScript = document.createElement('script'); newScript.type = 'text/javascript'; var tn = document.createTextNode("new CalendarDateSelect( $(this).previous() )"); newScript.appendChild(tn); $('date_area').insert(newScript); http://code.google.com/p/calendardateselect/wiki/JavascriptDocumentation But neither of them are working. I feel like I've tried everything. Ideas?

    Read the article

  • SQL Server problems reading columns with a foreign key

    - by illdev
    I have a weird situation, where simple queries seem to never finish for instance SELECT top 100 ArticleID FROM Article WHERE ProductGroupID=379114 returns immediately SELECT top 1000 ArticleID FROM Article WHERE ProductGroupID=379114 never returns SELECT ArticleID FROM Article WHERE ProductGroupID=379114 never returns SELECT top 1000 ArticleID FROM Article returns immediately By 'returning' I mean 'in query analyzer the green check mark appears and it says "Query executed successfully"'. I sometimes get the rows painted to the grid in qa, but still the query goes on waiting for my client to time out - 'sometimes': SELECT ProductGroupID AS Product23_1_, ArticleID AS ArticleID1_, ArticleID AS ArticleID18_0_, Inventory_Name AS Inventory3_18_0_, Inventory_UnitOfMeasure AS Inventory4_18_0_, BusinessKey AS Business5_18_0_, Name AS Name18_0_, ServesPeople AS ServesPe7_18_0_, InStock AS InStock18_0_, Description AS Descript9_18_0_, Description2 AS Descrip10_18_0_, TechnicalData AS Technic11_18_0_, IsDiscontinued AS IsDisco12_18_0_, Release AS Release18_0_, Classifications AS Classif14_18_0_, DistributorName AS Distrib15_18_0_, DistributorProductCode AS Distrib16_18_0_, Options AS Options18_0_, IsPromoted AS IsPromoted18_0_, IsBulkyFreight AS IsBulky19_18_0_, IsBackOrderOnly AS IsBackO20_18_0_, Price AS Price18_0_, Weight AS Weight18_0_, ProductGroupID AS Product23_18_0_, ConversationID AS Convers24_18_0_, DistributorID AS Distrib25_18_0_, type AS Type18_0_ FROM Article AS articles0_ WHERE (IsDiscontinued = '0') AND (ProductGroupID = 379121) shows this behavior. I have no idea what is going on. Probably select is broken ;) I got a foreign key on ProductGroups ALTER TABLE [dbo].[Article] WITH CHECK ADD CONSTRAINT [FK_ProductGroup_Articles] FOREIGN KEY([ProductGroupID]) REFERENCES [dbo].[ProductGroup] ([ProductGroupID]) GO ALTER TABLE [dbo].[Article] CHECK CONSTRAINT [FK_ProductGroup_Articles] there are some 6000 rows and IsDiscontinued is a bit, not null, but leaving this condition out does not change the outcome. Anyone can tell me how to handle such a situation? More info, anyone? Additional Info: this does not seem to be restricted to this Foreign Key, but all/some referencing this entity.
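    When a plain SELECT hangs like this, one hedged first check (a diagnostic, not a fix) is whether the session is simply blocked by another connection holding locks on Article, for example an open, uncommitted transaction:

        -- Who is blocking whom (SQL Server 2005 and later).
        SELECT session_id, blocking_session_id, wait_type, wait_time, command
        FROM   sys.dm_exec_requests
        WHERE  blocking_session_id <> 0;

        -- Or, on any version, check the BlkBy column here:
        EXEC sp_who2;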

    Read the article

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the columns I think I've inserted are never present. Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query them I always get 0 rows returned (for debugging, the queries I'm running are simple SELECT * FROM table and checking the Cursor's getCount()). Anybody run into anything like this before? These commands seem to run on command-line sqlite3. Are they're gotchas that I'm missing (e.g. INSERTS must/must not be semicolon terminated, or some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further. @Override public void onCreate(SQLiteDatabase db) { db.execSQL("CREATE TABLE "+ LEVEL_TABLE +" (" + " "+ _ID +" INTEGER PRIMARY KEY AUTOINCREMENT," + " level TEXT NOT NULL,"+ " rows INTEGER NOT NULL,"+ " cols INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ DYNAMICS_TABLE +" (" + " level_id INTEGER NOT NULL," + " row INTEGER NOT NULL,"+ " col INTEGER NOT NULL,"+ " type INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ SCORE_TABLE +" (" + " level_id INTEGER NOT NULL," + " score INTEGER NOT NULL,"+ " date_achieved DATE NOT NULL,"+ " name TEXT NOT NULL);"); this.enterFirstLevel(db); } And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...): private void insertDynamic(SQLiteDatabase db, int row, int col, int type) { ContentValues values = new ContentValues(); values.put("level_id", "1"); values.put("row", Integer.toString(row)); values.put("col", Integer.toString(col)); values.put("type", Integer.toString(type)); db.insertOrThrow(DYNAMICS_TABLE, "col", values); } Finally, query code looks like this: private Cursor fetchLevelDynamics(int id) { SQLiteDatabase db = this.leveldata.getReadableDatabase(); try { String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE; String[] queryArgs = new String[0]; Cursor cursor = db.rawQuery(fetchQuery, queryArgs); Activity activity = (Activity) this.context; activity.startManagingCursor(cursor); return cursor; } finally { db.close(); } }

    Read the article

  • Using Subsonic 3.0 Advanced Templates

    - by umit
    Hi all, I've been trying to use Subsonic Advanced Templates in a project for a while but most of the time I find myself writing a Stored Procedure as I can't find a proper way of doing it in code. Subsonic created corresponding objects for my DB tables and for foreign keys it created IQueryable fields inside each object. These fields are not loaded by default and a new SQL query is executed when you access them. 1- Is there a way to get all data in one query (deep load)? Also these fields can not be assigned. So when I want to create an object in a maintenance page, I can't put all the data into this object before saving it in DB: Post post = new Post(); //get photos for this post IList<PostPhoto> postPhotos = GetPostPhotos(); post.PostPhotos = postPhotos; 2- Is it possible to have one Post object with all fields set from user input? Think of the Post object above and assume I've successfully assigned its fields. Now I need to save it to the DB. 3- Is using BatchQuery the only way to do it in one query? If I have 4 photos in PostPhotos field; 2 of them previously saved and 2 of them new, can I use the Update method to handle both the adding and updating of these photos? Any ideas or links are appreciated. Cheers...

    Read the article

  • TSQL ID generation

    - by Markus
    Hi. I have a question regarding locking in TSQL. Suppose I have the following table: A(int id, varchar name), where id is the primary key, but is NOT an identity column. I want to use the following pseudocode to insert a value into this table:

        lock (A)
        uniqueID = GenerateUniqueID()
        insert into A values (uniqueID, somename)
        unlock (A)

    How can this be accomplished in terms of TSQL? The computation of the next id should be done with the table A locked, in order to prevent other sessions from doing the same operation at the same time and getting the same id.
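    A hedged T-SQL sketch of that pseudocode (it assumes the "generate" step is simply MAX(id) + 1; substitute your own GenerateUniqueID logic): take an exclusive table lock inside a transaction so no other session can compute the same id before the insert commits.

        BEGIN TRANSACTION;

        DECLARE @newId int;

        -- TABLOCKX + HOLDLOCK keeps an exclusive lock on A until COMMIT.
        SELECT @newId = ISNULL(MAX(id), 0) + 1
        FROM   A WITH (TABLOCKX, HOLDLOCK);

        INSERT INTO A (id, name) VALUES (@newId, 'somename');

        COMMIT TRANSACTION;

    An application lock (sp_getapplock) around the same steps is a common alternative that avoids locking the whole table.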

    Read the article

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (like, 1 write for every 400,000 reads approx). I'm running into a pretty big problem though when I do an INSERT into the table. MySQL completely locks up. It uses 100% CPU, and every single other table (in other databases even) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (rpm from mysql.com). Any ideas?

    Read the article

  • When I click on the checkbox, the image should be hidden, though I can't make it happen somehow and I can g

    - by user309381
    function Psend() {
        new Ajax.Request('Handler.ashx', {
            method: 'get',
            onSuccess: function(transport) {
                var response = transport.responseText || "no response text";
                //alert("Success! \n\n" + response);
                var obj = response.evalJSON(true);
                for (i = 0; i < 4; i++) {
                    DeCheBX = $('MyDiv').insert(new Element('input', {
                        'type': 'checkbox',
                        'id': "Img" + obj[i].Nam,
                        'value': obj[i].IM,
                        'onClick': 'SayHi(this,i)'
                    }));
                    document.body.appendChild(DeCheBX);
                    DeImg = $('MyDiv').insert(new Element('img', {
                        'id': "img" + obj[i].Nam,
                        'src': obj[i].IM
                    }));
                    document.body.appendChild(DeImg);
                    SayHi = function(x, i) {
                        try {
                            if ($(x).checked == true) {
                                img = "img" + obj[i].Nam;
                                alert(img);
                                $('img').hide();
                            }
                        } catch (e) {
                            alert("error");
                        }
                    };
                }
            },
            onFailure: function() { alert('Something went wrong...') }
        });
    }

    Read the article

  • Drag and drop an image from desktop to a web text editor (implementation in javascript)

    - by fatmatto
    I tried to write a reasonably short title but I failed, I guess. Hi everybody, here's what I'm trying to do: I want to implement a web text editor able to recognize when the user drags an image file over its editing surface, and have it automa(gically) start the upload and insert the image near the cursor position. In other words, I don't want the user to do the usual "insert-image-browse-ok". At the moment I am not very good at JavaScript... I know jQuery but I don't have a clear idea about how to implement this... I don't know if there's an event handler able to help me in this situation... if not, then there should be, I think, or web apps would be missing some kind of interactivity. I've heard miracles about HTML5; could it help me? I've seen such things in Google Wave, but that surface doesn't seem to be a form field... Google Labs' black magic, I guess... Thank you in advance.
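    A hedged JavaScript sketch of the HTML5 side of this (the element id and the upload step are placeholders; it only shows which event handlers are available for the job):

        // 'editor' is a hypothetical id for the editing surface.
        var surface = document.getElementById('editor');

        surface.addEventListener('dragover', function (e) {
            e.preventDefault();               // required, or the drop event never fires
        });

        surface.addEventListener('drop', function (e) {
            e.preventDefault();
            var files = e.dataTransfer.files; // files dragged in from the desktop
            for (var i = 0; i < files.length; i++) {
                if (files[i].type.indexOf('image/') === 0) {
                    // upload files[i] here (e.g. XMLHttpRequest + FormData),
                    // then insert the resulting <img> near the cursor position
                }
            }
        });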

    Read the article

  • Aquamacs and IDLWAVE

    - by nicolavianello
    I've just installed the new Aquamacs 2.0 on my Mac OS X 10.6.3, and I'm a happy user of IDLWAVE on Aquamacs for programming in IDL. Unfortunately I've run into a problem which I can't understand. I used to put the following in my configuration file (setq idlwave-surround-by-blank t) for the beautiful space around operators. This used to work until Aquamacs 2.0 preview b3 (third beta release); from then on it stopped working, and every time I type an operator (the same for '=', '<', '>' etc.) I get the following message:

        Debugger entered--Lisp error: (void-variable idlwave-expand-equal)
          (lambda nil (interactive) (self-insert-command 1) idlwave-expand-equal -1 -1)()
          call-interactively((lambda nil (interactive) (self-insert-command 1) idlwave-expand-equal -1 -1) nil nil)

    Any help is welcome.

    Read the article

  • Creating a function in Postgresql that does not return composite values

    - by celenius
    I'm learning how to write functions in Postgresql. I've defined a function called _tmp_myfunction() which takes in an id and returns a table (I also define a table object type called _tmp_mytable) -- create object type to be returned CREATE TYPE _tmp_mytable AS ( id integer, cost double precision ); -- create function which returns query CREATE OR REPLACE FUNCTION _tmp_myfunction( id integer ) RETURNS SETOF _tmp_mytable AS $$ BEGIN RETURN QUERY SELECT id, cost FROM sales WHERE id = sales.id; END; $$ LANGUAGE plpgsql; This works fine when I use one id and call it using the following approach: SELECT * FROM _tmp_myfunction(402); What I would like to be able to do is to call it, but to use a column of values instead of just one value. However, if I use the following approach I end up with all values of the table in one column, separated by commas: -- call function using all values in a column SELECT _tmp_myfunction(t.id) FROM transactions as t; I understand that I can get the same result if I use SELECT _tmp_myfunction(402); instead of SELECT * FROM _tmp_myfunction(402); but I don't know how to construct my query in such a way that I can separate out the results.
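    Two hedged ways of spreading the function's result back into separate columns when it is driven by a column of ids (sketches, not tested against this exact schema):

        -- Expand the composite return value in the select list:
        SELECT (_tmp_myfunction(t.id)).*
        FROM transactions AS t;

        -- Or, on PostgreSQL 9.3 and later, use a LATERAL join:
        SELECT t.id, f.*
        FROM transactions AS t,
             LATERAL _tmp_myfunction(t.id) AS f;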

    Read the article

  • Display data requested by an ajax.load() call once complete, not during the call.

    - by niczoom
    My jQuery code (using ajax) requests data from a PHP script (pgiproxy.php) using the following function:

        function grabPage($pageURL) {
            $homepage = file_get_contents($pageURL);
            echo $homepage;
        }

    I then extract the HTML code I need from the returned data using jQuery and insert it into a div called #BFX, as follows:

        $("#btnNewLoadMethod1").click(function(){
            $('#temp1').load('pgiproxy.php', { data : $("#formdata").serialize(), mode : "graph" }, function() {
                $('#temp').html( $('#temp1').find('center').html() );
                $('#BFX').html( $('#temp').html() );
            });
        });

    This works fine. I get the HTML data (which is a gif image) I need displayed on screen in the correct div. The problem is I can see the HTML data loading into the div (dependent on network speed), but what I want is to insert the extracted HTML code into #BFX ONLY when the ajax request has fully completed.

    Read the article

  • Problem calling stored procedure with a fixed length binary parameter using Entity Framework

    - by Dave
    I have a problem calling stored procedures with a fixed length binary parameter using Entity Framework. The stored procedure ends up being called with 8000 bytes of data no matter what size byte array I use to call the function import. To give some example, this is the code I am using. byte[] cookie = new byte[32]; byte[] data = new byte[2]; entities.Insert("param1", "param2", cookie, data); The parameters are nvarchar(50), nvarchar(50), binary(32), varbinary(2000) When I run the code through SQL profiler, I get this result. exec [dbo].[Insert] @param1=N'param1',@param2=N'param2',@cookie=0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 [SNIP because of 16000 zeros] ,@data=0x0000 All parameters went through ok other than the binary(32) cookie. The varbinary(2000) seemed to work fine and the correct length was maintained. Is there a way to prevent the extra data being sent to SQL server? This seems like a big waste of network resource.

    Read the article

  • BasicDBObject or QueryBuilder and some newbie questions of Java and mongo

    - by Kevin Xu
    Hi, I'm a fresh newbie to MongoDB. Q1: Using query = new BasicDBObject(); query.put("i", new BasicDBObject("$gt",13)); and query = new QueryBuilder().put("i").Greaterthan(13).get(): is there any difference inside the system? Q2: I've created a class: class findkv extends BasicDBObject { //is gt gte lt lte public findkv(String fieldname, String op, Object tvalue) { if (op=="") this.put(fieldname, tvalue); else this.put(fieldname, new BasicDBObject(op, tvalue)); } } Shall I use it, or shall I just use the original function? Q3: I've used the mongo shell for a few weeks and am accustomed to it, and find writing in the mongo shell faster and shorter; which side has more advantages, writing in mongo or in Java? I shall dump them from mongo to MySQL. Q4: I have an if (statement==true) return else dowhat; which it seems can't be compiled. I know I can write if (statement!=true) dowhat else return, but can I still write it in the first style? Q5: My Eclipse is Eclipse Java EE IDE for Web Developers, Version: Juno Release, Build id: 20120614-1722. I'd like to install Perl (which I haven't learned yet); I chose Install Update http://e-p-i-c.sf.net/updates/testing but it doesn't work. Is there any method to install Perl into Eclipse manually?

    Read the article

  • openDatabase Hello World

    - by cf_PhillipSenn
    I'm trying to learn about openDatabase, and I think this I'm getting it to INSERT INTO TABLE1, but I can't verify that the SELECT * FROM TABLE1 is working. <html> <head> <script src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load("jquery", "1"); </script> <script type="text/javascript"> var db; $(function(){ db = openDatabase('HelloWorld'); db.transaction( function(transaction) { transaction.executeSql( 'CREATE TABLE IF NOT EXISTS Table1 ' + ' (TableID INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, ' + ' Field1 TEXT NOT NULL );' ); } ); db.transaction( function(transaction) { transaction.executeSql( 'SELECT * FROM Table1;',function (transaction, result) { for (var i=0; i < result.rows.length; i++) { alert('1'); $('body').append(result.rows.item(i)); } }, errorHandler ); } ); $('form').submit(function() { var xxx = $('#xxx').val(); db.transaction( function(transaction) { transaction.executeSql( 'INSERT INTO Table1 (Field1) VALUES (?);', [xxx], function(){ alert('Saved!'); }, errorHandler ); } ); return false; }); }); function errorHandler(transaction, error) { alert('Oops. Error was '+error.message+' (Code '+error.code+')'); transaction.executeSql('INSERT INTO errors (code, message) VALUES (?, ?);', [error.code, error.message]); return false; } </script> </head> <body> <form method="post"> <input name="xxx" id="xxx" /> <p> <input type="submit" name="OK" /> </p> <a href="http://www.google.com">Cancel</a> </form> </body> </html>

    Read the article

  • Should Factories Persist Entities?

    - by mxmissile
    Should factories persist entities they build? Or is that the job of the caller? Pseudo Example Incoming: public class OrderFactory { public Order Build() { var order = new Order(); .... return order; } } public class OrderController : Controller { public OrderController(IRepository repository) { this.repository = repository; } public ActionResult MyAction() { var order = factory.Build(); repository.Insert(order); ... } } or public class OrderFactory { public OrderFactory(IRepository repository) { this.repository = repository; } public Order Build() { var order = new Order(); ... repository.Insert(order); return order; } } public class OrderController : Controller { public ActionResult MyAction() { var order = factory.Build(); ... } } Is there a recommended practice here?

    Read the article
