Search Results

Search found 19499 results on 780 pages for 'transaction log'.


  • ajax to populate an input type text

    - by kawtousse
    Hi, I have a text input that I want to populate with a value from the database using the AJAX technique. First I define my text field like the following:

        <td><input type="text" id="st" value=" " name="stname" onclick="donnom();" /></td>

    In JavaScript I do the following:

        xhr5.onreadystatechange = function() {
            if (xhr5.readyState == 4 && xhr5.status == 200) {
                selects5 = xhr5.responseText;
                // Use innerHTML to add the options to the list
                document.getElementById('st').innerHTML = selects5;
            }
        };
        xhr5.open("POST", "ajaxIDentifier5.jsp", true);
        xhr5.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        id = document.getElementById(idIdden).value;
        xhr5.send("id=" + id);

    In ajaxIDentifier5.jsp I put the following code:

        <%
        String id = request.getParameter("id");
        System.out.println("idDailyTimeSheet ajaxIDentifier5 as is:" + id);
        Session s = null;
        Transaction tx;
        try {
            s = HibernateUtil.currentSession();
            tx = s.beginTransaction();
            Query query = s.createQuery("select from Dailytimesheet dailytimesheet where dailytimesheet.IdDailyTimeSheet=" + id + " ");
            for (Iterator it = query.iterate(); it.hasNext();) {
                Dailytimesheet object = (Dailytimesheet) it.next();
                out.print("<input type=\"text\" id=\"st1\" value=\"" + object.getTimeFrom() + "\" name=\"starting\" onclick=\"donnom()\" />");
            }
        } catch (HibernateException e) {
            e.printStackTrace();
        }
        %>

    I want to get only the value in the text input populated from the database, because after that I will be able to change it. Thanks for help.
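
    A note on the likely sticking point: an input element has no inner HTML to replace, so assigning the server response to its value property is the usual way to populate it. A minimal sketch, assuming the JSP is changed to print just the raw value rather than a whole <input> tag:

        xhr5.onreadystatechange = function() {
            if (xhr5.readyState == 4 && xhr5.status == 200) {
                // Assign the plain response text to the existing input's value
                document.getElementById('st').value = xhr5.responseText;
            }
        };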


  • .Net Entity objectcontext thread error

    - by Chris Klepeis
    I have an n-layered ASP.NET application which returns an object from my DAL to the BAL like so:

        public IEnumerable<SourceKey> Get(SourceKey sk)
        {
            var query = from SourceKey in _dataContext.SourceKeys
                        select SourceKey;

            if (sk.sourceKey1 != null)
            {
                query = from SourceKey in query
                        where SourceKey.sourceKey1 == sk.sourceKey1
                        select SourceKey;
            }

            return query.AsEnumerable();
        }

    This result passes through my business layer and hits the UI layer to display to the end users. I do not lazy load, to prevent query execution in other layers of my application. I created another function in my DAL to delete objects:

        public void Delete(SourceKey sk)
        {
            try
            {
                _dataContext.DeleteObject(sk);
                _dataContext.SaveChanges();
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex.Message + " " + ex.StackTrace + " " + ex.InnerException);
            }
        }

    When I try to call Delete after calling the Get function, I receive this error:

        New transaction is not allowed because there are other threads running in the session

    This is an ASP.NET app. My DAL contains an entity data model. The class in which I have the above functions shares the same _dataContext, which is instantiated in my constructor. My guess is that the reader is still open from the Get function and was not closed. How can I close it?
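
    That guess is usually right: AsEnumerable() keeps the query deferred, so the data reader is still streaming results when Delete runs on the same context. One common fix is to force materialization in the DAL; a minimal sketch:

        public IEnumerable<SourceKey> Get(SourceKey sk)
        {
            IQueryable<SourceKey> query = _dataContext.SourceKeys;

            if (sk.sourceKey1 != null)
            {
                query = query.Where(k => k.sourceKey1 == sk.sourceKey1);
            }

            // ToList() executes the query immediately and closes the reader,
            // so a later Delete/SaveChanges on the same context is free to run.
            return query.ToList();
        }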


  • PHP: Aggregate Model Classes or Uber Model Classes?

    - by sunwukung
    In many of the discussions regarding the M in MVC (sidestepping ORM controversies for a moment), I commonly see Model classes described as object representations of table data (be that an Active Record, Table Gateway, Row Gateway or Domain Model/Mapper). Martin Fowler warns against the development of an anemic domain model, i.e. a class that is nothing more than a wrapper for CRUD functionality. I've been working on an MVC application for a couple of months now. The DBAL in the application started out simple (on account of my understanding - oh, the benefits of hindsight), and is organised so that Controllers invoke business logic classes, which in turn access the database via DAO/Transaction Scripts pertinent to the task at hand. There are a few "Entity" classes that aggregate these DAO objects to provide a convenient CRUD wrapper, but also embody some of the "behaviour" of that domain concept (for example, a user - since it's easy to isolate). Looking at some of the code, and thinking about refactoring it into a Rich Domain Model, it occurred to me that were I to wrap the CRUD routines and behaviour of, say, a Company into a single "Model" class, that would be a sizeable class. So, my question is this: do Models represent domain objects, business logic, service layers, or all of the above combined? How do you go about defining the responsibilities for these components?
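
    One way to answer "all of the above?" is that each concern can get its own home, so no single "Model" class has to balloon. A rough PHP sketch of that split (class and method names hypothetical):

        // The entity holds domain behaviour, a service coordinates the use case,
        // and a mapper owns the CRUD - three small classes instead of one uber-Model.
        class Company
        {
            private $employees = array();

            public function hire(Employee $employee)
            {
                $this->employees[] = $employee; // domain behaviour lives here
            }
        }

        class CompanyService
        {
            private $mapper;

            public function __construct(CompanyMapper $mapper)
            {
                $this->mapper = $mapper;
            }

            public function hireEmployee($companyId, Employee $employee)
            {
                $company = $this->mapper->find($companyId);
                $company->hire($employee);
                $this->mapper->save($company); // persistence stays in the mapper
            }
        }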


  • PayPal IPN "FAIL"

    - by Rob
    Hey all... I'm trying to figure out how to use PayPal's IPN and I've run into a wall. I want a buyer to be forwarded to a success page after making a purchase, and I want that page to show the details of their transaction. I chose IPN instead of PDT because I also want to do some other behind-the-scenes stuff with their data. Anyway, here's the code I'm using -- I'm testing in sandbox mode -- but it returns "FAIL" every time.

        $req = 'cmd=_notify-validate';
        foreach ($_POST as $key => $value) {
            $value = urlencode(stripslashes($value));
            $req .= "&$key=$value";
        }

        // post back to PayPal system to validate
        $header  = "POST /cgi-bin/webscr HTTP/1.0\r\n";
        $header .= "Content-Type: application/x-www-form-urlencoded\r\n";
        $header .= "Content-Length: " . strlen($req) . "\r\n\r\n";
        $fp = fsockopen('www.sandbox.paypal.com', 80, $errno, $errstr, 30);

        if (!$fp) {
            // HTTP ERROR
        } else {
            fputs($fp, $header . $req);
            while (!feof($fp)) {
                $res = fgets($fp, 1024);
                if (strcmp($res, "VERIFIED") == 0) {
                    // PAYMENT VALIDATED & VERIFIED!
                    echo "Validated!";
                } else if (strcmp($res, "INVALID") == 0) {
                    // PAYMENT INVALID & INVESTIGATE MANUALLY!
                    echo "Invalid!";
                }
            }
            fclose($fp);
        }
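
    One thing worth checking: fgets() returns each line with its trailing newline intact (and the postback response also begins with HTTP headers), so a strict strcmp against "VERIFIED" can never match. Trimming each line before comparing is the usual minimal fix; a sketch:

        while (!feof($fp)) {
            $res = trim(fgets($fp, 1024)); // strip "\r\n" so the comparison can succeed
            if (strcmp($res, "VERIFIED") == 0) {
                echo "Validated!";
            } else if (strcmp($res, "INVALID") == 0) {
                echo "Invalid!";
            }
        }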


  • How do I pass an arbitrary date format from C# to the SQL backend

    - by Jims
    I have a datetime field for the transaction date in the back end, so I am passing that date from the C#.NET front end in the format 2011-01-01 12:17:51.967. To do this I have written, in the presentation layer:

        string date = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);
        PropertyClass prp = new PropertyClass();
        prp.TransDate = Convert.ToDateTime(date);

    PropertyClass structure:

        public class PropertyClass
        {
            private DateTime transdate;

            public DateTime TransDate
            {
                get { return transdate; }
                set { transdate = value; }
            }
        }

    From the DAL layer I pass the transaction date like this:

        cmd.Parameters.AddWithValue("@TranSactionDate", SqlDbType.DateTime).Value = propertyObj.TransDate;

    While debugging the presentation layer, the line

        string date = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);

    gives the expected date format, but when the debugger reaches

        prp.TransDate = Convert.ToDateTime(date);

    the date changes back to 1/1/2011. My back-end SQL datetime field wants the parameter in the 2011-01-01 12:17:51.967 format, otherwise it throws an invalid date format exception.

    Note: while passing the date as a string without converting to DateTime, I get exceptions like:

        System.Data.SqlTypes.SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.
           at System.Data.SqlTypes.SqlDateTime.FromTimeSpan(TimeSpan value)
           at System.Data.SqlTypes.SqlDateTime.FromDateTime(DateTime value)
           at System.Data.SqlTypes.SqlDateTime..ctor(DateTime value)
           at System.Data.SqlClient.MetaType.FromDateTime(DateTime dateTime, Byte cb)
           at System.Data.SqlClient.TdsParser.WriteValue(Object value, MetaType type, Byte scale, Int32 actualLength, Int32 encodingByteSize, Int32 offset, TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.TdsExecuteRPC(_SqlRPC[] rpcArray, Int32 timeout, Boolean inSchema, SqlNotificationRequest notificationRequest, TdsParserStateObject stateObj, Boolean isCommandProc)
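
    Two things worth noting. A DateTime value has no format of its own: "1/1/2011" is just how the debugger displays it, and SQL Server receives the binary value, not a string. Also, AddWithValue treats its second argument as the value, so passing SqlDbType.DateTime there sends the enum rather than the date. Passing the DateTime directly as a typed parameter avoids both problems; a minimal sketch:

        // No string round-trip needed; the provider sends a typed datetime value.
        prp.TransDate = DateTime.Now;

        cmd.Parameters.Add("@TransactionDate", SqlDbType.DateTime).Value = prp.TransDate;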


  • Turning a JSON list into a POJO

    - by Josh L
    I'm having trouble getting this bit of JSON into a POJO. I'm using Jackson, configured like this:

        protected ThreadLocal<ObjectMapper> jparser = new ThreadLocal<ObjectMapper>();

        public void receive(Object object) {
            try {
                if (object instanceof String && ((String) object).length() != 0) {
                    ObjectDefinition t = null;
                    if (parserChoice == 0) {
                        if (jparser.get() == null) {
                            jparser.set(new ObjectMapper());
                        }
                        t = jparser.get().readValue((String) object, ObjectDefinition.class);
                    }
                    Object key = t.getKey();
                    if (key == null) return;
                    transaction.put(key, t);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    Here's the JSON that needs to be turned into a POJO:

        {
            "id": "exampleID1",
            "entities": {
                "tags": [
                    { "text": "textexample1", "indices": [2, 14] },
                    { "text": "textexample2", "indices": [31, 36] },
                    { "text": "textexample3", "indices": [37, 43] }
                ]
            }
        }

    And lastly, here's what I currently have for the Java class:

        protected Entities entities;

        @JsonIgnoreProperties(ignoreUnknown = true)
        protected class Entities {
            public Entities() {}

            protected Tags tags;

            @JsonIgnoreProperties(ignoreUnknown = true)
            protected class Tags {
                public Tags() {}

                protected String text;

                public String getText() { return text; }
                public void setText(String text) { this.text = text; }
            }

            public Tags getTags() { return tags; }
            public void setTags(Tags tags) { this.tags = tags; }
        }

        // Getters & setters
        ...

    I've been able to translate the simpler objects into POJOs, but the list has me stumped. Any help is appreciated. Thanks!
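
    Since "tags" is a JSON array, Jackson needs a collection type to bind it to; a List of tag objects (with indices as a List of integers) is the usual shape. Note also that the nested classes should be static - Jackson cannot instantiate a non-static inner class. A sketch of what that might look like:

        @JsonIgnoreProperties(ignoreUnknown = true)
        public static class Entities {
            private List<Tag> tags; // a JSON array binds to a List

            public List<Tag> getTags() { return tags; }
            public void setTags(List<Tag> tags) { this.tags = tags; }
        }

        @JsonIgnoreProperties(ignoreUnknown = true)
        public static class Tag {
            private String text;
            private List<Integer> indices;

            public String getText() { return text; }
            public void setText(String text) { this.text = text; }
            public List<Integer> getIndices() { return indices; }
            public void setIndices(List<Integer> indices) { this.indices = indices; }
        }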


  • Facebook Open Graph - post to all approved users' feeds.

    - by simnom
    Hi, I'm struggling to get to grips with posting a feed item to all the members of an approved application. Within the application settings for the user it states that the application has permission to post to the wall, but I can only achieve this if that user is currently logged in to Facebook. Obviously I would like this to function so that any items I upload are posted to all the members of the application at any one time. I am using the Facebook PHP SDK from http://github.com/facebook/php-sdk/ and currently my code is as follows:

        require 'src/facebook.php';

        // Generates an access token for this transaction
        $accessToken = file_get_contents("https://graph.facebook.com/oauth/access_token?type=client_cred&client_id=MyAppId&client_secret=MySecret");

        // Gets the full user details as an object
        $contents = json_decode(file_get_contents("https://graph.facebook.com/SomeUserId?scope=publish_stream&" . $accessToken));
        print_r($contents);

        if ($facebook->api('/' . $contents->id . '/feed', 'POST', array(
                'title'   => 'New and Improved, etc - 12/03/2010',
                'link'    => 'http://www.ib3.co.uk/news/2010/03/12/new-and-improved--etc',
                'picture' => 'http://www.ib3.co.uk/userfiles/image/etc-booking.jpg',
                'scope'   => 'publish_stream'
            )) == TRUE) {
            echo "message posted";
        } else {
            echo "message failed";
        }

    The output from $contents shows the expected user details, but nothing relating to the permissions for my application. Am I missing a trick here? Then, using the $facebook->api() function, I am receiving a "#200 - Permissions error. The application does not have permission to perform this action." This is driving me a little potty, as I suspect I'm missing something straightforward with the authorisation - but what? Many thanks in advance for any assistance offered.


  • How to have a where clause on an insert or an update in Linq to Sql?

    - by Kelsey
    I am trying to convert the following stored proc to a LINQ to SQL call (this is a simplified version of the SQL):

        INSERT INTO [MyTable] ([Name], [Value])
        SELECT @name, @value
        WHERE NOT EXISTS (SELECT [Value] FROM [MyTable] WHERE [Value] = @value)

    The DB does not have a constraint on the field that is getting checked for, so in this specific case the check needs to be made manually. Also, there are many items constantly being inserted, so I need to make sure that when this specific insert happens there is no dupe of the value field. My first hunch is to do the following:

        using (TransactionScope scope = new TransactionScope())
        {
            if (Context.MyTables.SingleOrDefault(t => t.Value == input.Value) == null)
            {
                MyLinqModels.MyTable t = new MyLinqModels.MyTable()
                {
                    Name = input.Name,
                    Value = input.Value
                };

                // Do some stuff in the transaction

                scope.Complete();
            }
        }

    This is the first time I have really run into this scenario, so I want to make sure I am going about it the right way. Does this seem correct, or can anyone suggest a better way of going about it without having two separate calls?

    Edit: I am running into a similar issue with an update:

        UPDATE [AnotherTable]
        SET [Code] = @code
        WHERE [ID] = @id AND [Code] IS NULL

    How would I do the same check with LINQ to SQL? I assume I need to do a get, then set the values and submit - but what if someone updates [Code] to something other than null between my get and when the update executes? Same problem as the insert...
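
    For what it's worth, the two-call shape is hard to avoid in LINQ to SQL, but a TransactionScope's default isolation level is Serializable, so the read holds locks that block a concurrent duplicate insert until the scope completes. A sketch of the insert case under those default options:

        using (var scope = new TransactionScope()) // default isolation: Serializable
        {
            if (!Context.MyTables.Any(t => t.Value == input.Value))
            {
                Context.MyTables.InsertOnSubmit(new MyLinqModels.MyTable
                {
                    Name = input.Name,
                    Value = input.Value
                });
                Context.SubmitChanges();
            }
            scope.Complete(); // complete even when skipping, so locks release cleanly
        }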


  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9MB on disk, 10MB of indexes according to pgAdmin.

    The problem is that inserting them, by whatever method, literally takes ages - up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts are in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, and disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. There was no WAL overhead in the 180s case - I was testing in psql directly; in Django, add ~50% overhead for WAL logging. I tried @commit_on_success - same slowness - and I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10MB worth of inserts generate 10x 16MB log segments?

    Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys to:

        - a small table, 198 rows, 16k on disk
        - a large table, 1.2M rows, 59MB data + 89MB index on disk
        - a large table, 2.2M rows, 198MB data + 210MB index on disk

    So, am I doomed to either drop the foreign keys manually, or use the table in a very un-Django way by defining and saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
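
    If the per-row FK validation really is the bottleneck, one workaround for a bulk hourly rebuild is to drop the constraints, load, and re-add them, so each FK is validated in one pass over the table instead of once per row. A sketch (table, constraint and file names are placeholders):

        BEGIN;
        ALTER TABLE new_table DROP CONSTRAINT new_table_ref_fk;
        -- ... drop the other two FK constraints ...

        TRUNCATE new_table;
        COPY new_table FROM '/tmp/data.csv' CSV;  -- bulk load, no per-row FK lookups

        -- Re-adding validates the whole table in a single scan per constraint
        ALTER TABLE new_table ADD CONSTRAINT new_table_ref_fk
            FOREIGN KEY (ref_id) REFERENCES ref_table (id);
        -- ... re-add the others ...
        COMMIT;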


  • What's the best way to learn .NET?

    - by duffymo
    I've been developing Java EE for quite a while now. I've used WebLogic, Tomcat, Spring, and Hibernate extensively, so I have a mental model of what features are available and how things are developed and deployed. The problem that I have with .NET is that I don't have a clear mapping of its features onto Java EE. Here's what I know so far:

        Java EE  -  .NET
        Java     -  C#
        JAR      -  DLL
        WAR      -  ? (deployment in general)
        EAR      -  ? (deployment in general)
        Tomcat   -  IIS web server
        JSP      -  ASP?
        JDBC     -  ODBC
        JMS      -  MSMQ
        JTA      -  Microsoft Transaction Manager

    So much of the functionality that WebLogic handles appears to be dispersed throughout the Windows OS. My confusion kicks in when I see the waves of books at Borders - VB.NET, ASP.NET, C#, etc. If I'm not a VB programmer, would it be possible to stick with C# and write enterprise apps that are the equivalent of what I'm used to with Java EE? If there were a Top Three list of books to learn from, what would they be? The "Head First" series has certainly been successful for Java: http://www.amazon.com/Head-First-C-Brain-Friendly-Guides/dp/0596514824/ref=sr_1_1?ie=UTF8&s=books&qid=1224121193&sr=8-1 Is it equally well recommended for .NET learning? Thanks.


  • Payment gateways and XSS

    - by Rowan Parker
    Hi all, I'm working on a website which takes payment from a customer. I'm using Kohana 2.3.4 and have created a library to handle the payment gateway I use (www.eway.com.au). Basically I'm just using their sample code, copied into its own class. Anyway, the code works fine and I can make payments, etc. The issue I have is when the payment gateway returns the user to my site. The payment gateway uses HTTPS, so that part is secure, and it sends the user back to an HTTPS page on my site. However, I have the NoScript plugin installed in Firefox, and when I get sent back to the page on my website (which also handles storing the transaction data), I get an error message saying that NoScript has blocked a potential XSS attack. Now, I understand why it's considered unsafe (POST data being sent across two different domains), but what should I be doing instead? Obviously during my testing here I temporarily disable NoScript and it all works fine, but I can't rely on that for the end users. What's the best practice here?


  • mysql and trigger usage question

    - by dhruvbird
    I have a situation in which I don't want inserts to take place (the transaction should roll back) if a certain condition is met. I could write this logic in the application code, but say for some reason it has to be written in MySQL itself (say clients written in different languages will be inserting into this MySQL InnoDB table) [that's a separate discussion].

    Table definition:

        CREATE TABLE table1 (x INT NOT NULL);

    The trigger looks something like this:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        BEGIN
            IF (condition) THEN
                SET NEW.x = NULL;
            END IF;
        END;

    I am guessing it could also be written as (untested):

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        BEGIN
            IF (condition) THEN
                ROLLBACK;
            END IF;
        END;

    But this doesn't work:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        ROLLBACK;

    You are guaranteed that:

        - Your DB will always be MySQL
        - The table type will always be InnoDB
        - That NOT NULL column will always stay the way it is

    Question: do you see anything objectionable in the 1st method?
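
    As a side note, setting NEW.x to NULL relies on the NOT NULL constraint to abort the insert, which works but yields a confusing error message. On MySQL 5.5 and later (an assumption worth checking against your server version), SIGNAL raises a proper error straight from the trigger; a sketch:

        DELIMITER //
        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        BEGIN
            IF (NEW.x < 0) THEN  -- stand-in for the real condition
                SIGNAL SQLSTATE '45000'
                    SET MESSAGE_TEXT = 'Insert rejected by trigger';
            END IF;
        END//
        DELIMITER ;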


  • NHibernate slow mapping

    - by Rob A
    My question is: what can I do to determine the cause of the slowness, or what can I do to speed it up without knowing the exact cause? I am running a simple query, and it appears that the mapping back to the entities is taking forever. The result set is 350 rows, which is not much data in my opinion.

        IRepository repo = ObjectFactory.GetInstance<IRepository>();
        var q = repo.Query<Order>(item => item.Ordereddate > DateTime.Now.AddDays(-40));

        foreach (var order in q)
        {
            Console.WriteLine(order.TransactionNumber);
        }

    The profiler is telling me it is executing the query in 7ms / 35257ms; I am assuming that the former is the actual response from the db and the latter is the time it takes NHibernate to do its magic. 35 seconds is too long. This is a simple mapping: one table, nested components, using the fluent interface to do the mappings. I just start up a simple console app and run the one query. The slowness is measured after the SessionFactory is initialized, there should only be one session, and I am not using a transaction. Thanks


  • Android - Temporal & Ancestral Navigation with ViewPager & Fragments

    - by rascuache
    I'm trying to work out the best way to implement 'ancestral' and 'temporal' navigation (utilising the Up and Back buttons) for a music player I'm working on. Currently, the user is presented with a ViewPager and can page between three main fragments (ArtistMain, AlbumMain and SongMain). Upon choosing an item inside that view, a fragment transaction occurs, and the ViewPager goes out of view, replaced by a new fragment (AlbumSub, SongSub or Player, depending on where the user came from). The user can then navigate deeper, until a song is chosen, and then they are taken to the 'player' screen.

    I guess the question is: how do I implement all of this conditional navigation? I'm fairly new to Android and programming in general, and I just can't seem to come up with an efficient way to achieve this. At the moment, as each fragment is brought into view, the app checks to see where the user just came from, and then determines where the user should be taken if Back or Home is called. This means I have booleans like "fromArtistMain" and "fromAlbumSub", and I'm checking for things like "fromSongSub && fromPlayer"... it's all turning into a bit of a mess.

    I've drawn a diagram (in Paint, sorry!!) to depict the navigation I'm trying to achieve. The yellow represents the 'up' button press, the red is the 'back' button press, and blue is just normal navigation. The green arrows are meant to represent the view paging. Any advice is welcome - it might be something really simple that I've just overlooked. Thanks for your time.
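
    One pattern that removes most of the "fromX" flags is to let the fragment back stack record the temporal history, and reserve explicit handling for the Up button only. A rough sketch (container ID and fragment names hypothetical):

        // Drill-down: push each step onto the back stack, so the Back button
        // unwinds the user's actual path with no manual bookkeeping.
        FragmentTransaction ft = getSupportFragmentManager().beginTransaction();
        ft.replace(R.id.content_frame, AlbumSubFragment.newInstance(albumId));
        ft.addToBackStack(null);
        ft.commit();

        // Up: jump to the structural parent, clearing the temporal history.
        @Override
        public boolean onOptionsItemSelected(MenuItem item) {
            if (item.getItemId() == android.R.id.home) {
                getSupportFragmentManager().popBackStack(null,
                        FragmentManager.POP_BACK_STACK_INCLUSIVE);
                return true;
            }
            return super.onOptionsItemSelected(item);
        }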


  • Detecting branch reintegration or merge in pre-commit script

    - by Shawn Chin
    Within a pre-commit script, is it possible (and if so, how) to identify commits stemming from an svn merge? "svnlook changed ..." shows files that have changed, but does not differentiate between merges and manual edits. Ideally, I would also like to differentiate between a standard merge and a merge --reintegrate.

    Background: I'm exploring the possibility of using pre-commit hooks to enforce SVN usage policies for our project. One of the policies states that some directories (such as /trunk) should not be modified directly, and changed only through the reintegration of feature branches. The pre-commit script would therefore reject all changes made to these directories apart from branch reintegrations. Any ideas?

    Update: I've explored the svnlook command, and the closest I've got is to detect and parse changes to the svn:mergeinfo property of the directory. This approach has some drawbacks:

        - svnlook can flag up a change in properties, but not which property was changed (a diff against the proplist of the previous revision is required).
        - By inspecting changes in svn:mergeinfo, it is possible to detect that svn merge was run. However, there is no way to determine whether the commit is purely a result of the merge; changes made manually after the merge will go undetected.

    (related post: Diff transaction tree against another path/revision)
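
    For the property-diff part, comparing the svn:mergeinfo of the incoming transaction against HEAD can at least confirm that a merge was recorded (though, as the update notes, it cannot prove the commit contains only merge-generated changes). A rough sketch of the hook fragment, with the target directory as a placeholder:

        #!/bin/sh
        REPOS="$1"
        TXN="$2"
        TARGET="trunk"

        # mergeinfo recorded in the incoming transaction vs. current HEAD
        NEW_MI=$(svnlook propget -t "$TXN" "$REPOS" svn:mergeinfo "$TARGET" 2>/dev/null)
        OLD_MI=$(svnlook propget "$REPOS" svn:mergeinfo "$TARGET" 2>/dev/null)

        if [ "$NEW_MI" = "$OLD_MI" ]; then
            echo "Direct commits to /$TARGET are not allowed; merge a branch instead." >&2
            exit 1
        fi
        exit 0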


  • How to get resultset with stored procedure calls over two linked servers?

    - by räph
    I have problems filling a temporary table with the result set from a procedure call on a linked server, in which, in turn, a procedure on another server is called.

    I have a stored procedure sproc1 with the following code, which calls another procedure sproc2 on a linked server:

        SET @sqlCommand = 'INSERT INTO #tblTemp ( ModuleID, ParamID ) ' +
                          '( SELECT * FROM OPENQUERY(' + @targetServer + ', ' +
                          '''SET FMTONLY OFF; EXEC ' + @targetDB + '.usr.sproc2 ' + @param + ''' ) )'
        EXEC ( @sqlCommand )

    Now in the called sproc2 I again call a third procedure sproc3 on another linked server, which returns my result set:

        SET @sqlCommand = 'EXEC ' + @targetServer + '.database.usr.sproc3 ' + @param
        EXEC ( @sqlCommand )

    The whole thing doesn't work, as I get SQL error 7391: "The operation could not be performed because OLE DB provider "%ls" for linked server "%ls" was unable to begin a distributed transaction."

    I already checked the hints in this Microsoft article, but without success. Maybe I can change the code in sproc1, though - would there be some alternative to the temp table and OPENQUERY? Just calling stored procedures from server A to B to C and returning the result set works (I do this often in the application), but this special case with the temp table and OPENQUERY doesn't. Or is what I am trying to do just not possible? The Microsoft article states:

        Check the object you refer on the destination server. If it is a view or a stored procedure, or causes an execution of a trigger, check whether it implicitly references another server. If so, the third server is the source of the problem. Run the query directly on the third server. If you cannot run the query directly on the third server, the problem is not actually with the linked server query. Resolve the underlying problem first.

    Is this my case?

    PS: I can't avoid the architecture with the three servers.
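
    Error 7391 typically means the INSERT ... OPENQUERY started a local transaction, and the nested hop to the third server then tried to promote it into an MSDTC distributed transaction that one of the providers couldn't start. If the servers are on SQL Server 2008 or later (an assumption - this option doesn't exist earlier), one workaround is to stop remote procedure calls over the second hop from being promoted; a sketch:

        -- Run on the middle server, for its linked-server entry pointing at the
        -- third server: remote procedure calls over that link will no longer be
        -- promoted into a distributed (MSDTC) transaction.
        EXEC sp_serveroption
            @server   = 'ThirdServer',   -- placeholder linked server name
            @optname  = 'remote proc transaction promotion',
            @optvalue = 'false';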


  • Google Analytics Script inside XML File

    - by mnml
    Hi, I would like to know if I can add my ecommerce analytics tags inside an XML-formatted file:

        <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
        <script type="text/javascript">
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-XXXXX-X']);
            _gaq.push(['_trackPageview']);
            _gaq.push(['_addTrans',
                '1234',          // order ID - required
                'Acme Clothing', // affiliation or store name
                '11.99',         // total - required
                '1.29',          // tax
                '5',             // shipping
                'San Jose',      // city
                'California',    // state or province
                'USA'            // country
            ]);

            // _addItem might be called for every item in the shopping cart,
            // where your ecommerce engine loops through each item in the cart
            // and prints out _addItem for each
            _gaq.push(['_addItem',
                '1234',         // order ID - required
                'DD44',         // SKU/code
                'T-Shirt',      // product name
                'Green Medium', // category or variation
                '11.99',        // unit price - required
                '1'             // quantity - required
            ]);

            _gaq.push(['_trackTrans']); // submits transaction to the Analytics servers

            (function() {
                var ga = document.createElement('script');
                ga.type = 'text/javascript';
                ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga);
            })();
        </script>
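
    One XML-specific wrinkle: characters like & and < inside the script will break XML parsing, so if the file really must be well-formed XML, wrapping the script body in a CDATA section is the usual escape hatch. A sketch:

        <script type="text/javascript">
        //<![CDATA[
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-XXXXX-X']);
            // ... rest of the tracking code unchanged ...
        //]]>
        </script>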


  • Hibernate noob fetch join problem

    - by Bruce
    Hi all, I have two classes, Test2 and Test3. Test2 has an attribute test3 that is an instance of Test3. In other words, I have a unidirectional OneToOne association, with Test2 having a reference to Test3.

    When I select Test2 from the db, I can see that a separate select is being made to get the details of the associated Test3 instance. This is the famous 1+N selects problem. To fix this to use a single select, I am trying to use the fetch=join annotation, which I understand to be @Fetch(FetchMode.JOIN). However, with fetch set to join, I still see separate selects. Here are the relevant portions of my setup.

    hibernate.cfg.xml:

        <property name="max_fetch_depth">2</property>

    Test2:

        public class Test2 {
            @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
            @JoinColumn(name = "test3_id")
            @Fetch(FetchMode.JOIN)
            public Test3 getTest3() {
                return test3;
            }

    NB: I set the FetchType to EAGER out of desperation, even though it defaults to EAGER anyway for OneToOne mappings, but it made no difference. Thanks for any help!

    Edit: I've pretty much given up on trying to use FetchMode.JOIN - can anyone confirm that they have got it to work, i.e. produce a left outer join? In the docs I see that "Usually, the mapping document is not used to customize fetching. Instead, we keep the default behavior, and override it for a particular transaction, using left join fetch in HQL". If I do a left join fetch instead:

        query = session.createQuery("from Test2 t2 left join fetch t2.test3");

    then I do indeed get the results I want - i.e. a left outer join in the query.


  • Correct way to generate order numbers in SQL Server

    - by Anton Gogolev
    This question certainly applies to a much broader scope, but here it is. I have a basic ecommerce app where users can, naturally enough, place orders. Said orders need to have a unique number, which I'm trying to generate right now.

    Each order is vendor-specific. Basically, I have an OrderNumberInfo (VendorID, OrderNumber) table. Now whenever a customer places an order, I need to increment OrderNumber for a particular vendor and return that value. Naturally, I don't want other processes to interfere with me, so I need to exclusively lock this row somehow:

        BEGIN TRANSACTION

        DECLARE @n int

        SELECT @n = OrderNumber
        FROM OrderNumberInfo
        WHERE VendorID = @vendorID

        UPDATE OrderNumberInfo
        SET OrderNumber = @n + 1
        WHERE OrderNumber = @n AND VendorID = @vendorID

        COMMIT TRANSACTION

    Now, I've read about SELECT ... WITH (UPDLOCK, ROWLOCK), pessimistic locking, etc., but just cannot fit all this into a coherent picture:

        - How do these hints play with SQL Server 2008's snapshot isolation?
        - Do they perform row-level, page-level or even table-level locks?
        - How does this tolerate multiple users trying to generate numbers for a single vendor?
        - What isolation levels are appropriate here?
        - And generally - what is the way to do such things?
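
    One pattern that sidesteps the hint questions entirely is to do the read and the increment in a single atomic statement; on SQL Server 2005 and later, the OUTPUT clause can return the new number from the UPDATE itself, so no explicit lock hints or surrounding transaction are needed. A sketch:

        -- Increment and read back in one atomic statement;
        -- the row is exclusively locked only for the duration of the UPDATE.
        UPDATE OrderNumberInfo
        SET OrderNumber = OrderNumber + 1
        OUTPUT inserted.OrderNumber
        WHERE VendorID = @vendorID;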


  • SQL Server: preventing dirty reads in a stored procedure

    - by pcampbell
    Consider a SQL Server database and its two stored procs:

    1. A proc that performs 3 important things in a transaction: create a customer, call a sproc to perform another insert, and conditionally insert a third record with the new identity.

        BEGIN TRAN

        INSERT INTO Customer (CustName) VALUES (@CustomerName)

        SELECT @NewID = SCOPE_IDENTITY()

        EXEC CreateNewCustomerAccount @NewID, @CustomerPhoneNumber

        IF @InvoiceTotal > 100000
            INSERT INTO PreferredCust (InvoiceTotal, CustID)
            VALUES (@InvoiceTotal, @NewID)

        COMMIT TRAN

    2. A stored proc which polls the Customer table for new entries that don't have a related PreferredCust entry. The client app performs the polling by calling this stored proc every 500ms.

    A problem has arisen where the polling stored procedure found an entry in the Customer table and returned it as part of its results. The problem was that it picked up that record, I am assuming, as part of a dirty read. The record ended up having an entry in PreferredCust later, which created a problem downstream.

    Question: How can you explicitly prevent dirty reads by that second stored proc? The environment is SQL Server 2005 with the default configuration out of the box. No other locking hints are given in either of these stored procedures.
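
    For what it's worth, a genuine dirty read shouldn't happen under the out-of-the-box READ COMMITTED level - a reader blocks on the uncommitted insert - so it's worth checking for NOLOCK hints or session-level READ UNCOMMITTED sneaking in via connection settings. Making the polling proc explicit is a cheap guard either way; a sketch (proc and column names hypothetical):

        CREATE PROCEDURE GetUnprocessedCustomers
        AS
        BEGIN
            -- Refuse dirty reads regardless of the caller's session settings
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

            SELECT c.CustID, c.CustName
            FROM Customer c
            WHERE NOT EXISTS (SELECT 1 FROM PreferredCust p WHERE p.CustID = c.CustID);
        END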


  • Modelling deterministic and nondeterministic data separately

    - by Superstringcheese
    I'm working with the Microsoft ADO.NET Entity Framework for a game project. Following the advice of other posters on SO, I'm considering modelling deterministic and nondeterministic data separately. The idea for this came from a discussion on multiplayer games, but it seemed to make sense in a single-player scenario as well.

    Deterministic (things that aren't going to change during gameplay):

        - Attributes (Strength, Agility, etc.) and their descriptions
        - Skills and their descriptions and requirements
        - Races, Factions, Equipment, etc.
        - Base Attribute/Skill/Equipment loadouts for monsters

    Nondeterministic (things that will change a lot during gameplay):

        - Beings' current AttributeModifiers (Potion of Might = +10 Strength), current health and mana, etc.
        - Player inventory, cash, experience, level
        - Player quest states
        - Player FactionRelationships
        - ...and so on.

    My deterministic model would serve as a set of constants. My nondeterministic model would provide my on-the-fly operable data and would be serialized to a savegame file to maintain game state between play sessions. The data store will be an embedded SQL Compact database. So I might want to create relations between my Attributes table (deterministic model) and my BeingAttributeModifiers table (nondeterministic model), but how do I set that up across models?

        Det model/db          Nondet model/db
         ____________          ________________________
        |Attributes  |        |PlayerAttributeModifiers|
        |------------|        |------------------------|
        |Id          |        |Id                      |
        |Name        |        |AttributeId             |
        |Description |        |SourceId                |
         ------------         |Value                   |
                               ------------------------

    Should I use two separate models (edmx) that transact with a single database containing both deterministic-type and nondeterministic-type tables? Or should/can I use two separate databases in one model? Or two models, each with their own database? With distinct models/dbs it seems like this will get really complicated, and I'll end up fighting EF a lot, rolling my own transaction code, and generally losing out on a lot of the advantages of the framework. I know these are vague questions; I'm just looking for a sanity check before I forge ahead any further.


  • Reading/Writing DataTables to and from an OleDb Database LINQ

    - by jsmith
    My current project is to take information from an OleDb database and .CSV files and place it all into a larger OleDb database. I have currently read in all the information I need from both the .CSV files and the OleDb database into DataTables. Where it is getting hairy is writing all of the information back to another OleDb database. Right now my current method is to do something like this:

        OleDbTransaction myTransaction = null;

        try
        {
            OleDbConnection conn = new OleDbConnection("PROVIDER=Microsoft.Jet.OLEDB.4.0;" +
                                                       "Data Source=" + Database);
            conn.Open();
            OleDbCommand command = conn.CreateCommand();
            string strSQL;

            command.Transaction = myTransaction;

            strSQL = "Insert into TABLE " +
                     "(FirstName, LastName) values ('" +
                     FirstName + "', '" + LastName + "')";

            command.CommandType = CommandType.Text;
            command.CommandText = strSQL;

            command.ExecuteNonQuery();

            conn.Close();
        }
        catch (Exception)
        {
            // If invalid data is entered, roll back the database
            myTransaction.Rollback();
        }

    Of course, this is very basic and I'm using an SQL command to commit my transactions to a connection. My problem is that I could do this, but I have about 200 fields that need inserting over several tables. I'm willing to do the legwork if that's the only way to go, but I feel like there is an easier method. Is there anything in LINQ that could help me out with this?
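
    LINQ itself has no OleDb writer, but since the data is already in DataTables, an OleDbDataAdapter plus an OleDbCommandBuilder can generate the per-table INSERT command and push every row in one Update() call - no hand-written SQL per field. A sketch, assuming the source column names match the destination table (connection string and table name are placeholders):

        using System.Data;
        using System.Data.OleDb;

        static void WriteTable(DataTable table, string connString, string tableName)
        {
            using (var conn = new OleDbConnection(connString))
            {
                conn.Open();
                // The SELECT defines the schema; the builder derives the INSERT from it.
                var adapter = new OleDbDataAdapter("SELECT * FROM [" + tableName + "]", conn);
                var builder = new OleDbCommandBuilder(adapter);
                adapter.InsertCommand = builder.GetInsertCommand();

                // Rows loaded via Fill() are "Unchanged"; re-flag them as new
                // so Update() runs the parameterized INSERT for each of them.
                foreach (DataRow row in table.Rows)
                {
                    if (row.RowState == DataRowState.Unchanged)
                        row.SetAdded();
                }

                adapter.Update(table); // inserts every added row
            }
        }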


  • At which line in the following code should I commit my unit of work?

    - by Pure.Krome
    I have the following code, which is in a transaction. I'm not sure where/when I should be committing my unit of work. On purpose, I've not mentioned what type of repository I'm using - e.g. LINQ to SQL, Entity Framework 4, NHibernate, etc. If someone knows where, can they please explain WHY they have said where? (I'm trying to understand the pattern through example(s), as opposed to just getting my code to work.) Here's what I've got:

        using (TransactionScope transactionScope = new TransactionScope(
            TransactionScopeOption.RequiresNew,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
        {
            _logEntryRepository.InsertOrUpdate(logEntry);
            //_unitOfWork.Commit(); // Here, commit #1 ?

            // Now, if this log entry was a NewConnection or a LostConnection,
            // then we need to make sure we update the ConnectedClients.
            if (logEntry.EventType == EventType.NewConnection)
            {
                _connectedClientRepository.Insert(
                    new ConnectedClient { LogEntryId = logEntry.LogEntryId });
                //_unitOfWork.Commit(); // Here, commit #2 ?
            }

            // A (PB) BanKick does _NOT_ register a lost connection,
            // so we need to make sure we handle those scenarios as a LostConnection.
            if (logEntry.EventType == EventType.LostConnection ||
                logEntry.EventType == EventType.BanKick)
            {
                _connectedClientRepository.Delete(
                    logEntry.ClientName, logEntry.ClientIpAndPort);
                //_unitOfWork.Commit(); // Here, commit #3 ?
            }

            _unitOfWork.Commit(); // Here, commit #4 ?
            transactionScope.Complete();
        }


  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (SQLite DB) via JavaScript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html On first load, the app creates the database and tables, then populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, my app database is left in an inconsistent state. What I'd prefer to do is deploy the SQLite DB as part of my iTunes App packaging, so nothing must be populated at app cold start. However, I'm not sure if that is possible -- all of the Google hits for this topic that I can find are referring to the Core Data-provided SQLite, which is not what we're using... If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!
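
    On the transaction idea: the JavaScript database API already runs each transaction() call atomically, so putting all the seed INSERTs inside a single transaction means an interrupted first run leaves nothing behind and can simply be retried on the next launch. A minimal sketch (database name, size, and schema are placeholders):

        var db = openDatabase('appdb', '1.0', 'App data', 5 * 1024 * 1024);

        db.transaction(function (tx) {
            tx.executeSql('CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)');
            tx.executeSql('INSERT INTO items (name) VALUES (?)', ['first']);
            // ... the rest of the seed INSERTs ...
        }, function (err) {
            // Whole transaction rolled back; safe to retry on next startup.
            console.log('Seeding failed: ' + err.message);
        }, function () {
            console.log('Seeding committed atomically.');
        });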


  • PHP Commercial Project Function define

    - by Shiro
    Currently I am working on a commercial project with PHP. I think this question doesn't really apply only to PHP but to all programming languages; I just want to discuss how you guys solve it.

    I work in an MVC framework (CodeIgniter), with all the database transaction code in model classes. Previously, I separated different search criteria with different function names. Just an example:

        function get_student_detail_by_ID($id) {}
        function get_student_detail_by_name($name) {}

    As you can see, the functions could actually be merged into one; just add a parameter for it. But sometimes, when you are rushing on a project, you won't look back to check whether a similar function already exists that could meet the goal with some changes. In this case, we found that there were a lot of functions there, and they were hard to maintain.

    Recently, we tried to group each entity into one ultimate search, something like this:

        function get_ResList($is_row_count = FALSE, $record_start = 0, $arr_search_criteria = '', $paging_limit = 20, $orderby = 'name', $sortdir = 'ASC')

    We tried to make this function fit all the search criteria. However, our system is getting bigger and bigger, and the search criteria no longer involve just 1-2 tables; they require joins with other tables for different purposes. What we have done is use IF/ELSE:

        if (bla bla bla) {
            $sql_join = JOIN_SOME_TABLE;
            $sql_where = CONDITION;
        }

    In the end, we found the function very hard to maintain, and it is very hard to debug as well. I would like to ask your opinion: what is the commercial solution for this kind of issue - how should such a function be defined and revised? I think this is linked to project management skill. Hope you're willing to share with us. Thanks.
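
    One middle ground between many tiny finders and one giant IF/ELSE function is a declarative criteria map: each supported criterion declares the join and where fragment it needs, and the builder assembles only what the caller asked for. A rough sketch (table and key names hypothetical):

        // Each criterion declares what it needs; adding one is a data change,
        // not a new branch in the builder.
        private $criteria_map = array(
            'student_name' => array(
                'join'  => '',
                'where' => 'students.name = ?',
            ),
            'course_code' => array(
                'join'  => 'JOIN enrolments ON enrolments.student_id = students.id',
                'where' => 'enrolments.course_code = ?',
            ),
        );

        function build_search($arr_search_criteria)
        {
            $joins = array();
            $wheres = array();
            $params = array();

            foreach ($arr_search_criteria as $key => $value) {
                if (!isset($this->criteria_map[$key])) continue;
                $spec = $this->criteria_map[$key];
                if ($spec['join'] !== '') $joins[$spec['join']] = true; // dedupe joins
                $wheres[] = $spec['where'];
                $params[] = $value;
            }

            $sql = 'SELECT students.* FROM students '
                 . implode(' ', array_keys($joins))
                 . ($wheres ? ' WHERE ' . implode(' AND ', $wheres) : '');
            return array($sql, $params);
        }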

