Search Results

Search found 1137 results on 46 pages for 'optimistic locking'.


  • PASS Summit 2011 – Part III

    - by Tara Kizer
    Well we’re about a month past PASS Summit 2011, and yet I haven’t finished blogging my notes! Between work and home life, I haven’t been able to come up for air in a bit.  Now on to my notes… On Thursday of the PASS Summit 2011, I attended Klaus Aschenbrenner’s (blog|twitter) “Advanced SQL Server 2008 Troubleshooting”, Joe Webb’s (blog|twitter) “SQL Server Locking & Blocking Made Simple”, Kalen Delaney’s (blog|twitter) “What Happened? Exploring the Plan Cache”, and Paul Randal’s (blog|twitter) “More DBA Mythbusters”.  I think my head grew two times in size from the Thursday sessions.  Just WOW!

    I took a ton of notes in Klaus' session.  He took a deep dive into how to troubleshoot performance problems.  Here is how he goes about solving a performance problem: start by checking the wait stats DMV, then look at system health, memory issues, and I/O issues.  I normally start with blocking and then hit the wait stats.  Here’s the wait stat query (Paul Randal’s) that I use when working on a performance problem.  He highlighted a few waits to be aware of, such as WRITELOG (indicates an I/O subsystem problem), SOS_SCHEDULER_YIELD (indicates a CPU problem), and PAGEIOLATCH_XX (indicates an I/O subsystem problem or a buffer pool problem).  Regarding memory issues, Klaus recommended that as a bare minimum, one should set "max server memory (MB)" in sp_configure so that 2GB or 10% of memory (whichever comes first) is reserved for the OS.  This is just a starting point though!  Regarding I/O issues, Klaus talked about disk partition alignment, which can improve SQL I/O performance by up to 100%.  You should use a 64KB NTFS cluster size, and alignment is automatic in Windows Server 2008 R2.

    Joe’s locking and blocking presentation was a good session to really clear up the fog in my mind about locking.  One takeaway that I had no idea could be done was that you can set a timeout in T-SQL code via SET LOCK_TIMEOUT.  If you do this via the application, you should trap error 1222.

    Kalen’s session went into execution plans.  The minimum size of a plan is 24KB.  This adds up fast, especially if you have a lot of plans that don’t get reused much.  You can use sys.dm_exec_cached_plans to check how often a plan is being reused by checking the usecounts column.  She said that we can use DBCC FLUSHPROCINDB to clear out the stored procedure cache for a specific database.  I didn’t know we had this available, so this was great to hear.  It will be less intrusive than DBCC FREEPROCCACHE, which is what I’ve needed to run when an emergency comes up.  Kalen said one should enable “optimize for ad hoc workloads” if you have an ad hoc workload.  This stores only a 300-byte stub of the first plan, and if it gets run again, it’ll store the whole thing.  This helps with plan cache bloat.  I have a lot of systems that use prepared statements, and Kalen says we can simulate those calls by using sp_executesql.  Cool!

    Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server.  He continues to debunk things via “DBA Mythbusters”.  You can get a PDF of a bunch of these here.  One of the myths he went over is the number of tempdb data files that you should have.  Back in SQL Server 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server.  This no longer holds true due to the numerous cores we have on our servers.  Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there.  BUT!
    Paul likes what Bob Ward (twitter) says on this topic: for 8 or fewer cores, set the number of files equal to the number of cores; for more than 8 cores, start with 8 files and increase in blocks of 4.  One common myth out there is to set your MAXDOP to 1 for an OLTP workload with high CXPACKET waits.  Instead of that, dig deeper first.  Look for missing indexes and out-of-date statistics, increase the “cost threshold for parallelism” setting, and perhaps set MAXDOP at the query level.  Paul stressed that you should not plan a backup strategy but instead plan a restore strategy.  What are your recoverability requirements?  Once you know that, you can plan out your backups.

    As Paul always does, he talked about DBCC CHECKDB.  He said how fabulous it is.  I didn’t want to interrupt the presentation, so after his session had ended, I asked Paul about the need to run DBCC CHECKDB on your mirror systems.  You could have data corruption occur at the mirror and not at the principal server.  If you aren’t checking for data corruption on your mirror systems, you could be failing over to a corrupt database in the case of a disaster or even a planned failover.  You can’t run DBCC CHECKDB against the mirrored database, but you can run it against a snapshot created off the mirrored database.

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.

    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At this point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program.

    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!·n!) possible "interleavings" of those instructions, a number that grows exponentially with n. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it also results in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.

    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.
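
    To make the actor idea above concrete, here is a minimal sketch of message-passing concurrency in C++ (my own illustration, not from the editorial): two threads share nothing except a small mutex-protected mailbox, so the only interleavings that matter are the order in which whole messages arrive. The Mailbox class, the doubling "work" and the poison-pill shutdown value are assumptions made purely for the example.

        #include <condition_variable>
        #include <iostream>
        #include <mutex>
        #include <queue>
        #include <thread>

        // The only shared state: a queue whose locking is hidden inside it.
        template <typename T>
        class Mailbox {
        public:
            void send(T msg) {
                { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
                cv_.notify_one();
            }
            T receive() {                        // blocks until a message arrives
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return !q_.empty(); });
                T msg = std::move(q_.front());
                q_.pop();
                return msg;
            }
        private:
            std::mutex m_;
            std::condition_variable cv_;
            std::queue<T> q_;
        };

        int main() {
            Mailbox<int> requests, replies;

            // The "actor": owns its own state and never touches anyone else's memory.
            std::thread worker([&] {
                for (;;) {
                    int job = requests.receive();
                    if (job < 0) break;          // poison pill: shut the actor down
                    replies.send(job * 2);       // do some "work" and answer by message
                }
            });

            for (int i = 1; i <= 3; ++i) requests.send(i);
            for (int i = 1; i <= 3; ++i) std::cout << replies.receive() << '\n';
            requests.send(-1);
            worker.join();
        }

    As the editorial notes, the cost of this style shows up when the "message" would have to carry a large piece of data that several actors need to inspect.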

    Read the article

  • Persisting complex data between postbacks in ASP.NET MVC

    - by Robert Wagner
    I'm developing an ASP.NET MVC 2 application that connects to some services to do data retrieval and update. The services require that I provide the original entity along with the updated entity when updating data. This is so it can do change tracking and optimistic concurrency. The services cannot be changed. My problem is that I need to somehow store the original entity between postbacks. In WebForms, I would have used ViewState, but from what I have read, that is out for MVC. The original values do not have to be tamper-proof as the services treat them as untrusted. The entities would be (max) 1k and it is an intranet app. The options I have come up with are:

    1. Session - Ruled out - Store the entity in the Session, but I don't like this idea as there are no plans to share session between servers.
    2. URL - Ruled out - Data is too big.
    3. HiddenField - Store the serialized entity in a hidden field, perhaps with encryption/encoding.
    4. HiddenVersion - The entities have a (SQL) version field on them, which I could put into a hidden field. Then on a save I get the "original" entity from the services and compare the versions, doing my own optimistic concurrency.
    5. Cookies - Like 3 or 4, but using a cookie instead of a hidden field.

    I'm leaning towards option 4, although 3 would be simpler. Are these valid options or am I going down the wrong track? Is there a better way of doing this?

    Read the article

  • Using ember-resource with couchdb - how can i save my documents?

    - by Thomas Herrmann
    I am implementing an application using ember.js and CouchDB. I chose ember-resource as the database access layer because it nicely supports nested JSON documents. Since CouchDB uses the attribute _rev for optimistic locking in every document, this attribute has to be updated in my application after saving the data to CouchDB. My idea for implementing this is to reload the data right after saving to the database and get the new _rev back with the rest of the document. Here is my code for this:

        // Since we use CouchDB, we have to make sure that we invalidate and re-fetch
        // every document right after saving it. CouchDB uses an optimistic locking
        // scheme based on the attribute "_rev" in the documents, so we reload it in
        // order to have the correct _rev value.
        didSave: function() {
          this._super.apply(this, arguments);
          this.forceReload();
        },

        // reload resource after save is done, expire to make reload really do something
        forceReload: function() {
          this.expire(); // Everything OK up to this location
          Ember.run.next(this, function() {
            this.fetch() // Sub-Document is reset here, and *not* refetched!
              .fail(function(error) {
                App.displayError(error);
              })
              .done(function() {
                App.log("App.Resource.forceReload fetch done, got revision " + self.get('_rev'));
              });
          });
        }

    This works for most cases, but if I have a nested model, the sub-model is replaced with the old version of the data just before the fetch is executed! Interestingly enough, the correct (updated) data is stored in the database and the wrong (old) data is in the memory model after the fetch, although the _rev attribute is correct (as well as all attributes of the main object). Here is a part of my object definition:

        App.TaskDefinition = App.Resource.define({
          url: App.dbPrefix + 'courseware',
          schema: {
            id: String,
            _rev: String,
            type: String,
            name: String,
            comment: String,
            task: { type: 'App.Task', nested: true }
          }
        });

        App.Task = App.Resource.define({
          schema: {
            id: String,
            title: String,
            description: String,
            startImmediate: Boolean,
            holdOnComment: Boolean,
            ..... // other attributes and sub-objects
          }
        });

    Any ideas where the problem might be? Thanks a lot for any suggestions! Kind regards, Thomas

    Read the article

  • TcpListener.BeginAcceptSocket - async question

    - by Mirek
    Hi, some time ago I paid a programmer to write a multithreaded server. In the meantime I have learned a bit of C# and now I think I can see the slowdown problem - I was told by that guy that nothing is processed on the main thread (the Form), so it cannot freeze... but it does. I think that although BeginAcceptSocket is an async operation, its callback runs on the main thread, and if there is locking in it, that's the reason why the app freezes. Am I right? Thanks

        this.mTcpListener.BeginAcceptSocket(this.AcceptClient, null);

        protected void AcceptClient(IAsyncResult ar)
        {
            // some locking stuff
        }

    Read the article

  • iPhone Orientation Relayout From Single Column to Double Column

    - by kkrizka
    I am trying to create a UIView in Interface Builder that shows the user two boxes containing some text. This UIView should support both landscape and portrait modes. When in portrait orientation, the two boxes should be centered horizontally and sit one under the other. When in landscape orientation, the two boxes should be centered vertically and sit side by side. Is this possible using only the autosizing options (or any other IB options), or do I have to relayout the view in code on orientation change events? I would prefer using only IB. I tried locking the top and left margins of the top box and locking the bottom and right margins of the bottom box. But the problem is that for it to work I also need to shrink the two boxes as the device changes from portrait to landscape, because otherwise they would overlap.

    Read the article

  • Is there any benefit to encrypting twice using pgp?

    - by ojblass
    I am asking from a "more secure" perspective. I can imagine scenarios where requiring two private keys for decryption may make this an attractive model. This is to settle an argument. My vote is that it does not add any additional security other than requiring the compromise of two different private keys. I think that if it were any more secure, then encrypting one million times would be the best way to secure information, and I don't buy that. So I guess my question becomes: is a mechanism with two locks equivalent to a mechanism with one lock and a single key? Update: Forgive me if the answer is obvious, but my brain goes dead as I read books on the topic.

    Read the article

  • What's the performance penalty of weak_ptr?

    - by Kornel Kisielewicz
    I'm currently designing an object structure for a game, and the most natural organization in my case became a tree. Being a great fan of smart pointers, I use shared_ptrs exclusively. However, in this case, the children in the tree will need access to their parent (for example, beings on a map need to be able to access the map data -- ergo the data of their parent). The direction of ownership is of course that a map owns its beings, so it holds shared pointers to them. To access the map data from within a being, we however need a pointer to the parent -- the smart-pointer way is to use a non-owning reference, ergo a weak_ptr. However, I once read that locking a weak_ptr is an expensive operation -- maybe that's not true anymore -- but considering that the weak_ptr will be locked very often, I'm concerned that this design is doomed to poor performance. Hence the question: What is the performance penalty of locking a weak_ptr? How significant is it?
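
    For what it's worth, here is a minimal sketch of the pattern being described (the Map and Being names are my own placeholders, not from the question): the child keeps a weak_ptr back to its parent and promotes it with lock(), which atomically inspects the control block and bumps the strong reference count. One common way to keep that cost from dominating is to lock() once per operation and reuse the resulting shared_ptr for the duration of the work, rather than locking on every single access.

        #include <iostream>
        #include <memory>
        #include <vector>

        struct Being;

        struct Map {
            int width = 80, height = 25;                  // the "map data" children need
            std::vector<std::shared_ptr<Being>> beings;   // owning direction: map -> beings
        };

        struct Being {
            std::weak_ptr<Map> parent;                    // non-owning back-reference

            void act() {
                // One lock() per operation: pay the atomic ref-count bump once,
                // then use the shared_ptr for all accesses inside this call.
                if (std::shared_ptr<Map> map = parent.lock()) {
                    std::cout << "map is " << map->width << "x" << map->height << '\n';
                } else {
                    std::cout << "parent map is gone\n";  // parent already destroyed
                }
            }
        };

        int main() {
            auto map  = std::make_shared<Map>();
            auto hero = std::make_shared<Being>();
            hero->parent = map;
            map->beings.push_back(hero);
            hero->act();
        }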

    Read the article

  • how to "lock" live site when doing (phing) deployment

    - by Jorre
    On http://www.slideshare.net/eljefe/automated-deployment-with-phing, slide 15 talks about "locking the live site" when doing deployment. We are running multiple webshops in a SaaS application where it is possible that users are adding products, buying products, paying for products online, and so on... When deploying we want to do this as cleanly as possible, so that no payments, orders, or other critical data will be lost. We have a deployment scenario set up using phing (amazing tool!) but we are missing one crucial step, namely the "locking of the live site" while deploying. What is a possible way to lock a live site and bring it back online after deploying?

    Read the article

  • Why is boost::recursive_mutex not working as expected?

    - by Kjir
    I have a custom class that uses boost mutexes and locks like this (only relevant parts):

        template<class T>
        class FFTBuf
        {
        public:
            FFTBuf();
            [...]
            void lock();
            void unlock();

        private:
            T *_dst;
            int _siglen;
            int _processed_sums;
            int _expected_sums;
            int _assigned_sources;
            bool _written;
            boost::recursive_mutex _mut;
            boost::unique_lock<boost::recursive_mutex> _lock;
        };

        template<class T>
        FFTBuf<T>::FFTBuf() : _dst(NULL), _siglen(0), _expected_sums(1),
            _processed_sums(0), _assigned_sources(0), _written(false),
            _lock(_mut, boost::defer_lock_t())
        {
        }

        template<class T>
        void FFTBuf<T>::lock()
        {
            std::cerr << "Locking" << std::endl;
            _lock.lock();
            std::cerr << "Locked" << std::endl;
        }

        template<class T>
        void FFTBuf<T>::unlock()
        {
            std::cerr << "Unlocking" << std::endl;
            _lock.unlock();
        }

    If I try to lock the object more than once from the same thread, I get an exception (lock_error):

        #include "fft_buf.hpp"

        int main( void )
        {
            FFTBuf<int> b( 256 );
            b.lock();
            b.lock();
            b.unlock();
            b.unlock();
            return 0;
        }

    This is the output:

        sb@dex $ ./src/test
        Locking
        Locked
        Locking
        terminate called after throwing an instance of 'boost::lock_error'
          what():  boost::lock_error
        zsh: abort      ./src/test

    Why is this happening? Am I understanding some concept incorrectly?
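
    To illustrate what is going on, here is a small sketch of my own using the standard-library equivalents, on the assumption that Boost behaves the same way: the recursive mutex itself really can be acquired several times by one thread, but calling lock() on a unique_lock that already owns its mutex is an error regardless of the mutex type - that is what the lock_error above reflects. The standard version throws std::system_error with resource_deadlock_would_occur.

        #include <iostream>
        #include <mutex>
        #include <system_error>

        int main() {
            std::recursive_mutex m;

            // The mutex is recursive: one thread may acquire it repeatedly,
            // as long as it unlocks the same number of times.
            m.lock();
            m.lock();
            m.unlock();
            m.unlock();
            std::cout << "recursive_mutex: locked twice from one thread, fine\n";

            // A unique_lock tracks ownership itself: asking it to lock() while it
            // already owns the mutex is refused, whatever kind of mutex it wraps.
            std::unique_lock<std::recursive_mutex> guard(m);
            try {
                guard.lock();   // throws: the *lock object* already owns the mutex
            } catch (const std::system_error& e) {
                std::cout << "unique_lock refused the second lock(): " << e.what() << '\n';
            }
        }

    In the original class, having FFTBuf<T>::lock() and unlock() call _mut.lock() and _mut.unlock() directly would avoid asking a single unique_lock object to lock twice.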

    Read the article

  • Why doesn't a timed lock throw a timeout exception in C++0x?

    - by Vicente Botet Escriba
    C++0x allows you to try to lock a mutex until a given point in time is reached, returning a boolean that states whether the mutex has been locked or not:

        template <class Clock, class Duration>
        bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);

    In some contexts, I consider it an exceptional situation when locking fails because of a timeout. In that case an exception would be more appropriate. To mark the difference, a function lock_until could be used that raises a timeout exception when the time is reached before the lock is acquired:

        template <class Clock, class Duration>
        void lock_until(const chrono::time_point<Clock, Duration>& abs_time);

    Do you think that lock_until would be more appropriate in some contexts? If yes, in which ones? If no, why would try_lock_until always be the better choice?
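
    As one possible illustration of the proposed semantics (my own sketch, not part of the question), lock_until can be layered on top of the existing try_lock_until so that a timeout surfaces as an exception; the lock_timeout type and the free-function form are assumptions made for the example.

        #include <chrono>
        #include <iostream>
        #include <mutex>
        #include <stdexcept>
        #include <thread>

        struct lock_timeout : std::runtime_error {
            lock_timeout() : std::runtime_error("timed out waiting for mutex") {}
        };

        // lock_until expressed in terms of try_lock_until: timing out becomes exceptional.
        template <class Lockable, class Clock, class Duration>
        void lock_until(Lockable& m,
                        const std::chrono::time_point<Clock, Duration>& abs_time) {
            if (!m.try_lock_until(abs_time))
                throw lock_timeout();
        }

        int main() {
            std::timed_mutex m;
            m.lock();                                 // main thread holds the lock

            std::thread contender([&m] {
                auto deadline = std::chrono::steady_clock::now()
                              + std::chrono::milliseconds(50);
                try {
                    lock_until(m, deadline);          // times out while main still holds m
                    m.unlock();                       // not reached in this scenario
                } catch (const lock_timeout& e) {
                    std::cout << "exceptional path: " << e.what() << '\n';
                }
            });

            contender.join();
            m.unlock();
        }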

    Read the article

  • Will a lock() statement block all threads in the process/appdomain?

    - by MikeJ
    Maybe the question sounds silly, but I don't understand something about threads and locking, and I would like to get a confirmation (here's why I ask). So, if I have 10 servers and 10 requests arrive at each server at the same time, that's 100 requests across the farm. Without locking, that's 100 requests to the database. If I do something like this:

        private static readonly object myLockHolder = new object();

        if (Cache[key] == null)
        {
            lock (myLockHolder)
            {
                if (Cache[key] == null)
                {
                    Cache[key] = LengthyDatabaseCall();
                }
            }
        }

    How many database requests will I make? 10? 100? Or as many as I have threads?

    Read the article

  • SQL Server concurrency and generated sequence

    - by Goyuix
    I need a sequence of numbers for an application, and I am hoping to leverage the abilities of SQL Server to do it. I have created the following table and procedure (in SQL Server 2005):

        CREATE TABLE sequences
        (
            seq_name  varchar(50) NOT NULL,
            seq_value int         NOT NULL
        )

        CREATE PROCEDURE nextval @seq_name varchar(50)
        AS
        BEGIN
            DECLARE @seq_value INT
            SET @seq_value = -1

            UPDATE sequences
            SET @seq_value = seq_value = seq_value + 1
            WHERE seq_name = @seq_name

            RETURN @seq_value
        END

    I am a little concerned that without locking the table/row, another request could happen concurrently and end up returning the same number to another thread or client. This would be very bad, obviously. Is this design safe in this regard? Is there something I can add that would add the necessary locking to make it safe? Note: I am aware of IDENTITY inserts in SQL Server - and that is not what I am looking for in this particular case. Specifically, I don't want to be inserting/deleting rows. This is basically to have a central table that manages the sequential number generator for a bunch of sequences.

    Read the article

  • DiscountASP.NET Launches SQL Server Profiling as a Service

    - by wisecarver
    DiscountASP.NET announces an enhancement to our SQL Server hosting with the launch of SQL Server Profiling as a service. SQL Profiler is a powerful tool that allows application and database developers to troubleshoot general SQL locking problems and performance issues, and to perform database tuning. With our SQL Profiling as a Service, customers can schedule a database trace at a specific time of their choosing, offering a new way to help our customers troubleshoot. For more information, visit: http://www...(read more)

    Read the article

  • AutoCAD on Linux Ubuntu 11.10!

    - by gabriel
    I have been trying for 3 years now to install AutoCAD, 3ds Max and Revit Architecture on Ubuntu with the help of Wine! Every year I am very optimistic, because I can see that the new Wine versions have already improved. So now I am starting again on a clean Ubuntu install, trying to install AutoCAD 2013 with Wine 1.4. I am not trying to get an answer only for myself; I want the whole Ubuntu community to try this, so that we can finally achieve it! Winetricks now has .NET Framework 4 available to install; the lack of it is the reason I have not been able to run AutoCAD in the past. I would like to remove my Windows 7 partition from my PC completely and move to a Linux machine without losing the powerful architectural programs. I know all about Blender and such, so I just want you to help find a solution for this, because I know there is a solution! Maybe I will have to learn all the C++ or Python etc. stuff. But I am sure that a solution can come with the help of all of us! Any suggestion about this problem will be very nice and helpful. Thanks in advance! Gabriel

    Read the article

  • App Store: Profitability for Game Developers

    - by Bunkai.Satori
    In recent days, I've been spending significant time looking into the chances of profitability on the App Store for developers. I have found many articles. Some of them are highly optimistic, while others are extremely skeptical. This article is extremely skeptical. It even claims to have backed its conclusions with objective sales numbers. This other, pessimistic article says that games developed by single individuals get 20 downloads a day. Can I kindly ask for clarification, from a business viewpoint, on whether average developers publishing games and software on the App Store can cover their living expenses, and even whether they can become profitable? Is it achievable to generate revenues of 50,000 USD yearly on the App Store as a single developer? I would like to stay as realistic as possible. Although the question might look subjective, a good businessman will be able to estimate the chances for profitability and prosperity within the App Store.

    Read the article

  • How your Standard can become AWEsome

    - by NeilHambly
    The title is a fun play on words to illustrate that, for Standard Edition of SQL Server 2005/2008, since the release of these Cumulative Updates - SQL 2005 SP3 & CU4 / SQL 2008 SP1 & CU2 - we can make real use of AWE! Since mid-2009, when these CUs were released, the required "lock pages in memory" privilege, which previously was only usable with Enterprise Edition, lets us make use of those AWE APIs for resolving working set trim issues that resulted...(read more)

    Read the article

  • Sets, Surrogates, Normalisation, Referential Integrity - the Theory with example Scaling considerati

    - by tonyrogerson
    The slides and demos for the SQLBits session I did today at SQLBits in London are attached. The agenda was...

    Thinking in Sets

    Surrogate Keys
    - What they are
    - Comparison of NEWID, NEWSEQUENTIALID, IDENTITY
    - Fragmentation

    Normalisation
    - An introduction – what is it? Why use it?
    - Joins – pre-filter problems, index intersection
    - Fragmentation again

    Referential Integrity
    - Optimiser -> query rewrite
    - Locking considerations around Foreign Keys and Declarative RI (using Triggers)...(read more)

    Read the article

  • Does learning to develop for iOS create a lock-in?

    - by Jungle Hunter
    If I begin my career (first job) with development on the iOS platform, does that lock me into iOS and Mac OS X development only? By locking me in, I mean: will that create barriers for me to switch technologies, as I would be working mainly with Objective-C? If yes, does that make my career choices limited? I'm interested in comparing this with Android development, which, if pursued, will leave me with Java skills (correct me if I'm wrong) that I can use elsewhere.

    Read the article

  • How do I let customers run arbitrary code as securely as possible?

    - by Tyler
    I'd like to offer a service where customers can write arbitrary Java code, send it to me, and I'll run it for them on Amazon EC2. My question is: how can I do this without exposing one customer's data to another customer? Right now I'm thinking that each customer can be sandboxed as their own OS-level user with restricted permissions. Is that good enough? I understand that this is a tricky issue, but it seems to be one that many people, such as the designers of multi-user OSes and Amazon themselves, are solving, so I am optimistic that there might be a good approach.
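
    For whatever it is worth, here is a rough sketch of the OS-level-user idea on Linux (an assumption-laden illustration, not a complete sandbox): the launcher forks, caps CPU time and address space with setrlimit, drops to a dedicated unprivileged uid/gid, and only then execs the customer's JVM. The uid 1001 and the /sandbox/customer1 path are hypothetical; a real deployment would add filesystem permissions, network policy and per-customer isolation (containers or separate instances) on top.

        #include <sys/resource.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            pid_t pid = fork();
            if (pid == 0) {                                    // child: becomes the sandboxed job
                rlimit cpu{30, 30};                            // at most 30 seconds of CPU time
                rlimit mem{512u << 20, 512u << 20};            // at most 512 MB of address space
                setrlimit(RLIMIT_CPU, &cpu);
                setrlimit(RLIMIT_AS, &mem);

                if (setgid(1001) != 0 || setuid(1001) != 0) {  // drop to an unprivileged user
                    std::perror("could not drop privileges");  // never run customer code as root
                    _exit(1);
                }
                execlp("java", "java", "-cp", "/sandbox/customer1", "Main", (char*)nullptr);
                std::perror("exec failed");                    // only reached if exec fails
                _exit(1);
            }
            int status = 0;
            waitpid(pid, &status, 0);                          // parent: collect the result
            std::printf("customer job exited with status %d\n", status);
        }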

    Read the article

  • The Recovery: New Challenges for your Supply Chain!

    - by [email protected]
    Nearly half of CFOs are planning to reduce their inventory during the first half of 2010, in part due to supply chain improvements that allow them to hold less product, but also because of reduced demand, according to Kate O'Sullivan, Sr. Editor at CFO Magazine. Her view is based on this quarter's Duke University Global Business Outlook Survey. Highlights:

    - Employment will be a drag on the economy: full-time employment to increase by 1%, temp hiring to grow less than 1%, outsourcing 4%.
    - 70% of CFOs at SMEs say credit conditions are worse than 12 months ago, placing strains on inventory growth.
    - Asia and China finance execs are more optimistic than their EMEA or US counterparts and expect stronger growth in capital spending, with a 16% gain.

    Source: "Slouching Towards Recovery", CFO Magazine, April 2010, pgs 19-20

    Read the article
