Search Results

Search found 800 results on 32 pages for 'locks'.

Page 8 of 32

  • Prevent the "System" process from locking my files in a shared folder.

    - by Kamarey
    I have an application that creates files to be processed by SQL bulk insert. The files are created in a shared folder on another server and then picked up from there by SQL. The problem is that sometimes SQL returns an error that the file is locked by another process and can't be accessed. The process that locks these files is the "System" process. It looks like it locks the files because they are in a shared folder, but I'm not sure. Using software to unlock the files manually is not an option, as the whole bulk process is automatic. The question is: why does the "System" process lock these files, and is there a way to prevent it?

    Read the article

  • Network card dies when trying to wake up laptop from sleep

    - by Bugmaster
    I have a Dell XPS 1640 laptop (aka Dell Studio XPS 16), running Vista. Whenever the laptop wakes up from sleep, my Broadcom LAN card immediately dies. Any attempt to access it completely locks up whichever program I use to access it. There appears to be no way to re-awaken the card, other than rebooting; note that Vista does not even shut down properly if some program is waiting for the card. So far, I have tried the following: disabled the "allow your computer to shut down this device to save power" option in the device properties (this had no effect); attempted to disable and then enable the card in the device properties (no effect); tried ipconfig /renew, which locks up ipconfig to the point where I can't even kill its process; force-installed updated drivers from Broadcom (no effect). Does anyone have any other ideas?

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory.

    The problem with shared memory

    Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory.

    Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model such as actors, which deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently.

    At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (e.g. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly.

    A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system.

    A sync-free data structure

    Some concurrent data structures are possible to write in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed-size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this Intel article which reuses the nodes by putting them in a second queue to send back to the producer.

    * In all these cases, you need to use memory barriers correctly, but these are local to a core, so they don't have the same scalability problems as atomic memory operations.

    Performance tests

    I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64-bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here. Both concurrent collections perform better than the lock-based one as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did.

    So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6-core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting. As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that need invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which, after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment!

    Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores?

    Still to be solved

    The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general-purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
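    As an illustration of the fixed-size circular-buffer SPSC queue described above, here is a minimal C# sketch. It is not the NAct or proof-of-concept code from the post; the type name, the spare-slot trick, and the use of volatile fields as the memory-barrier mechanism are assumptions made for the example.

        // Sketch of a fixed-size single-producer/single-consumer queue.
        // Exactly one thread may call TryEnqueue and exactly one other thread TryDequeue;
        // under that restriction no atomic read-modify-write operations are needed.
        public sealed class SpscQueue<T>
        {
            private readonly T[] _buffer;
            private volatile int _head;   // next slot to read, advanced only by the consumer
            private volatile int _tail;   // next slot to write, advanced only by the producer

            public SpscQueue(int capacity)
            {
                _buffer = new T[capacity + 1];   // one slot kept empty to tell "full" from "empty"
            }

            public bool TryEnqueue(T item)       // producer thread only
            {
                int tail = _tail;
                int next = (tail + 1) % _buffer.Length;
                if (next == _head) return false; // full
                _buffer[tail] = item;
                _tail = next;                    // volatile write publishes the item to the consumer
                return true;
            }

            public bool TryDequeue(out T item)   // consumer thread only
            {
                int head = _head;
                if (head == _tail) { item = default(T); return false; } // empty
                item = _buffer[head];
                _head = (head + 1) % _buffer.Length;                    // volatile write releases the slot
                return true;
            }
        }

    The volatile writes to the tail and head indices give the release/acquire ordering that the post's footnote about memory barriers refers to; a production version would also pad the two indices onto separate cache lines to avoid false sharing between the producer and the consumer.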

    Read the article

  • A deadlock was detected while trying to lock variables in SSIS

    Error: 0xC001405C at SQL Log Status: A deadlock was detected while trying to lock variables "User::RowCount" for read/write access. A lock cannot be acquired after 16 attempts. The locks timed out.

    Have you ever considered variable locking when building your SSIS packages? I expect many people haven't, just because most of the time you never see an error like the one above. I'll try and explain a few key concepts about variable locking, and hopefully you will never see that error.

    First of all, what is all this variable locking about? Put simply, SSIS variables have to be locked before they can be accessed, and then of course unlocked once you have finished with them. This is baked into SSIS, presumably to reduce the risk of race conditions, but with that comes some additional overhead in that you need to be careful to avoid lock conflicts in some scenarios. The most obvious place you will come across any hint of locking (no pun intended) is the Script Task or Script Component with their ReadOnlyVariables and ReadWriteVariables properties. These two properties allow you to enter lists of variables to be used within the task, or to put it another way, lists of variables to be locked so that they are available within the task. During the task pre-execute phase the variables are locked, you then use them during the execute phase when your code is run, and they are then unlocked for you during the post-execute phase. So by entering the variable names in one of the two lists, the locking is taken care of for you, and you just read and write to the Dts.Variables collection that is exposed in the task for that purpose. As you can see in the image above, the variable PackageInt is specified, which means that when I write the code inside that task I don't have to worry about locking at all, as shown below.

        public void Main()
        {
            // Set the variable value to something new
            Dts.Variables["PackageInt"].Value = 199;

            // Raise an event so we can play in the event handler
            bool fireAgain = true;
            Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);

            Dts.TaskResult = (int)ScriptResults.Success;
        }

    As you can see, as well as accessing the variable, hassle free, I also raise an event. Now consider a scenario where I have an event handler as well, as shown below. What if my event handler tries to use the same variable too? Well, obviously for the point of this post, it fails with the error quoted previously. The reason why is clearly illustrated if you consider the following sequence of events:

    1. Package execution starts
    2. Script Task in Control Flow starts
    3. Script Task in Control Flow locks the PackageInt variable as specified in the ReadWriteVariables property
    4. Script Task in Control Flow executes its script, and the On Information event is raised
    5. The On Information event handler starts
    6. Script Task in On Information event handler starts
    7. Script Task in On Information event handler attempts to lock the PackageInt variable (for either read or write, it doesn't matter), but will fail because the variable is already locked.

    The problem is caused by the event handler task trying to use a variable that is already locked by the task in Control Flow. Events are always raised synchronously, therefore the task in Control Flow that is raising the event will not regain control until the event handler has completed, so we really do have an unresolvable locking conflict, better known as a deadlock.

    In this scenario we can easily resolve the problem by managing the variable locking explicitly in code, so there is no need to specify anything for the ReadOnlyVariables and ReadWriteVariables properties.

        public void Main()
        {
            // Set the variable value to something new, with explicit lock control
            Variables lockedVariables = null;
            Dts.VariableDispenser.LockOneForWrite("PackageInt", ref lockedVariables);
            lockedVariables["PackageInt"].Value = 199;
            lockedVariables.Unlock();

            // Raise an event so we can play in the event handler
            bool fireAgain = true;
            Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);

            Dts.TaskResult = (int)ScriptResults.Success;
        }

    Now the package will execute successfully, because the variable lock has already been released by the time the event is raised, so no conflict occurs. For those of you with a SQL engine background this should all sound strangely familiar, and it boils down to getting in and out as fast as you can to reduce the risk of lock contention, be that SQL pages or SSIS variables.

    Unfortunately we cannot always manage the locking ourselves. The Execute SQL Task is very often used in conjunction with variables, either to pass in parameter values or to get results out. Either way, the task will manage the locking for you, and will fail when it cannot lock the variables it requires. The scenario outlined above is a clear-cut deadlock scenario: both parties are waiting on each other, so it is unresolvable. The mechanism used within SSIS isn't actually that clever, and whilst the message says it is a deadlock, it really just means it tried a few times and then gave up. The last part of the error message is actually the most accurate in terms of the failure: A lock cannot be acquired after 16 attempts. The locks timed out.

    Now this may come across as a recommendation to always manage locking manually in the Script Task or Script Component yourself, but I think that would be an overreaction. It is more of a reminder to be aware that in high-concurrency scenarios, especially when sharing variables across multiple objects, locking is an important design consideration.

    Update – Make sure you don't use explicit locking as well as leaving the variable names in the ReadOnlyVariables and ReadWriteVariables lock lists, otherwise you'll get the deadlock error; you cannot lock a variable twice!

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager: Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache. Use of TransactionMap interface via the included resource adapter. Use of the new ACID transaction framework, introduced in Coherence 3.6.   Each of these may have significant drawbacks for certain workloads. Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those workloads that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency as well as a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions) since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries) since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit of this approach is that it avoids the limitations of Coherence's write-through caching implementation. TransactionMap is generally used when Coherence acts as system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be either used to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being: The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects). If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound since the commit phase is usually very short (all locks having been previously acquired). 
    Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity). The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove(). The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect of this is that this feature has not seen significant adoption, meaning that any use of it is subject to the usual headaches associated with being an early adopter (greater chance of bugs and greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature. In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.
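    To make the read-only-cache option above concrete, here is a minimal, product-agnostic sketch in C# of the optimistic concurrency check it relies on. In-memory dictionaries stand in for the cache and the system of record, and all names are illustrative; in a real application the version check would be part of the database UPDATE statement itself (e.g. "UPDATE ... WHERE id = @id AND version = @expectedVersion").

        using System.Collections.Concurrent;

        // Sketch: a possibly stale read-only cache over a versioned system of record.
        // Correctness comes from the version check applied at update time, not from the cache.
        public sealed class VersionedStore
        {
            public sealed class Entry
            {
                public Entry(string value, long version) { Value = value; Version = version; }
                public string Value { get; private set; }
                public long Version { get; private set; }
            }

            // Illustrative stand-ins for the database and the cache.
            private readonly ConcurrentDictionary<string, Entry> _systemOfRecord = new ConcurrentDictionary<string, Entry>();
            private readonly ConcurrentDictionary<string, Entry> _cache = new ConcurrentDictionary<string, Entry>();

            public Entry Read(string key)
            {
                Entry cached;
                if (_cache.TryGetValue(key, out cached)) return cached;   // may be stale
                Entry fresh;
                if (_systemOfRecord.TryGetValue(key, out fresh)) _cache[key] = fresh;
                return fresh;
            }

            // Succeeds only if nobody updated the entry since expectedVersion was read.
            public bool TryUpdate(string key, string newValue, long expectedVersion)
            {
                Entry current;
                if (!_systemOfRecord.TryGetValue(key, out current) || current.Version != expectedVersion)
                    return false;                                          // lost the race: re-read and retry
                if (!_systemOfRecord.TryUpdate(key, new Entry(newValue, expectedVersion + 1), current))
                    return false;
                _cache.TryRemove(key, out current);                        // invalidate rather than update the cache
                return true;
            }
        }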

    Read the article

  • SQL Server deadlocks between select/update or multiple selects

    - by RobW
    All of the documentation on SQL Server deadlocks talks about the scenario in which operation 1 locks resource A and then attempts to access resource B, while operation 2 locks resource B and attempts to access resource A. However, I quite often see deadlocks between a select and an update, or even between multiple selects, in some of our busy applications. I find some of the finer points of the deadlock trace output pretty impenetrable, but I would really just like to understand what can cause a deadlock between two single operations. Surely if a select has a read lock, the update should just wait before obtaining an exclusive lock, and vice versa? This is happening on SQL Server 2005, not that I think that makes a difference.

    Read the article

  • Unlocking SVN working copy with unversioned resources

    - by Vijay Dev
    I have a repository which contains some unversioned directories and files. The server running svn was recently changed, and since the checkout was done using the URL svn://OLD-IP, I relocated my svn working copy, this time to the URL svn://NEW-DOMAIN-NAME. Now, since there are some unversioned resources, the switch did not happen properly and the working copy got locked. A cleanup operation did not work either, because of these unversioned resources. I looked it up on the net, found out about svn:ignore and tried that, but to no avail. I am unable to release all the locks. Any ideas on solving the problem? Once I release the locks, I believe I can use svn:ignore and carry on with the relocate operation.

    Read the article

  • Java Concurrency: CAS vs Locking

    - by Hugo Walker
    I'm currently reading the book Java Concurrency in Practice. Chapter 15 discusses nonblocking algorithms and the compare-and-swap (CAS) method, and it is written that CAS performs much better than locking. I would like to hear from people who have already worked with both of these concepts: when do you prefer which one? Is it really so much faster? Personally, the usage of locks is much clearer and easier for me to understand, and maybe even better to maintain (please correct me if I am wrong). Should we really focus on building our concurrent code around CAS rather than locks to get a better performance boost, or is maintainability the more important thing? I know there is maybe not a strict rule for when to use what, but I would just like to hear some opinions and experiences with the new concept of CAS.
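    For a concrete picture of what the book means by CAS, here is a minimal sketch in C# (Interlocked.CompareExchange is the analogue of Java's AtomicLong.compareAndSet; the class and member names are made up for the example). In real code Interlocked.Increment, or AtomicLong.incrementAndGet in Java, already wraps this retry loop for you.

        using System.Threading;

        // A counter incremented once with a CAS retry loop and once under a lock.
        public sealed class Counters
        {
            private long _casValue;
            private long _lockedValue;
            private readonly object _gate = new object();

            public void IncrementWithCas()
            {
                long current, updated;
                do
                {
                    current = Interlocked.Read(ref _casValue);
                    updated = current + 1;
                    // Retry if another thread changed the value between the read and the CAS.
                } while (Interlocked.CompareExchange(ref _casValue, updated, current) != current);
            }

            public void IncrementWithLock()
            {
                lock (_gate)
                {
                    _lockedValue++;
                }
            }
        }

    The CAS version never blocks, which is where the performance advantage under contention comes from, but as the question suggests, the lock version is usually easier to reason about once the critical section grows beyond a single variable.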

    Read the article

  • Is using ReaderWriterLockSlim a bad idea for long lived objects?

    - by uriDium
    I am trying to track down the reason that an application has periods of bad performance. I think I have linked the bad performance to the points where garbage collection is run for Gen 2. I got a profiling tool (CLR Profiler) and was quite surprised by the results. In my test I was spawning and processing millions of objects. However, the biggest hog of the Gen 2 space comes from something called Threading.ReaderWriterCount, which comes from System.Threading.ReaderWriterLockSlim::InitializeThreadCounts. I know nothing about the inner workings of ReaderWriterLockSlim, but from what I am getting from the reports, it is okay to have one or two locks for longer-lived objects, but to use other kinds of locks if you are going to have many smaller objects. Does anyone have any comments or experience with ReaderWriterLockSlim, and/or know what to look for if it seems that GC is killing application performance?
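    For reference, a minimal sketch of the usage pattern the reports seem to point towards: one long-lived ReaderWriterLockSlim guarding a shared structure, rather than one lock per small object (many short-lived lock instances would be consistent with the many per-thread ReaderWriterCount records showing up in the profile). Type and member names are illustrative.

        using System.Collections.Generic;
        using System.Threading;

        // One long-lived lock guarding many entries, instead of one lock per entry.
        public sealed class SharedIndex
        {
            private readonly Dictionary<string, int> _map = new Dictionary<string, int>();
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

            public bool TryGet(string key, out int value)
            {
                _lock.EnterReadLock();               // many readers can hold this at once
                try { return _map.TryGetValue(key, out value); }
                finally { _lock.ExitReadLock(); }
            }

            public void Set(string key, int value)
            {
                _lock.EnterWriteLock();              // writers are exclusive
                try { _map[key] = value; }
                finally { _lock.ExitWriteLock(); }
            }
        }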

    Read the article

  • Performance Counters in Server Development

    - by Mubashar Ahmad
    Dear gurus, I think we can all agree on the value and worth of performance counters when developing and maintaining a server-type application. I would like to know the best way to implement them, specifically in C#. Usually performance counters have the following attributes: they are shared/global; writing requires locks to ensure synchronization; and reading sometimes requires locks too. Is it better to update them asynchronously, and what is the best way to do that? (I am planning to use the ThreadPool.QueueUserWorkItem function; please tell me your opinion on this too.) If my question seems a bit vague, just take the example of a Hello World WCF service where I want to know how many times it is being hit, overall and within a certain period, and the average/min/max response times, overall and within a certain period. Moreover, if anyone knows about a specialized mechanism provided by .NET or WCF, then please let me know as well.
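    For the hit-count and response-time part of the question, one minimal sketch (all names are illustrative) keeps the counters as plain fields updated with Interlocked, which needs neither locks nor queuing the updates via ThreadPool.QueueUserWorkItem; reads are approximate snapshots, which is usually acceptable for monitoring. Min/max and per-period figures would need a small CAS loop or a short lock around a snapshot, and System.Diagnostics.PerformanceCounter can be used if the values should also be published as Windows performance counters.

        using System.Threading;

        // In-process counters updated without locks; reads are approximate snapshots.
        public sealed class ServiceStats
        {
            private long _hits;
            private long _totalResponseTicks;

            public void Record(long elapsedTicks)
            {
                Interlocked.Increment(ref _hits);
                Interlocked.Add(ref _totalResponseTicks, elapsedTicks);
            }

            public long Hits
            {
                get { return Interlocked.Read(ref _hits); }
            }

            public double AverageResponseMs
            {
                get
                {
                    long hits = Interlocked.Read(ref _hits);
                    long ticks = Interlocked.Read(ref _totalResponseTicks);
                    return hits == 0 ? 0.0 : (ticks / (double)hits) / System.TimeSpan.TicksPerMillisecond;
                }
            }
        }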

    Read the article

  • DataReader Behaviour With SQL Server Locking

    - by Graham
    We are having some issues with our data layer when large datasets are returned from a SQL server query via a DataReader. As we use the DataReader to populate business objects and serialize them back to the client, the fetch can take several minutes (we are showing progress to the user :-)), but we've found that there's some pretty hard-core locking going on on the affected tables which is causing other updates to be blocked. So I guess my slightly naive question is, at what point are the locks which are taken out as a result of executing the query actually relinquished? We seem to be finding that the locks are remaining until the last row of the DataReader has been processed and the DataReader is actually closed - does that seem correct? A quick 101 on how the DataReader works behind the scenes would be great as I've struggled to find any decent information on it. I should say that I realise the locking issues are the main concern but I'm just concerned with the behaviour of the DataReader here.
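    Not an answer on the DataReader internals, but a sketch of one common mitigation (the helper and names below are hypothetical): drain the reader into memory quickly and close it before the slow business-object building and serialization, so the query finishes, and its locks can be released, as early as possible. Enabling row versioning (READ_COMMITTED_SNAPSHOT) is the other usual route, with its own trade-offs.

        using System.Collections.Generic;
        using System.Data.SqlClient;

        public static class DataAccess
        {
            // Copy the raw rows first, then do the expensive per-object work afterwards.
            public static List<object[]> FetchRows(string connectionString, string sql)
            {
                var rows = new List<object[]>();
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            var values = new object[reader.FieldCount];
                            reader.GetValues(values);   // copy the row without per-column processing
                            rows.Add(values);
                        }
                    }                                   // reader closed here; the query is finished
                }
                return rows;                            // build and serialize business objects from this list
            }
        }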

    Read the article

  • Python - question regarding the concurrent use of `multiprocess`.

    - by orokusaki
    I want to use Python's multiprocessing to do concurrent processing without using locks (locks to me are the opposite of multiprocessing) because I want to build up multiple reports from different resources at the exact same time during a web request (normally takes about 3 seconds but with multiprocessing I can do it in .5 seconds). My problem is that, if I expose such a feature to the web and get 10 users pulling the same report at the same time, I suddenly have 60 interpreters open at the same time (which would crash the system). Is this just the common sense result of using multiprocessing, or is there a trick to get around this potential nightmare? Thanks

    Read the article

  • Snapshot on, still deadlocks, ROWLOCK

    - by Patto
    I turned snapshot isolation on in my database using the following code:

        ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON

    and got rid of lots of deadlocks. But my database still produces deadlocks when I run a script every hour to clean up 100,000+ rows. Is there a way I can avoid deadlocks in my cleanup script? Do I need to set ROWLOCK specifically in that query? Is there a way to increase the number of row-level locks that a database uses? How are locks promoted, from row level to page level to table level? My delete script is rather simple:

        delete statvalue
        from statValue, (select dateadd(minute,-60, getdate()) as cutoff_date) cd
        where temporaryStat = 1
        and entrydate < cutoff_date

    Right now I am looking for a quick solution, but a long-term solution would be even nicer. Thanks a lot, Patrikc

    Read the article

  • boost scoped_lock mutex crashes

    - by JahSumbar
    Hello, I have protected a std::queue's access functions (push, pop, size) with boost::mutexes and boost::mutex::scoped_lock inside those functions. From time to time it crashes in a scoped lock; the call stack is this:

        0 0x0040f005 boost::detail::win32::interlocked_bit_test_and_set include/boost/thread/win32/thread_primitives.hpp 361
        1 0x0040e879 boost::detail::basic_timed_mutex::timed_lock include/boost/thread/win32/basic_timed_mutex.hpp 68
        2 0x0040e9d3 boost::detail::basic_timed_mutex::lock include/boost/thread/win32/basic_timed_mutex.hpp 64
        3 0x0040b96b boost::unique_lock<boost::mutex>::lock include/boost/thread/locks.hpp 349
        4 0x0040b998 unique_lock include/boost/thread/locks.hpp 227
        5 0x00403837 MyClass::inboxSize

    This is my inboxSize function, which uses this code:

        size_t MyClass::inboxSize()
        {
            boost::mutex::scoped_lock scoped_lock(m_inboxMutex);
            return m_inbox.size();
        }

    and the mutex is declared like this:

        boost::mutex m_inboxMutex;

    It crashes at the last pasted line in this function:

        inline bool interlocked_bit_test_and_set(long* x, long bit)
        {
            long const value = 1 << bit;
            long old = *x;

    and x has this value: 0xababac17. Thanks for the help

    Read the article

  • How to debug ConcurrentModificationException?

    - by Dani
    I encountered a ConcurrentModificationException and, looking at it, I can't see why it's happening; the area throwing the exception and all the places modifying the collection are surrounded by synchronized (this.locks.get(id)) { ... } // locks is a HashMap<String, Object>. I tried to catch the pesky thread, but all I could nail down (by setting a breakpoint in the exception) is that the throwing thread owns the monitor while the other thread (there are two threads in the program) sleeps. How should I proceed? What do you usually do when you encounter similar threading issues?

    Read the article

  • Strange Locking Behaviour in SQL Server 2005

    - by SQL Learner
    Can anyone please tell me why the following statement inside a given stored procedure returns repeated results, even with locks on the rows used by the first SELECT statement?

        BEGIN TRANSACTION
        DECLARE @Temp TABLE ( ID INT )
        INSERT INTO @Temp
        SELECT ID FROM SomeTable WITH (ROWLOCK, UPDLOCK, READPAST) WHERE SomeValue <= 10
        INSERT INTO @Temp
        SELECT ID FROM SomeTable WITH (ROWLOCK, UPDLOCK, READPAST) WHERE SomeValue >= 5
        SELECT * FROM @Temp
        COMMIT TRANSACTION

    Any values in SomeTable for which SomeValue is between 5 and 10 will be returned twice, even though they were locked in the first SELECT. I thought that locks were in place for the whole transaction, and so I wasn't expecting the query to return repeated results. Why is this happening?

    Read the article

  • What are these folders for? Can I remove them? How?

    - by Water Cooler v2
    In a folder on the SVN server/repository that is designated for our project, there have appeared the following folders: branches/ conf/ db/ hooks/ locks/ tags/ trunk/ README.txt (file) format (file) We have all the code in the trunk folder. There were, as far as I can remember, only 3 or 4 folders earlier. Within the trunk folder, too, there are now these folders. OurCode/ conf/ db/ hooks/ locks/ README.txt (file) format (file) I understand many of these folders or files are not necessary, but I can't be too sure. My questions are: 1) What are each of these files and/or folders for? 2) Which are the ones that are not necessary? 3) How may I remove them from the server repository?

    Read the article

  • Optimal strategy to make a C++ hash table, thread safe

    - by Ajeet
    (I am interested in the design of an implementation, NOT a ready-made construct that will do it all.) Suppose we have a class HashTable (not a hash map implemented as a tree, but a hash table) and say there are eight threads. Suppose the read-to-write ratio is about 100:1, or even better 1000:1. Case A) Only one thread is a writer, and all threads, including the writer, can read from the HashTable (they may simply iterate over the entire hash table). Case B) All threads are identical and all could read/write. Can someone suggest the best strategy to make the class thread safe, with the following considerations: 1. top priority to the least lock contention; 2. second priority to the least number of locks. My understanding so far is this: one BIG reader-writer lock (semaphore); specialize the semaphore so that there could be eight instances of the writer resource for case B, where each writer resource locks one row (or range, for that matter), so I guess 1+8 mutexes. Please let me know if I am thinking along the correct lines, and how we could improve on this solution.
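    One way to act on "least contention first, fewest locks second" is lock striping, sketched below in C# (the question is language-neutral; the stripe count, the names, and the choice of ReaderWriterLockSlim as the reader-writer primitive are assumptions). Each key hashes to one of a small fixed set of reader-writer locks, so readers on different stripes never contend and a writer blocks only its own stripe. For case A a single reader-writer lock is often already enough, and note that a full iteration over the table needs to hold all stripe read locks.

        using System.Collections.Generic;
        using System.Threading;

        // Lock striping: one reader-writer lock per stripe, chosen by the key's hash.
        public sealed class StripedHashTable<TKey, TValue>
        {
            private const int StripeCount = 8;                  // e.g. one stripe per thread
            private readonly Dictionary<TKey, TValue>[] _buckets;
            private readonly ReaderWriterLockSlim[] _locks;

            public StripedHashTable()
            {
                _buckets = new Dictionary<TKey, TValue>[StripeCount];
                _locks = new ReaderWriterLockSlim[StripeCount];
                for (int i = 0; i < StripeCount; i++)
                {
                    _buckets[i] = new Dictionary<TKey, TValue>();
                    _locks[i] = new ReaderWriterLockSlim();
                }
            }

            private static int StripeFor(TKey key)
            {
                return (key.GetHashCode() & 0x7fffffff) % StripeCount;
            }

            public bool TryGet(TKey key, out TValue value)
            {
                int i = StripeFor(key);
                _locks[i].EnterReadLock();
                try { return _buckets[i].TryGetValue(key, out value); }
                finally { _locks[i].ExitReadLock(); }
            }

            public void Put(TKey key, TValue value)
            {
                int i = StripeFor(key);
                _locks[i].EnterWriteLock();
                try { _buckets[i][key] = value; }
                finally { _locks[i].ExitWriteLock(); }
            }
        }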

    Read the article

  • Can't work out security

    - by user215351
    I installed Ubuntu on a computer that I alone use. I was trying to find out how to repair hardware faults, and was surprised to find that I am not the owner and that there is a password that locks me out. I only set one password during setup, so what is this mysterious password? As far as I'm concerned the security is overdone; I'm sick of authenticating every 3 seconds. I need a simpler system.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 24 (sys.dm_db_index_operational_stats)

    - by Tamarick Hill
    The sys.dm_db_index_operational_stats Dynamic Management Function returns information about the IO, locking, and access methods for the indexes that you currently have on your SQL Server instance. This function takes four input parameters, which are (1) database_id, (2) object_id, (3) index_id, and (4) partition_number. Let's have a look at the results from this function against our AdventureWorks2012 database. This function returns a ton of columns, so not only will I not attempt to describe each of the columns, I won't even attempt to display all of them here. My query below will give you a subset of the columns returned from this function.

        SELECT database_id, object_id, index_id, partition_number,
               leaf_insert_count, leaf_delete_count, leaf_update_count, leaf_ghost_count,
               nonleaf_insert_count, nonleaf_delete_count, nonleaf_update_count,
               range_scan_count, forwarded_fetch_count,
               row_lock_count, row_lock_wait_count,
               page_lock_count, page_lock_wait_count,
               index_lock_promotion_attempt_count, index_lock_promotion_count,
               page_compression_attempt_count, page_compression_success_count
        FROM sys.dm_db_index_operational_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL)

    The first four columns in the result set represent the values that we passed in as our input parameters. If you use NULLs as I did, then you will see results for every index on your system. I specified a database_id, so my result set only shows those records pertaining to my AdventureWorks2012 database. The next columns in the result set provide you with information on how many inserts, deletes, or updates have taken place on your leaf and nonleaf index levels. The nonleaf levels refer to the intermediate and root index levels. In the middle of these you see a leaf_ghost_count column, which represents the number of records that have been logically deleted and marked as "ghosted" and are waiting on the background ghost cleanup process to physically remove them. The range_scan_count column represents the number of range or table scans that have been performed against an index. The forwarded_fetch_count column represents the number of rows that were returned from a forwarding row pointer. The row_lock_count and row_lock_wait_count columns represent the number of row locks that have been requested for an index and the number of times SQL has had to wait on a row lock, respectively. The page_lock_count and page_lock_wait_count columns represent the number of page locks that have been requested for an index and the number of times SQL has had to wait on a page lock, respectively. The index_lock_promotion_attempt_count column represents the number of times the database engine has attempted to promote a lock to the index level, and the index_lock_promotion_count column displays how many times that index lock promotion was successful. Lastly, the page_compression_attempt_count and page_compression_success_count columns represent how many times a page was attempted to be compressed and how many times the attempt was successful. As you can see there is a ton of information returned from this DMF. The DMV we reviewed yesterday (sys.dm_db_index_usage_stats) provided you with good information on when and how indexes have been used, but this DMF takes an even deeper dive into these statistics. If you are interested in performing a very detailed analysis on the operational stats of your indexes, this is not only a good place to start, but more than likely the best place.
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms174281.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • Automatic login vs. manual login and screensaver lock

    - by Erik Johansson
    Is there a way to prevent a command from running when I log in manually, but have it run when the computer starts up and GDM automatically logs me in? This is the setup: in the Gnome Startup Applications settings I have a command that locks the screen: gnome-screensaver-command -l. I have automatic login turned on. That means that the screen will be locked when I turn on the computer, but it will also be locked when I manually log in from GDM. Is there a way to prevent this?

    Read the article

  • Acer Aspire One and Kernel 2.6.35-25 Freeze

    - by Nerdfest
    I'm having a problem with an Acer Aspire One netbook after the latest kernel upgrade. Basically, doing anything relating to an external monitor locks the trackpad, and in some cases the keyboard as well. This lock will continue in Gnome even after reboots, and requires battery removal to fix. It does work in the graphical login manager, up until the problem occurs the first time. Any ideas on settings, etc., that I can change to make it work again?

    Read the article

  • Why does switching users completely hang my system every time?

    - by Stéphane
    I have a fresh install of 11.04 64bit, with 2 administrator accounts and 4 normal accounts. The 4 normal accounts (the kids' accounts) don't have passwords, they can login simply by clicking on their names. When any of the users -- either admin or normal -- tries to switch to another account by clicking in the top-right corner of the screen and selecting another user, the screen goes black and the entire system locks up. Even CTRL+ALT+F1 through F7 does nothing. This is reproducible 100% of the time on this system. I can ssh into the box when the console locks up, and by running top, I see that Xorg is consuming about 100% of the CPU. Looking at the output of "ps axfu" in bash while the system is in this "locked up" state, here is the lightdm and X process tree: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1153 0.0 0.1 183508 4292 ? Ssl Dec26 0:00 lightdm root 2187 0.4 4.6 265976 164168 tty7 Ss+ 00:43 0:21 \_ /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch stephane 2612 0.0 0.3 266400 10736 ? Ssl 01:52 0:00 \_ /usr/bin/gnome-session --session=ubuntu stephane 2650 0.0 0.0 12264 276 ? Ss 01:52 0:00 | \_ /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session /usr/bin/gnome-session --session=ubuntu stephane 2703 0.8 3.0 562068 106548 ? Sl 01:52 0:08 | \_ compiz stephane 2801 0.0 0.0 4264 584 ? Ss 01:52 0:00 | | \_ /bin/sh -c /usr/bin/compiz-decorator stephane 2802 0.0 0.3 265744 13772 ? Sl 01:52 0:00 | | \_ /usr/bin/unity-window-decorator ...cut... root 3024 80.6 0.3 107928 13088 tty8 Rs+ 01:53 12:34 \_ /usr/bin/X :1 -auth /var/run/lightdm/root/:1 -nolisten tcp vt8 -novtswitch That last process, pid #3024 in this case, is what has the CPU pegged. In case it matters (I suspect it might) here is what I think may be the relevant information for my video card, taken from /var/log/Xorg.0.log: [ 3392.653] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libglx.so [ 3392.653] (II) Module glx: vendor="FireGL - AMD Technologies Inc." [ 3392.653] compiled for 6.9.0, module version = 1.0.0 ... [ 3392.655] (II) LoadModule: "fglrx" [ 3392.655] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/drivers/fglrx_drv.so [ 3392.672] (II) Module fglrx: vendor="FireGL - ATI Technologies Inc." [ 3392.672] compiled for 1.4.99.906, module version = 8.88.7 [ 3392.672] Module class: X.Org Video Driver ... [ 3392.759] (==) fglrx(0): ATI 2D Acceleration Architecture enabled [ 3392.759] (--) fglrx(0): Chipset: "AMD Radeon HD 6410D" (Chipset = 0x9644) Lastly: I did see this posting: Change user on 11.10 hangs system ...but I checked, and the libpam-smbpass package isn't installed on this system.

    Read the article
