Search Results

Search found 679 results on 28 pages for 'consistency dbcc'.

Page 2/28 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • DBCC CHECKDB Fails Displaying 8914 Error

    An MS SQL Server database user might encounter database corruption issues due to improper system shutdown, metadata structure damage, human error, or virus infection. In most situations of database c... [Author: Mark Willium - Computers and Internet - May 14, 2010]

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to result in inconsistent behavior (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements? EDIT: I thought caching the query search results was an obvious necessity. The question is really asking about where to cache the result set and what tradeoffs might exist for each. For example, storing the IDs of the entities in the result set on the client, or storing the IDs of the entities themselves in the user's session on the web server, or in a temporary table in the database. I'm not looking specifically for a single solution as different scenarios may result in different approaches (and such a question would be more suited for stackoverflow.com rather than here), but more of a design comparison between the possible approaches.

    Read the article
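
    One of the options the question mentions, a temporary table in the database, can be sketched roughly as follows. This is only a hedged illustration; the table and column names (SearchSnapshot, SearchId, RowNum, EntityId) and the relevance ordering are hypothetical, not from the original post.

        -- Hypothetical snapshot table: one row per hit, keyed by a search id.
        CREATE TABLE SearchSnapshot (
            SearchId  UNIQUEIDENTIFIER NOT NULL,
            RowNum    INT              NOT NULL,
            EntityId  INT              NOT NULL,
            CreatedAt DATETIME         NOT NULL DEFAULT GETDATE(),
            PRIMARY KEY (SearchId, RowNum)
        );

        -- Run the expensive search once and freeze the ordered list of ids.
        DECLARE @SearchId UNIQUEIDENTIFIER = NEWID();
        INSERT INTO SearchSnapshot (SearchId, RowNum, EntityId)
        SELECT @SearchId,
               ROW_NUMBER() OVER (ORDER BY e.Relevance DESC),
               e.EntityId
        FROM dbo.Entities AS e
        WHERE e.Name LIKE '%search term%';   -- placeholder for the real 15s query

        -- Any page is then a cheap keyed lookup, stable across requests.
        SELECT s.RowNum, e.*
        FROM SearchSnapshot AS s
        JOIN dbo.Entities   AS e ON e.EntityId = s.EntityId
        WHERE s.SearchId = @SearchId
          AND s.RowNum BETWEEN 41 AND 60      -- page 3 at 20 per page
        ORDER BY s.RowNum;

    The tradeoff is database storage plus a cleanup job for stale SearchId rows; keeping the id list in the user's session or on the client instead trades that for web-server memory or request payload size.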

  • HD Crash SQL server -> DBCC - consistency errors in table 'sysindexes'

    - by Julian de Wit
    Hello. A client of mine has had an HD crash and a SQL DB got corrupt: they did not make backups, so they have a big problem. When I tried (as an ultimate measure) to DBCC-repair, I got the following message. Can anybody help me with this? Server: Msg 8966, Level 16, State 1, Line 1 Could not read and latch page (1:872) with latch type SH. sysindexes failed. Server: Msg 8944, Level 16, State 1, Line 1 Table error: Object ID 2, index ID 0, page (1:872), row 11. Test (columnOffsets->IsComplex (varColumnNumber) && (ColumnId == COLID_HYDRA_TEXTPTR || ColumnId == COLID_INROW_ROOT || ColumnId == COLID_BACKPTR)) failed. Values are 2 and 5. The repair level on the DBCC statement caused this repair to be bypassed. CHECKTABLE found 0 allocation errors and 1 consistency errors in table 'sysindexes' (object ID 2). DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    Read the article
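
    With no backups, the usual last-resort sequence for a SQL Server 2000 database looks roughly like the sketch below (CorruptDb and NewDb are placeholder names). This is hedged: corruption in system tables such as sysindexes often cannot be repaired at all, in which case scripting the schema and copying out whatever data is still readable is the only remaining option.

        -- Put the damaged database into single-user mode.
        ALTER DATABASE CorruptDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

        -- Most aggressive repair level; this can discard data.
        DBCC CHECKDB ('CorruptDb', REPAIR_ALLOW_DATA_LOSS);

        -- Re-check and return to multi-user.
        DBCC CHECKDB ('CorruptDb');
        ALTER DATABASE CorruptDb SET MULTI_USER;

        -- If repair still fails, salvage table by table into a new database:
        -- SELECT * INTO NewDb.dbo.SomeTable FROM CorruptDb.dbo.SomeTable;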

  • Windows 7 keeps insisting that it needs to check disk for consistency, but never does

    - by Mike
    Lately Windows 7 has been telling me that I need to check disk D: for consistency. This happens more than 50% of the time when booting up. The first time, I didn't touch anything so that it would go ahead and do its scan. It didn't seem to do anything - just booted straight into Windows. The second time I tried to skip it by pressing any key. It ignored all of my keystrokes and still counted down to 0 (then skipped the disk check). Sometimes, it gets down to 0 but then just hangs... no indication that anything is going on. This is happening on a < 3 month old laptop. C: and D: are on the same physical disk - just two partitions. I never get any notification that C: needs to be checked for consistency. It's a ~300 GB HD. C: has 60 GB (32 GB free) and D: has ~240 GB (122 GB free). What could be causing this to keep coming up? What can I do to fix it? Thanks!

    Read the article

  • Syntax for DBCC CHECKTABLE on all indexes?

    - by GuinnessFan
    Just want to check the syntax to make sure this is for one table and all indexes (default?).

        -- must be single user
        ALTER DATABASE database_name SET SINGLE_USER;

        DBCC CHECKTABLE ( "table_name" , REPAIR_ALLOW_DATA_LOSS ) WITH ALL_ERRORMSGS;

        -- turn back to multi user
        ALTER DATABASE database_name SET MULTI_USER;

    Also, should I be in the database containing the table to repair or should I be in master?

    Read the article
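
    For comparison, a hedged sketch of a check-only pass, run from inside the database that owns the table (database_name and table_name are placeholders). By default DBCC CHECKTABLE covers the heap or clustered index and all nonclustered indexes on the table, and without a repair option it does not require single-user mode:

        USE database_name;
        GO
        -- Check the table and all of its indexes; report errors only.
        DBCC CHECKTABLE ('dbo.table_name') WITH ALL_ERRORMSGS, NO_INFOMSGS;
        GO
        -- A single index can be checked by passing its index id, e.g. index id 2.
        DBCC CHECKTABLE ('dbo.table_name', 2) WITH ALL_ERRORMSGS, NO_INFOMSGS;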

  • is there a GOTCHA - DBCC CHECKDB ('DBNAME', NOINDEX)?

    - by Deb Anderson
    I am turning on DBCC CHECKDB in our OLTP environment (SQL 2005, 2008). System overhead is a very visible thing on our servers, so I want them to be as efficient as it makes sense for them to be. Hence, I want to turn on the NOINDEX option, an option I've never used before. My thoughts are these: if there is a problem with an index that is detected outside the integrity check, I can just rebuild the index. Also, the duration of the integrity checks will be drastically reduced, and the nastier corruption will still be detected. What is the flaw in my plan? Thanks, Deb

    Read the article
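
    For reference, a hedged sketch of the two lighter-weight variants usually weighed against each other on busy OLTP servers (DBNAME is a placeholder). NOINDEX skips the checks of nonclustered indexes on user tables, while PHYSICAL_ONLY skips most logical checks but still catches torn pages, checksum failures and common hardware-induced corruption:

        -- Skip nonclustered-index checks on user tables only.
        DBCC CHECKDB ('DBNAME', NOINDEX);

        -- Alternative: physical structures and page checksums only, lowest overhead.
        DBCC CHECKDB ('DBNAME') WITH PHYSICAL_ONLY, NO_INFOMSGS;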

  • How to handle set based consistency validation in CQRS?

    - by JD Courtoy
    I have a fairly simple domain model involving a list of Facility aggregate roots. Given that I'm using CQRS and an event bus to handle events raised from the domain, how could you handle validation on sets? For example, say I have the following requirement: Facilities must have a unique name. Since I'm using an eventually consistent database on the query side, the data in it is not guaranteed to be accurate at the time the event processor processes the event. For example, a FacilityCreatedEvent is in the query database's event processing queue waiting to be processed and written into the database. A new CreateFacilityCommand is sent to the domain to be processed. The domain services query the read database to see if there are any other Facilities already registered with that name, but that returns false because the earlier FacilityCreatedEvent has not yet been processed and written to the store. The new CreateFacilityCommand will now succeed and raise another FacilityCreatedEvent, which would blow up when the event processor tries to write it into the database and finds that another Facility already exists with that name.

    Read the article
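
    One common workaround (a hedged sketch, not from the original post) is to enforce the set-wide rule with a small, strongly consistent reservation table on the command side, written in the same transaction that accepts the command, so a duplicate name is rejected before the event ever reaches the eventually consistent read model. Table, column and parameter names here are hypothetical:

        -- Hypothetical reservation table; the primary key is the real guard.
        CREATE TABLE FacilityNameReservation (
            FacilityName NVARCHAR(200)    NOT NULL PRIMARY KEY,
            FacilityId   UNIQUEIDENTIFIER NOT NULL
        );

        -- In the CreateFacility command handler, inside the same transaction
        -- that records the FacilityCreatedEvent:
        BEGIN TRAN;
            INSERT INTO FacilityNameReservation (FacilityName, FacilityId)
            VALUES (@Name, @FacilityId);   -- duplicate-key error if the name is taken
            -- ... append FacilityCreatedEvent to the event store here ...
        COMMIT;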

  • How do I ensure data consistency in this concurrent situation?

    - by MalcomTucker
    The problem is this: I have multiple competing threads (100+) that need to access one database table. Each thread will pass a String name - where that name exists in the table, the database should return the id for the row; where the name doesn't already exist, the name should be inserted and the id returned. There can only ever be one instance of name in the database - i.e. name must be unique. How do I ensure that thread one doesn't insert name1 at the same time as thread two also tries to insert name1? In other words, how do I guarantee the uniqueness of name in a concurrent environment? This also needs to be as efficient as possible - this has the potential to be a serious bottleneck. I am using MySQL and Java. Thanks

    Read the article
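
    With MySQL, letting a unique index arbitrate the race is usually simpler and faster than any application-level locking. A hedged sketch (the names table and its columns are hypothetical, and InnoDB is assumed):

        -- The UNIQUE key makes the database the single arbiter of uniqueness.
        CREATE TABLE names (
            id   INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255) NOT NULL,
            UNIQUE KEY uq_name (name)
        ) ENGINE=InnoDB;

        -- Insert-or-fetch in one statement; safe under concurrent callers.
        INSERT INTO names (name) VALUES ('name1')
            ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);
        SELECT LAST_INSERT_ID();   -- id of the new or the pre-existing row

    From Java, both statements have to run on the same connection, since LAST_INSERT_ID() is scoped per connection.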

  • I get a consistency error when I use my second hard drive

    - by Stavros
    I have two hard drives with Win8 installed on both of them. Sometimes, when I boot from the second one (via the F12 boot menu and the second drive selection) and later reboot and start my PC from the first one, I get a disk error for a consistency problem. Windows asks for a disk check, and after that I can't boot from the second drive anymore or have access to it. Why does this happen and how can I prevent it? What options do I have apart from reinstalling Windows? Thanks Stavros

    Read the article

  • LSI RAID monitor reports "Consistency Check inconsistency logging disabled"

    - by carlpett
    I have a server with an LSI MegaRAID 9261-8i controller. Recently I started getting alerts like this one: Controller ID: 1 Consistency Check inconsistency logging disabled, too many inconsistencies on VD: 0 Generated on: Sat May 12 04:06:40 2012 SYSTEM DETAILS--- IP Address: 192.168.1.29 OS Name: Windows 7 x64 OS Version: 6.01 Driver Name: megasas.sys Driver Version: 4.5.1.64 IMAGE DETAILS--- BIOS Version: 2.120.33-1197 Firmware Package Version: 12.12.0-0045 Firmware Version: 3.21.00_4.11.05.00_0x05000000 VD 0 is a RAID mirror containing the system disk. I have searched and read, but cannot find any trace of how to actually do anything about this. I tried running a scandisk but that did not find anything (as I expected, since scandisk reads the disks as exposed by the controller, right?). The MegaRAID Storage Manager does not, as far as I can see, have any options for checking or fixing physical disks. The program claims the VD is "healthy", and both disks have an error count of 0. Also a bit strange are the system details in the message... the IP address is associated with the RAS (dial-in) interface, and the OS should be Windows Server 2011 SBS. Has anyone else experienced this before? What can be done?

    Read the article

  • Repairing inconsistent pages in database

    - by Raj
    We have a SQL 2000 DB. The server crashed due to a RAID array failure. Now when we run DBCC CHECKDB, we get an error that there are 27 consistency errors in 9 pages. When we run DBCC PAGE on these pages, we get this: Msg 8939, Level 16, State 106, Line 1 Table error: Object ID 1397580017, index ID 2, page (1:8404521). Test (m_freeCnt == freeCnt) failed. Values are 2 and 19. Msg 8939, Level 16, State 108, Line 1 Table error: Object ID 1397580017, index ID 2, page (1:8404521). Test (emptySlotCnt == 0) failed. Values are 1 and 0. Since the indicated index is non-clustered and is created by a unique constraint that includes 2 columns, we tried dropping and recreating the index. This resulted in the following error: CREATE UNIQUE INDEX terminated because a duplicate key was found for index ID 2. Most significant primary key is '3280'. The statement has been terminated. However, running Select var_id,result_on from tests group by var_id,result_on having count(*)>1 returns 0 rows. Here is what we are planning to do: (1) restore a pre-crash copy of the DB and run DBCC CHECKDB; (2) if that returns clean, restore again with no recovery; (3) apply all subsequent TLOG backups; (4) stop the production app, take a tail log backup and apply that too; (5) drop the prod DB and rename the freshly restored DB to make it prod; (6) start the prod app. Could someone please punch holes in this approach, or maybe suggest a different one? What we need is minimum downtime. SQL 2000 DB, size 94 GB; the table that has corrupt pages has 460 million+ rows of data. Thanks for the help. Raj

    Read the article
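
    A hedged sketch of the restore chain described in the plan above, in SQL 2000 syntax; the database names Prod and Prod_New, the logical file names and the backup paths are all placeholders:

        -- 1. Restore the last known-good full backup without recovering.
        RESTORE DATABASE Prod_New
            FROM DISK = 'D:\backup\prod_full.bak'
            WITH MOVE 'Prod_Data' TO 'D:\data\prod_new.mdf',
                 MOVE 'Prod_Log'  TO 'D:\data\prod_new.ldf',
                 NORECOVERY;

        -- 2. Apply each subsequent transaction log backup, still NORECOVERY.
        RESTORE LOG Prod_New FROM DISK = 'D:\backup\prod_log_1.trn' WITH NORECOVERY;
        RESTORE LOG Prod_New FROM DISK = 'D:\backup\prod_log_2.trn' WITH NORECOVERY;

        -- 3. Stop the application and take the tail of the log on the damaged database...
        BACKUP LOG Prod TO DISK = 'D:\backup\prod_tail.trn' WITH NO_TRUNCATE;

        -- 4. ...apply it, then bring the new copy online.
        RESTORE LOG Prod_New FROM DISK = 'D:\backup\prod_tail.trn' WITH RECOVERY;

        -- 5. Verify before the rename/swap.
        DBCC CHECKDB ('Prod_New');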

  • Oracle transaction read-consistency ?

    - by trojanwarrior3000
    I have a problem understanding read consistency in a database (Oracle). Suppose I am the manager of a bank. A customer has taken a lock (which I don't know about) and is doing some updates. Now, after he has got the lock, I am viewing the account info of the same customer and try to do something with it. But because of read consistency I will see the data as it existed before the customer got the lock. So won't that affect the inputs I am getting and the decisions that I am going to make during that period?

    Read the article
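
    Read consistency only governs what a plain query sees; if the manager's decision has to reflect the row the customer is currently changing, the manager's session can request the current committed version and block until the other transaction finishes. A hedged Oracle sketch (the accounts table, its columns and the bind variable are hypothetical):

        -- Waits until the customer's transaction commits, then returns the
        -- committed (current) row and holds the lock while the decision is made.
        SELECT account_id, balance, status
          FROM accounts
         WHERE account_id = :acct_id
           FOR UPDATE;

        -- ... decide, UPDATE if needed, then COMMIT to release the lock.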

  • Why do I bother with RAID 10 ?

    - by GrumpyOldDBA
    Before I post anything I just want to clarify what I mean by RAID 10: sets of mirrored pairs that have been striped, as against a RAID 0 which has been mirrored. I've just had a disk failure in the data array for one of my dev servers - it's an eight disk raid 8, no real worries, replace the disk and off we go - but no - the HP engineers told me from the diagnostics (done to ensure I got the right replacement under warranty) that not only had a disk failed but I'd lost all the data...(read more)

    Read the article

  • Cache consistency & spawning a thread

    - by Dave Keck
    Background I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense. Assumptions The following pseudo-code is executed on a two-processor machine: int sharedVar = 0; myThread() { print(sharedVar); } main() { sharedVar = 1; spawnThread(myThread); sleep(-1); } main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above.) Question Strictly speaking – preferably without assuming any particular CPU – is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0. References These are the related articles and books I've been reading. (It wouldn't allow me to format these as links because I'm a new user - sorry.) Shared Memory Consistency Models: A Tutorial hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf Memory Barriers: a Hardware View for Software Hackers rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf Linux Kernel Memory Barriers kernel.org/doc/Documentation/memory-barriers.txt Computer Architecture: A Quantitative Approach amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk

    Read the article

  • maintaining consistency between a program's files and Add and Remove Programs

    - by tast usar
    Would there be any problem if the program is listed in Add and Remove Programs and in Program Files? I have some applications listed in "Add and Remove Programs" but their files have been deleted. I have also removed some programs whose files are still there, and when I try to delete them manually, I get an "Access Denied ...." error. Is there a way to fix this? Also, a little help with the quality standard ... I can't post a question.

    Read the article

  • DBCC CHECKDB (BatmanDb, REPAIR_ALLOW_DATA_LOSS) – Are you Feeling Lucky?

    - by David Totzke
    I’m currently working for a client on a PowerBuilder to WPF migration.  It’s one of those “I could tell you, but I’d have to kill you” kind of clients and the quick-lime pits are currently occupied by the EMC tech…but I’ve said too much already. At approximately 3 or 4 pm that day users of the Batman[1] application here in Gotham[1] started to experience problems accessing the application.  Batman[2] is a document management system here that also integrates with the ERP system.  Very little goes on here that doesn’t involve Batman in some way.  The errors being received seemed to point to network issues (TCP protocol error, connection forcibly closed by the remote host etc…) but the real issue was much more insidious. Connecting to the database via SSMS and performing selects on certain tables underlying the application areas that were having problems started to reveal the issue.  You couldn’t do a SELECT * FROM MyTable without it bombing and giving the same error noted above.  A run of DBCC CHECKDB revealed 14 tables with corruption.  One of the tables with issues was the Document table.  Pretty central to a “document management” system.  Information was obtained from IT that a single drive in the SAN went bad in the night.  A new drive was in place and was working fine.  The partition that held the Batman database is configured for RAID Level 5 so a single drive failure shouldn’t have caused any trouble and yet, the database is corrupted.  They do hourly incremental backups here so the first thing done was to try a restore.  A restore of the most recent backup failed so they worked backwards until they hit a good point.  This successful restore was for a backup at 3AM – a full day behind.  This time also roughly corresponds with the time the SAN started to report the drive failure.  The plot thickens… I got my hands on the output from DBCC CHECKDB and noticed a pattern.  What’s sad is that nobody that should have noticed the pattern in the DBCC output did notice.  There was a rush to do things to try and recover the data before anybody really understood what was wrong with it in the first place.  Cooler heads must prevail in these circumstances and some investigation should be done and a plan of action laid out or you could end up making things worse[3].  DBCC CHECKDB also told us that: repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB Yikes.  That means that the database is so messed up that you’re definitely going to lose some stuff when you repair it to get it back to a consistent state.  All the more reason to do a little more investigation into the problem.  Rescuing this database is preferable to having to export all of the data possible from this database into a new one.  This is a fifteen year old application with about seven hundred tables.  There are TRIGGERS everywhere not to mention the referential integrity constraints to deal with.  Only fourteen of the tables have an issue.  We have a good backup that is missing the last 24 hours of business which means we could have a “do-over” of yesterday but that’s not a very palatable option either. All of the affected tables had TEXT columns and all of the errors were about LOB data types and orphaned off-row data which basically means TEXT, IMAGE or NTEXT columns.  If we did a SELECT on an affected table and excluded those columns, we got all of the rows.  We exported that data into a separate database.  Things are looking up.  
    Working on a copy of the production database we then ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and that “fixed” everything up.   The allow data loss option will delete the bad rows.  This isn’t too horrible as we have all of those rows minus the text fields from our earlier export.  Now I could LEFT JOIN to the exported data to find the missing rows and INSERT them minus the TEXT column data. We had the restored data from the good 3AM backup that we could now JOIN to and, with fingers crossed, recover the missing TEXT column information.  We got lucky in that all of the affected rows were old and in the end we didn’t lose anything.  :O  All of the row counts along the way worked out and it looks like we dodged a major bullet here. We’ve heard back from EMC and it turns out the SAN firmware that they were running here is apparently buggy.  This thing is only a couple of months old.  Grrr…. They dispatched a technician that night to come and update it.  That explains why RAID didn’t save us. All-in-all this could have been a lot worse.  Given the root cause here, they basically won the lottery in not losing anything. Here are a few links to some helpful posts on the SQL Server Engine blog.  I love the title of the first one: Which part of 'REPAIR_ALLOW_DATA_LOSS' isn't clear? CHECKDB (Part 8): Can repair fix everything? (in fact, read the whole series) Ta da! Emergency mode repair (we didn’t have to resort to this one thank goodness)   Dave Just because I can…   [1] Names have been changed to protect the guilty. [2] I'm Batman. [3] And if I'm the coolest head in the room, you've got even bigger problems...

    Read the article
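
    For anyone in a similar spot, the recovery join described above might look roughly like this hedged sketch, where Document stands for an affected table, DocRescue for the pre-repair export (rows minus the TEXT column) and DocRestored3AM for the same table in the restored 3AM copy; all table and column names are placeholders:

        -- Rows that REPAIR_ALLOW_DATA_LOSS deleted: in the export, gone from prod.
        SELECT r.DocId
        FROM DocRescue AS r
        LEFT JOIN dbo.Document AS d ON d.DocId = r.DocId
        WHERE d.DocId IS NULL;

        -- Re-insert them, pulling the TEXT column back from the restored 3AM copy.
        INSERT INTO dbo.Document (DocId, Title, DocText)
        SELECT r.DocId, r.Title, old.DocText
        FROM DocRescue AS r
        LEFT JOIN dbo.Document AS d   ON d.DocId   = r.DocId
        JOIN DocRestored3AM    AS old ON old.DocId = r.DocId
        WHERE d.DocId IS NULL;

    Row counts before and after each step are the sanity check that nothing was lost along the way.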

  • Simulink: type consistency errors

    - by stanigator
    Using this Simulink model file as a reference, I'm trying to figure out the two following errors: I have no idea what has gone wrong with the data type consistency/conversion problems. Do you know what the error messages mean exactly in the context of a model? It would be great to get an interpretation of the problem to solve it. Thanks in advance.

    Read the article

  • Consistency vs Design Guidelines

    - by Adrian Faciu
    Let's say that you get involved in the development of a large project that has already been in development for a long period (more than one year). The project follows some of the current design guidelines, but also has a few different ones that are currently discouraged (mostly naming guidelines). Supposing that you can't/aren't allowed to change the whole project: what should be more important, consistency (following the existing conventions and defying the current guidelines) or following the guidelines (creating differences between modules of the same project)? Thanks.

    Read the article

  • Java Thread - Memory consistency errors

    - by Yatendra Goel
    I was reading Sun's tutorial on concurrency, but I couldn't understand exactly what memory consistency errors are. I googled the term but didn't find any helpful tutorial or article about it. I know that this question is a subjective one, so you can provide me links to articles on the above topic. It would be great if you could explain it with a simple example.

    Read the article

  • How to analyze 'dbcc memorystatus' result in SQL Server 2008

    - by envykok
    Currently I am facing a SQL memory pressure issue. I have run 'dbcc memorystatus'; here is part of my result:

        Memory Manager (KB): VM Reserved 23617160, VM Committed 14818444, Locked Pages Allocated 0, Reserved Memory 1024, Reserved Memory In Use 0
        Memory node Id = 0 (KB): VM Reserved 23613512, VM Committed 14814908, Locked Pages Allocated 0, MultiPage Allocator 387400, SinglePage Allocator 3265000
        MEMORYCLERK_SQLBUFFERPOOL (node 0) (KB): VM Reserved 16809984, VM Committed 14184208, Locked Pages Allocated 0, SM Reserved 0, SM Committed 0, SinglePage Allocator 0, MultiPage Allocator 408
        MEMORYCLERK_SQLCLR (node 0) (KB): VM Reserved 6311612, VM Committed 141616, Locked Pages Allocated 0, SM Reserved 0, SM Committed 0, SinglePage Allocator 1456, MultiPage Allocator 20144
        CACHESTORE_SQLCP (node 0) (KB): VM Reserved 0, VM Committed 0, Locked Pages Allocated 0, SM Reserved 0, SM Committed 0, SinglePage Allocator 3101784, MultiPage Allocator 300328
        Buffer Pool (Value): Committed 1742946, Target 1742946, Database 1333883, Dirty 940, In IO 1, Latched 18, Free 89, Stolen 408974, Reserved 2080, Visible 1742946, Stolen Potential 1579938, Limiting Factor 13, Last OOM Factor 0, Page Life Expectancy 5463
        Process/System Counts (Value): Available Physical Memory 258572288, Available Virtual Memory 8771398631424, Available Paging File 16030617600, Working Set 15225597952, Percent of Committed Memory in WS 100, Page Faults 305556823, System physical memory high 1, System physical memory low 0, Process physical memory low 0, Process virtual memory low 0
        Procedure Cache (Value): TotalProcs 11382, TotalPages 430160, InUsePages 28

    Can you help me analyze this result? Have a lot of execution plans been cached, causing the memory issue, or is there another reason?

    Read the article
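
    The roughly 3 GB of single-page allocations under CACHESTORE_SQLCP in the output above does point at ad-hoc plan cache growth. Two hedged follow-up queries that are often used to confirm this on SQL Server 2008 (DMV column names as of that version):

        -- Which memory clerks hold the most cache/stolen memory?
        SELECT TOP (10) [type], name,
               SUM(single_pages_kb + multi_pages_kb) AS cache_kb
        FROM sys.dm_os_memory_clerks
        GROUP BY [type], name
        ORDER BY cache_kb DESC;

        -- How much of the plan cache is single-use ad-hoc plans?
        SELECT objtype, cacheobjtype,
               COUNT(*) AS plans,
               SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
        FROM sys.dm_exec_cached_plans
        WHERE usecounts = 1
        GROUP BY objtype, cacheobjtype
        ORDER BY size_mb DESC;

    If single-use ad-hoc plans dominate, the 'optimize for ad hoc workloads' sp_configure option is the usual first mitigation on SQL Server 2008.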

  • consistency of Trigger Procedure (before row trigger) Postgresql

    - by elgcom
    Using PostgreSQL. I am trying to use a TRIGGER procedure to make some consistency checks on INSERT. The question is whether "BEFORE INSERT FOR EACH ROW" can make sure each row to insert is "checked" and "inserted" one after another (check for new row1 - insert row1 - check for new row2 - insert row2), or do I need an extra lock on the table to survive concurrent inserts?

        -- unexpired product name is unique.
        CREATE TABLE product (
            "name"    VARCHAR(100) NOT NULL,
            "expired" BOOLEAN NOT NULL
        );

        CREATE OR REPLACE FUNCTION check_consistency() RETURNS TRIGGER AS $$
        BEGIN
            IF EXISTS (SELECT * FROM product WHERE name=NEW.name AND expired='false') THEN
                RAISE EXCEPTION 'duplicated!!!';
            END IF;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER trigger_check_consistency BEFORE INSERT ON product
            FOR EACH ROW EXECUTE PROCEDURE check_consistency();

        INSERT INTO product VALUES ('prod1', true);
        INSERT INTO product VALUES ('prod1', false);
        INSERT INTO product VALUES ('prod1', false);  -- exception!

    This is OK:

        name | expired
        =====+========
        p1   | true
        p1   | true
        p1   | false

    This is not OK:

        name | expired
        =====+========
        p1   | true
        p1   | false
        p1   | false

    Or maybe I should ask: how can I use a trigger to implement a "primary key"- or "unique"-like SQL constraint?

    Read the article
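
    A BEFORE trigger only sees rows already committed (or inserted by its own transaction), so two concurrent sessions can both pass the EXISTS check and both insert. Rather than taking an explicit table lock, the usual PostgreSQL fix is to let a partial unique index enforce the rule; a minimal sketch based on the table in the question:

        -- Only rows that are not expired participate in the uniqueness check.
        CREATE UNIQUE INDEX product_name_unexpired_uq
            ON product (name)
            WHERE expired = false;

        INSERT INTO product VALUES ('prod1', true);    -- ok
        INSERT INTO product VALUES ('prod1', false);   -- ok
        INSERT INTO product VALUES ('prod1', false);   -- ERROR: duplicate key value

    The index is enforced atomically at insert time, so no trigger or explicit lock is needed for this particular constraint.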

  • Differences in documentation for sys.dm_exec_requests

    - by AaronBertrand
    I've already complained about this on Connect ( see #641790 ), but I just wanted to point out that if you're trying to make sense of the sys.dm_exec_requests document and what it lists as the commands supported by the percent_complete column, you should check which version of the documentation you're reading. I noticed the following discrepancies. I can't explain why certain operations are missing, except that the Denali topic was generated from the 2008 topic (or maybe from the 2008 R2 topic before...(read more)

    Read the article
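
    Whatever a given version of the documentation lists, the column itself is easy to sample directly; a hedged sketch of a typical progress query for long-running BACKUP, RESTORE or DBCC commands:

        SELECT r.session_id,
               r.command,
               r.percent_complete,
               r.start_time,
               DATEADD(second, r.estimated_completion_time / 1000, GETDATE()) AS estimated_finish
        FROM sys.dm_exec_requests AS r
        WHERE r.percent_complete > 0;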
