Search Results

Search found 3111 results on 125 pages for 'labeled statements'.


  • System in low graphics, deleted Linux, grub rescue, can't access Windows

    - by First timer
    So I'm pretty new to Ubuntu, but I managed to install it with no big problems on both my desktop and netbook. When I installed it on my brother's netbook, everything went horribly wrong, and now I fear the system is close to beyond repair. The problem was first that it said it did not have any space left (which seemed ridiculous, since it had a lot). Then Ubuntu began booting into a "System is running in low graphics mode" error, which I tried to fix using all the tips I could find on here, but nothing helped. I think the graphics error and the lack of space might have been related, but I can't be sure.

    Finally I gave up repairing Ubuntu and went for a reinstall. I shouldn't have done that! I read that I should simply boot Ubuntu from a live USB and use GParted to delete the Linux partitions, so I did, and rebooted accordingly. Next I was to install Ubuntu, but now I am only given the option to wipe the whole disk for Ubuntu, not to install alongside Windows 7.

    If I open GParted I can still see the NTFS partitions that hold Windows 7 (there are two: one labeled RECOVERY and another labeled OS and boot), so why can't I access them? By the way, the OS and boot partition has a little red mark warning that one cluster is referenced multiple times; I don't know what that means. If I boot without the live USB I am sent straight to a grub rescue prompt, a black screen that follows no orders.

    Please, I know the easiest option might be to simply wipe the whole thing clean, but there are important files and programs on Windows 7. Is there a way to just get back into Windows? It is a Dell Inspiron 1018 Mini netbook, so I have no CD drive and no Windows 7 installation CD.

    Read the article

  • How to read reduce/shift conflicts in LR(1) DFA?

    - by greenoldman
    I am reading an explanation (the awesome "Parsing Techniques" by D. Grune and C.J.H. Jacobs; p. 293 in the 2nd edition) and I have moved forward from my last question: How to get lookahead symbol when constructing LR(1) NFA for parser? Now I have a "problem" (maybe not a problem, but rather a need for confirmation from some more knowledgeable people). The authors present a state in LR(0) which has a reduce/shift conflict. Then they build the DFA for LR(1) for the same grammar, and now they say it does not have a conflict (lookaheads at the end):

        S -> E .       eof
        E -> E . - T   eof
        E -> E . - T   -

    There is an edge from this state labeled -, but none labeled eof. The authors say that on eof there will be a reduce, and on - there will be a shift. However, eof appears for a shift item as well (as lookahead). So my personal understanding of the LR(1) DFA is this: you can drop the lookaheads for shift items, because they serve no purpose now (shifts rely on input, not on lookaheads), and after that, remove duplicates:

        S -> E .       eof
        E -> E . - T

    The lookahead for a reduce really serves as input, because at this stage (all required input has been read) it is the incoming symbol right now. For shifts, the input symbols are on the edges. So my question is this: am I actually right about dropping the lookaheads for shifts (after fully constructing the DFA)?

    Read the article

  • Understanding exim configuration files

    - by bobobobo
    So, I want to understand what's going on with this Exim configuration directory. In /etc/exim4, there's:

    * exim4.conf.template
    * update-exim4.conf.conf
    * conf.d

    The conf.d has a mess of directories and files, and inside each are a bunch of if statements which I find really confusing. For example:

        maildir_home:
          debug_print = "T: maildir_home for $local_part@$domain"
          driver = appendfile
          .ifdef MAILDIR_HOME_MAILDIR_LOCATION
          directory = MAILDIR_HOME_MAILDIR_LOCATION
          .else
          directory = $home/Maildir
          .endif
          .ifdef MAILDIR_HOME_CREATE_DIRECTORY
          create_directory
          .endif
          .ifdef MAILDIR_HOME_CREATE_FILE

    My questions are: where do the CAPS VARIABLES get defined? How can I change them? And why are there so many if statements in these configuration files?
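
    For orientation, those CAPS names are Exim macros rather than runtime variables: plain textual substitutions expanded when the configuration file is read, which is why .ifdef can test whether one was set at all. A minimal sketch of setting one yourself, assuming Debian's split-config layout, where local macro definitions conventionally go in a file such as /etc/exim4/exim4.conf.localmacros:

        # hypothetical macro definition; the .ifdef in maildir_home then takes the first branch
        MAILDIR_HOME_MAILDIR_LOCATION = /var/mail/${local_part}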

    Read the article

  • Is there a log file analyzer for log4j files?

    - by Juha Syrjälä
    I am looking for some kind of analyzer tool for log files generated by log4j. I am looking for something more advanced than grep. What are you using for log file analysis? I am looking for the following kinds of features:

    * The tool should tell me how many times a given log statement or stack trace has occurred, preferably with support for some kind of pattern (e.g. the number of log statements matching 'User [a-z]* logged in').
    * Breakdowns by log level (how many INFO, DEBUG lines) and by the class that initiated the log message would be nice.
    * Breakdown by date (how many log statements in a given time period).
    * What log lines occur commonly together?
    * Support for several files, since I am using log rolling.
    * Hot spot analysis: find whether there is some time period with an unusually high number of log statements.
    * Either command-line or GUI is fine.
    * Open source is preferred, but I am also interested in commercial offerings.

    My log4j configuration uses org.apache.log4j.PatternLayout with the pattern %d %p %c - %m%n, but that could be adapted for an analyzer tool.
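
    In the meantime, standard Unix tools can approximate the per-level breakdown. A rough sketch, assuming the %d %p %c - %m%n layout above (the default %d format prints date and time as two fields, which makes the level the third whitespace-separated field):

        # count log lines per level across all rolled files
        awk '{ print $3 }' app.log* | sort | uniq -c | sort -rn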

    Read the article

  • Dynamic SQL to generate column names?

    - by Ben McCormack
    I have a query where I'm trying to pivot row values into column names, and currently I'm using SUM(Case...) As 'ColumnName' statements, like so:

        SELECT SKU1,
            SUM(Case When Sku2=157 Then Quantity Else 0 End) As '157',
            SUM(Case When Sku2=158 Then Quantity Else 0 End) As '158',
            SUM(Case When Sku2=167 Then Quantity Else 0 End) As '167'
        FROM OrderDetailDeliveryReview
        Group By OrderShipToID, DeliveryDate, SKU1

    The above query works great and gives me exactly what I need. However, I'm writing out the SUM(Case... statements by hand, based on the results of the following query:

        Select Distinct Sku2 From OrderDetailDeliveryReview

    Is there a way, using T-SQL inside a stored procedure, that I can dynamically generate the SUM(Case... statements from that query and then execute the resulting SQL code?
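
    A minimal sketch of the usual approach, assuming the table and column names above: concatenate one SUM(CASE...) per distinct Sku2 into a variable, then EXEC it. (The SELECT @var = @var + ... concatenation trick predates STRING_AGG and is not formally documented, but it was the standard idiom of that era.)

        -- build the pivot columns from the distinct Sku2 values
        DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX)
        SET @cols = ''

        SELECT @cols = @cols +
               ', SUM(CASE WHEN Sku2 = ' + CAST(d.Sku2 AS VARCHAR(20)) +
               ' THEN Quantity ELSE 0 END) AS [' + CAST(d.Sku2 AS VARCHAR(20)) + ']'
        FROM (SELECT DISTINCT Sku2 FROM OrderDetailDeliveryReview) AS d
        ORDER BY d.Sku2

        SET @sql = 'SELECT SKU1' + @cols +
                   ' FROM OrderDetailDeliveryReview' +
                   ' GROUP BY OrderShipToID, DeliveryDate, SKU1'
        EXEC (@sql)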

    Read the article

  • Python if statement efficiency

    - by Dennis
    A friend (a fellow low-skill-level recreational Python scripter) asked me to look over some code. I noticed that he had 7 separate statements that basically said:

        if (a and b and c):
            # do something

    The statements a, b, c all tested the equality (or lack of it) of set values. As I looked at it, I found that because of the nature of the tests, I could rewrite the whole logic block into 2 branches that never went more than 3 deep and rarely got past the first level (making the rarest occurrence test out first):

        if a:
            if b:
                if c:
                    ...
            else:
                if c:
                    ...
        else:
            if b:
                if c:
                    ...
            else:
                if c:
                    ...

    To me, it logically seems like it should be faster if you are making fewer, simpler tests that fail faster and move on. My real questions are: 1) When I say if and else, should the if be true, does the else get completely ignored? 2) In theory, would if (a and b and c) take as much time as the three separate if statements would?
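
    For what it's worth, both questions can be checked empirically. A small sketch (hypothetical predicates, nothing beyond the standard language) showing that an `and` chain short-circuits, i.e. the later tests never run once an earlier one is false, and that a taken branch skips the other entirely:

        def a():
            print("a evaluated")
            return False

        def b():
            print("b evaluated")
            return True

        if a() and b():           # prints only "a evaluated": b() is never called
            print("both true")
        else:
            print("else branch")  # runs because the if condition was false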

    Read the article

  • Batch View updates

    - by Terry
    I want to load a bunch of view definitions into SQL Server 2005 and 2008. I am using IF/ELSE logic to dynamically build Create or Alter statements that I then EXEC. This works fine. However, unless I get the order of the statements correct, I get errors if a view is dependent on another view that would be created by a later statement. Is there a way to turn off the validation of the SQL statements until after they have all been run? It seems like the worst of both worlds: SQL Server does late binding, so it won't propagate changes to tables and views, but you can't create a view without all the pieces being in place.
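
    There is no switch to defer that validation, but one workable pattern, sketched below with a hypothetical holding table for the generated statements, is to keep re-running the scripts that failed until a full pass fixes nothing more, so dependency order stops mattering:

        -- hypothetical table: dbo.ViewScripts(id INT, body NVARCHAR(MAX), succeeded BIT)
        DECLARE @id INT, @body NVARCHAR(MAX)
        DECLARE @passes INT, @failures INT
        SET @passes = 10   -- upper bound on the dependency chain depth

        WHILE @passes > 0
        BEGIN
            SET @failures = 0
            DECLARE curScripts CURSOR LOCAL FAST_FORWARD FOR
                SELECT id, body FROM dbo.ViewScripts WHERE succeeded = 0
            OPEN curScripts
            FETCH NEXT FROM curScripts INTO @id, @body
            WHILE @@FETCH_STATUS = 0
            BEGIN
                BEGIN TRY
                    EXEC (@body)   -- runs in its own batch, so CREATE VIEW is legal
                    UPDATE dbo.ViewScripts SET succeeded = 1 WHERE id = @id
                END TRY
                BEGIN CATCH
                    SET @failures = @failures + 1   -- dependency missing; retry next pass
                END CATCH
                FETCH NEXT FROM curScripts INTO @id, @body
            END
            CLOSE curScripts
            DEALLOCATE curScripts
            IF @failures = 0 BREAK   -- everything created; done
            SET @passes = @passes - 1
        END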

    Read the article

  • Does Apache Derby natively support script delimiters?

    - by Steel Plume
    Hello, I know that I could separate all the statements by pre-cutting them before execution, but I have a case in which I would like to run a series of statements in one execution. Currently I receive the following error:

        Caused by: java.sql.SQLException: Syntax Error: Encountered ";" at line 2, column 33.
            at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
            at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
            ... 17 more
        Caused by: ERROR 42X01: Syntax Error: Encountered ";" at line 2, column 33.

    Also, do you know whether Derby supports conditional statements like pgsql's case when ... else? Thanks
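
    On the script question: Derby's JDBC driver executes one statement per call, but the ij tool that ships with Derby does understand semicolon-delimited scripts, so one option is to run the script through ij instead of a single Statement. A sketch, with jar locations and file names as placeholders (the script would start with a line like connect 'jdbc:derby:mydb';):

        java -cp derby.jar:derbytools.jar org.apache.derby.tools.ij script.sql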

    Read the article

  • Oracle 10g - Best way to escape single quotes

    - by satynos
    I have to generate some update statements based off a table in our database. I created the following script, which generates the update statements I need. But when I try to run those scripts, I get errors pertaining to unescaped single quotes in the content, and to &B and &T characters, which have special meaning in Oracle. I took care of the &B and &T problem with SET DEFINE OFF. What's the best way to escape single quotes within the content?

        DECLARE
            CURSOR C1 IS SELECT * FROM EMPLOYEES;
        BEGIN
            FOR I IN C1 LOOP
                DBMS_OUTPUT.PUT_LINE('UPDATE EMPLOYEES SET FIRST_NAME = ''' || I.FIRST_NAME ||
                                     ''', LAST_NAME = ''' || I.LAST_NAME ||
                                     ''', DOB = ''' || I.DOB ||
                                     ''' WHERE EMPLOYEE_ID = ''' || I.EMPLOYEE_ID || ''';');
            END LOOP;
        END;

    Here, if the first_name or last_name contains single quotes, the generated update statements break. What's the best way to escape those single quotes within first_name and last_name?
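
    Two common fixes, sketched as drop-in replacements for the PUT_LINE call above (shortened to one column): double any embedded quote with REPLACE while building the literal, or emit q-quoted literals (10g and later) so embedded single quotes need no escaping at all.

        -- option 1: double any embedded quote inside the value
        DBMS_OUTPUT.PUT_LINE('UPDATE EMPLOYEES SET FIRST_NAME = ''' ||
                             REPLACE(I.FIRST_NAME, '''', '''''') ||
                             ''' WHERE EMPLOYEE_ID = ' || I.EMPLOYEE_ID || ';');

        -- option 2: emit a q-quoted literal; safe unless a value contains the ]' terminator
        DBMS_OUTPUT.PUT_LINE('UPDATE EMPLOYEES SET FIRST_NAME = q''[' ||
                             I.FIRST_NAME ||
                             ']'' WHERE EMPLOYEE_ID = ' || I.EMPLOYEE_ID || ';');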

    Read the article

  • Is there an Oracle equivalent of mysqldump?

    - by Rakhitha
    Is there a way to dump the contents of an Oracle table into a file formatted as INSERT statements? I can't use oradump, as it is under the GPL. I will be running this from a Perl CGI script. I am looking for something that dumps data directly from the Oracle server using a single command. Running a SELECT and creating the INSERT statements in Perl is too slow, as there will be a lot of data. I know I can probably do this using the spool command and a PL/SQL block on the server side, but is there a built-in command to do this, instead of formatting the INSERT statements myself?

    Read the article

  • VS2012 equivalent of Eclipse's default Ctrl-Shift-O?

    - by x3chaos
    I'm used to Java (in Eclipse), which has its import statements, but I'm writing a DLL in Visual C# (in Visual Studio 2012), which has its using statements. I'm used to Eclipse's default keyboard shortcut Ctrl-Shift-O, which updates the list of import statements in the Java perspective, deleting unused imports and adding necessary imports found on the build path. Is there an equivalent operation in VS2012 with VC#? I've just been selecting the word, opening the Office-style popup, and adding the "using" statement that way, but it conflicts with my workflow (read: I'm lazy and I like having my shortcuts).

    Read the article

  • Tool used to retrieve code metrics in xUnit Test Patterns?

    - by leeand00
    I'm reading xUnit Test Patterns by Gerard Meszaros. On one of the pages he refers to some software metrics:

        While the need to wrap lines to keep them at 65 characters makes this code look even longer than it really is, it is still unnecessarily long. It contains 25 executable statements including initialized declarations, 6 lines of control statements, 4 in-line comments, and 2 lines to declare the test method—giving a total of 37 lines of unwrapped source code.

    Short of counting the statements by hand, does anybody have any idea whether he used a particular tool to calculate these metrics? (If you have any suggestions for tools that count similar metrics, I'm looking for one that works on Java, JavaScript and C++.) Thanks!

    Read the article

  • Is there a way to rollback and exit a psql script on error?

    - by metanaito
    I have a psql script that looks like this:

        -- first set of statements
        begin
            sql statement;
            sql statement;
            sql statement;
        exception when others then
            rollback
            write some output
            (here I want to exit the entire script and not continue on to the next set of statements)
        end
        /

        -- another set of statements
        begin
            sql statement;
            sql statement;
            sql statement;
        exception when others then
            rollback
            write some output
            (here I want to exit the entire script and not continue)
        end
        /

        ... and so on

    Is it possible to exit the script and stop processing the rest of the script?
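
    If the script really is being fed through psql, the client itself can do this; the sketch below uses real psql options, though the blocks above look more PL/SQL-flavored, so this assumes the statements translate. Put this at the top of the script:

        \set ON_ERROR_STOP on     -- abort the script at the first error

    or drive it from the shell:

        psql --set ON_ERROR_STOP=1 --single-transaction -f script.sql

    With --single-transaction, the whole file runs as one transaction, so an early failure also rolls back everything executed before it.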

    Read the article

  • How to retrieve MySQL records as INSERT statements?

    - by Aglystas
    I'm trying to come up with the best method of synchronizing particular rows of 2 different database tables. So, for example, there are 2 product tables in different databases, as such:

        Origin database:
        product {
            merchant_id,
            product_id,
            ... additional fields
        }

        Destination database:
        product {
            merchant_id,
            product_id,
            ... additional fields
        }

    So, the database schema is the same for both. However, I'm looking to select the records with a particular merchant_id, remove all records from the destination table that have that merchant_id, and replace those records with the records from the origin database with the same merchant_id. My first thought was using mysqldump, parsing out the CREATE TABLE statements, and only running the INSERT statements. That seems like a pain, though. So I was wondering if there is a better technique to do this. I would think MySQL has some method of creating INSERT statements as output from a SELECT statement, so you can define how to insert specific record information into a new db. Any help would be appreciated, thank you much.
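
    mysqldump can already be narrowed down far enough that no parsing is needed; a sketch with hypothetical database names (--no-create-info suppresses the CREATE TABLE, --where restricts the rows):

        # delete the merchant's rows at the destination, then replay the origin's rows
        mysql dest_db -e "DELETE FROM product WHERE merchant_id = 42"
        mysqldump --no-create-info --where="merchant_id = 42" origin_db product | mysql dest_db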

    Read the article

  • Insert into several inheritance tables with OUTPUT - SQL Server 2005

    - by csetzkorn
    Hi, I have a bunch of items. For simplicity reasons, it's a flat table with unique names, seeded via bulk insert:

        create table #items (
            ItemName NVARCHAR(255)
        )

    The database has this structure:

        create table Statements (
            Id INT IDENTITY NOT NULL,
            Version INT not null,
            FurtherDetails varchar(max) null,
            ProposalDateTime DATETIME null,
            UpdateDateTime DATETIME null,
            ProposerFk INT null,
            UpdaterFk INT null,
            primary key (Id)
        )

        create table Item (
            StatementFk INT not null,
            ItemName NVARCHAR(255) null,
            primary key (StatementFk)
        )

    Here Item is a child of Statement (inheritance). I would like to insert the items in #items using a set-based approach (avoiding triggers and loops). Can this be achieved with OUTPUT in my scenario? A loop-based approach is just too slow, where I use something like this:

        insert into Statements
            (Version, FurtherDetails, ProposalDateTime, UpdateDateTime, ProposerFk, UpdaterFk)
        VALUES
            (1, null, getdate(), getdate(), @user_id, @user_id)

    etc. This is a start for the OUTPUT-based approach, but I am not sure whether this would work in my case, as ItemName is only inserted into Item:

        insert into Statements
            (Version, FurtherDetails, ProposalDateTime, UpdateDateTime, ProposerFk, UpdaterFk)
        output inserted.Id ... ???

    Thanks. Best wishes, Christian
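
    The catch with a plain INSERT is that its OUTPUT clause can only reference the inserted columns, and Statements never sees ItemName. On SQL Server 2008 and later (not 2005, which the title mentions), the standard workaround is a MERGE whose ON clause never matches, because MERGE's OUTPUT may also reference source columns. A sketch against the tables above:

        MERGE INTO Statements AS t
        USING #items AS s
           ON 1 = 0                     -- never matches, so every source row is an insert
        WHEN NOT MATCHED THEN
            INSERT (Version, FurtherDetails, ProposalDateTime, UpdateDateTime, ProposerFk, UpdaterFk)
            VALUES (1, null, getdate(), getdate(), @user_id, @user_id)
        OUTPUT inserted.Id, s.ItemName
          INTO Item (StatementFk, ItemName);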

    Read the article

  • PHP: prepared statement, IF statement help needed

    - by JGreig
    I have the following code:

        $sql = "SELECT name, address, city FROM tableA, tableB WHERE tableA.id = tableB.id";

        if (isset($price)) {
            $sql = $sql . ' AND price = :price ';
        }
        if (isset($sqft)) {
            $sql = $sql . ' AND sqft >= :sqft ';
        }
        if (isset($bedrooms)) {
            $sql = $sql . ' AND bedrooms >= :bedrooms ';
        }

        $stmt = $dbh->prepare($sql);

        if (isset($price)) {
            $stmt->bindParam(':price', $price);
        }
        if (isset($sqft)) {
            $stmt->bindParam(':sqft', $sqft);
        }
        if (isset($bedrooms)) {
            $stmt->bindParam(':bedrooms', $bedrooms);
        }

        $stmt->execute();
        $result_set = $stmt->fetchAll(PDO::FETCH_ASSOC);

    What I notice is the redundant multiple IF statements I have. Question: is there any way to clean up my code so that I don't have these multiple IF statements for the prepared statement?
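
    One common cleanup, sketched with the same filters: describe each optional clause once in an array, build the SQL and the parameter list in a single loop, and pass the parameters straight to execute() instead of binding them one by one.

        $filters = array(
            'price'    => 'AND price = :price',
            'sqft'     => 'AND sqft >= :sqft',
            'bedrooms' => 'AND bedrooms >= :bedrooms',
        );

        $params = array();
        foreach ($filters as $name => $clause) {
            if (isset($$name)) {                 // e.g. $price, $sqft, $bedrooms
                $sql .= ' ' . $clause;
                $params[':' . $name] = $$name;
            }
        }

        $stmt = $dbh->prepare($sql);
        $stmt->execute($params);                 // execute() accepts the bound values directly
        $result_set = $stmt->fetchAll(PDO::FETCH_ASSOC);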

    Read the article

  • MySQL stored routine vs. MySQL alternative?

    - by user522962
    We are using a MySQL database with about 150,000 records (names) total. Our searches on the 'names' field are done through an autocomplete function in PHP. We have the table indexed, but the searching still feels a bit sluggish (a few full seconds, versus something like Google Finance with near-instant response). We came up with 2 possibilities, but wanted to get more insight:

    * Can we create a bunch (many thousands or more) of stored procedures to speed up searches, or will creating that many stored procedures bog down the db?
    * Is there a faster alternative to MySQL for SELECT statements? (Speed on inserting and updating rows isn't too important, so we can sacrifice that, if necessary.) I've vaguely heard of BigTable and others that don't support JOIN statements... and we need JOIN statements for some of our other queries.

    thx
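
    Before reaching for either option, it's worth checking whether the autocomplete query can use the index at all. A sketch with hypothetical table and column names: a B-tree index on the name column only helps searches anchored at the start of the string, so the two queries below perform very differently even though both tables are "indexed".

        -- can use the index: the pattern is anchored at the start
        SELECT name FROM people WHERE name LIKE 'smi%' LIMIT 10;

        -- cannot use the index: a leading wildcard forces a full scan
        SELECT name FROM people WHERE name LIKE '%smi%' LIMIT 10;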

    Read the article

  • NHibernate will insert but not update after move to a shared host running MySQL

    - by Andy LifeBrixx
    Hi, I have a site running MVC and NHibernate (not Fluent), using the standard session-per-request pattern in an HTTP module. It runs fine locally (also with MySQL), but after a move to a hosting provider, no update statements are being issued. I can insert but not update, and no exceptions are raised. I have the 'show_sql' option switched on, which locally shows the update statements being issued, but on the server no update statements are logged. I don't think NHProf is an option for me, as I can only run ASP.NET apps on my shared server. Are there any other methods of diagnosing NHibernate issues like this? Has anyone had a similar issue? Cheers, A

    Read the article

  • Generic SQL builder .NET

    - by Patrick
    I'm looking for a way to write an SQL statement in C# targeting different providers. A typical example of differing SQL syntax is LIMIT in PostgreSQL vs. TOP in MSSQL. Is the only way to handle SQL syntax like the two above to write if statements depending on which provider the user selects, or to use try/catch statements as flow control (LIMIT didn't work, I'll try TOP instead)? I've seen the LINQ Take method, but I'm wondering if one can do this without LINQ. In other words, does C# have some generic SQL provider class that I have failed to find that can be used?
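
    As far as I know, the ADO.NET provider model abstracts connections and commands, not SQL text, so there is no built-in class that rewrites raw SQL between dialects. The usual hand-rolled middle ground is to isolate just the differing fragment behind the provider name; a sketch with a hypothetical helper, keyed on the invariant names used by DbProviderFactories:

        // hypothetical dialect helper: only the row-limiting fragment differs per provider
        static string SelectTopN(string providerInvariantName, string table, int n)
        {
            switch (providerInvariantName)
            {
                case "Npgsql":                      // PostgreSQL
                    return string.Format("SELECT * FROM {0} LIMIT {1}", table, n);
                case "System.Data.SqlClient":       // SQL Server
                    return string.Format("SELECT TOP {1} * FROM {0}", table, n);
                default:
                    throw new NotSupportedException(providerInvariantName);
            }
        }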

    Read the article

  • Array or array element in C?

    - by user3646717
    I'm reading a book about C programming, and I'm not sure whether there is an error in the book or not. It's about arrays, and the book gives an example array (shown as a figure in the book). Then it says:

        The following statement sets all the elements in row 2 of array a to zero:

            for (column = 0; column <= 3; column++)
                a[2][column] = 0;

        The preceding for statement is equivalent to the assignment statements:

            a[2][0] = 0;
            a[2][1] = 0;
            a[2][2] = 0;
            a[2][3] = 0;

    Shouldn't it say "The following statement sets all the elements in row 1 to zero"? Because if I say a[3] I am talking about row 2, if I say a[2] I am talking about row 1, and if I say a[1] I am talking about row 0.
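
    The confusion goes away once the rows are printed with their indices: C indexing is zero-based, so in the book's numbering a[0] is row 0 and a[2] is row 2 (the third row); there is no off-by-one in the text. A small check, assuming the book's array is something like int a[3][4]:

        #include <stdio.h>

        int main(void)
        {
            int a[3][4] = {0};   /* rows are indexed 0, 1, 2 */

            for (int row = 0; row < 3; row++)
                printf("a[%d] is row %d\n", row, row);   /* a[2] is row 2, not row 1 */

            return 0;
        }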

    Read the article

  • SQL03070: This statement is not recognized in this context

    - by prash
    Recently I have started working with VS2010 and Fx4. There have been various challenges. We also introduced a new Database Project in our solution, and found this error. The reason for the error is that the project system expects the stored procedure as a CREATE statement only. The additional statements to drop the object if it already exists are not necessary within the project system; project deployment takes care of detecting whether the sproc already exists and whether it needs to be updated. To resolve this error you can either:

    * simply remove the additional statements other than your CREATE (for the SP, function, etc.), or
    * exclude the file from the build: right-click the file in Solution Explorer, then Properties > Build Action > Not in Build.

    Read the article

  • SQL Developer: Why Do You Require Semicolons When Executing SQL in the Worksheet?

    - by thatjeffsmith
    There are many database tools out there that support Oracle Database. Oracle SQL Developer just happens to be the one that is produced and shipped by the same folks that bring you the database product. Several other 3rd-party tools out there allow you to have a collection of SQL statements in their editor and execute them without requiring a statement delimiter (usually a semicolon). Let's look at a quick example:

        select * from scott.emp

        select * from hr.employees

        delete from HR_COPY.BEER
        where HR_COPY.BEER.STATE like '%West Virginia%'

    In some tools, you can simply place your cursor on, say, the 2nd statement and ask to execute that statement. The tool assumes that the blank line between it and the next statement, a DELETE, serves as a statement delimiter. This is not bad in and of itself. However, it is very important to understand how your tools work. If you were to try the same trick by running the delete statement, it would empty my entire BEER table instead of just trimming out the breweries from my home state.

    SQL Developer only executes what you ask it to execute. You can paste this same code into SQL Developer and run it without problems and without having to add semicolons to your statements. Highlight what you want executed, and hit Ctrl-Enter.

    If you don't highlight the text, here's what you'll see: [screenshot] See the statement at the cursor vs. what SQL Developer actually executed? The parser looks for a query and keeps going until the statement is terminated with a semicolon, UNLESS it's highlighted; then it assumes you only want to execute what is highlighted. In both cases you are being explicit about what is sent to the database.

    Again, there's not necessarily a 'right' or 'wrong' debate here. What you need to be aware of is the differences, and to learn new workflows if you are moving from other database tools to Oracle SQL Developer. I say, when in doubt, back away from the tool, especially if you're in production.

    Oh, and to answer the original question... because we're trying to emulate SQL*Plus behavior. You end statements in SQL*Plus with delimiters, and the default delimiter is a semicolon.

    Read the article

  • Commit in SQL

    - by PRajkumar
    SQL Transaction Control Language (TCL) Commands: COMMIT (Commit Transaction)

    As SQL users, we use transaction control language very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. The COMMIT statement also erases all savepoints in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.

    Until you commit a transaction:

    * You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
    * You can roll back (undo) any changes made during the transaction with the ROLLBACK statement.

    Note: Most people think that when you type COMMIT, the changes you have made are written to the data files, but this is wrong. When you type COMMIT, you are saying that your job has been completed, and the Oracle engine verifies that the transaction has achieved consistency. When it finds everything OK, it sends the commit confirmation to the user from the log buffer, not from the data buffer; after writing the changes to the log buffer, it leaves the data buffer to write the data to the data files later. This is how it works.

    Before a transaction that modifies data is committed, the following has occurred:

    * Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction.
    * Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before the transaction is committed.
    * The changes have been made to the database buffers of the SGA. These changes may go to disk before the transaction is committed.

    Note: The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, some time after the transaction commits.

    When a transaction is committed, the following occurs:

    1. The internal transaction table for the associated undo tablespace records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table.
    2. The log writer process (LGWR) writes the redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction.
    3. Oracle releases the locks held on rows and tables.
    4. Oracle marks the transaction complete.

    Note: The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency, application developers can specify that redo be written asynchronously, so that transactions do not need to wait for the redo to be on disk.

    The syntax of the COMMIT statement is:

        COMMIT [WORK] [COMMENT 'your comment'];

    * WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent. Example, committing an insert:

        INSERT INTO table_name VALUES (val1, val2);
        COMMIT WORK;

    * COMMENT is also optional. This clause is supported for backward compatibility; Oracle recommends that you use named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction. Example, committing the current transaction and associating a comment with it:

        COMMIT COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637';

    * WRITE clause. Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, eliminating the wait for an I/O to the redo log. Use it to improve response time in environments with stringent response time requirements where the following conditions apply: the volume of update transactions is large, requiring that the redo log be written to disk frequently; the application can tolerate the loss of an asynchronously committed transaction; and the latency contributed by waiting for the redo log write contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order. Example, committing the same insert operation while instructing the database to buffer the change to the redo log, without initiating disk I/O:

        COMMIT WRITE BATCH;

      Note: If you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user.

      WAIT | NOWAIT: Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit returns only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client; in this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput. With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost. Caution: with NOWAIT, a crash occurring after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, then the transaction commits with the WAIT behavior.

      IMMEDIATE | BATCH: Use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, then the transaction commits with the IMMEDIATE behavior.

    * FORCE clause. Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction.

        * In a distributed database system, the FORCE 'string' [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to specifically assign the transaction a system change number (SCN). If you omit integer, then the transaction is committed using the current SCN.
        * The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where string is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view V$CORRUPT_XID_LIST and to specify this clause.
        * Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause.

      Example, forcing an in-doubt transaction. The following statement manually commits a hypothetical in-doubt distributed transaction; you must have DBA privileges to issue it:

        COMMIT FORCE '22.57.53';

    Read the article

  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
    Back in December last year, I asked myself: could it be that .NET developers think you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance?

    So, a few weeks ago, I decided to get a 1-minute survey up and running in the hope that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions. Amazingly, 533 developers took the time to respond, which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you.

    First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith, Laila!")

    When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a 3rd-party performance profiler, the CLR profiler or the Visual Studio profiler).

    The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you have logging in place throughout all your code anyway, you still have a fair amount of potentially error-prone calculation to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. On the other hand, profilers do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method timings all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements.

    Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time-consuming. Three respondents in particular summarised this perfectly:

    * "sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem"
    * "they are simple to use, but the results are hard to understand"
    * "Complex to find the more advanced things, easy to find some low hanging fruit"

    These results confirmed my suspicions: profilers are seen as designed for more advanced users who can use them effectively and make sense of the results.

    I found yet more interesting information when I started comparing samples of "developers for whom performance is an important part of the dev cycle" with those "to whom performance is only looked at in times of crisis" and "developers to whom performance is not important, as long as the app works". See the three graphs below:

        [graph] Sample of developers to whom performance is an important part of the dev cycle
        [graph] Sample of developers to whom performance is important only in times of crisis
        [graph] Sample of developers to whom performance is not important, as long as the app works

    As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle have a higher tendency to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are.

    So all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third-party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.

    Read the article
