
  • Building Visual Studio Setup Projects with TFS 2010 Team Build

    - by Jakob Ehn
One of the most common complaints from people starting to use Team Build is that it doesn’t support building Microsoft’s own Setup and Deployment projects (*.vdproj). When creating a default build definition that compiles a solution containing a setup project, you’ll get the following warning:

The project file "MyProject.vdproj" is not supported by MSBuild and cannot be built.

This is what the problem is all about. MSBuild, which is used for compiling your projects, does not understand the proprietary vdproj format defined by Microsoft quite some time ago. Unfortunately there is no sign that this will change in the near future; in fact the setup projects have barely changed at all since they were introduced. VS 2010 brings no new features or improvements when it comes to the setup projects. VS 2010 does include a limited version of InstallShield which promises to be more MSBuild friendly and with more or less the same features as VS setup projects. I hope to get a closer look at this installer project type soon.

But how do we go about building a Visual Studio setup project and producing an MSI as part of a Team Build process? Well, since only one application known to man understands vdproj projects, we will have to install a copy of Visual Studio on the build server. Sad but true. After doing this, we use the Visual Studio command line interface (devenv) to perform the build (a sketch of the command line appears after the walkthrough below). In this post I will show how to do this by using the InvokeProcess activity directly in a build workflow template. You’ll want to build your setup projects after you have successfully compiled the projects.

1. Install Visual Studio 2010 on the build server(s).
2. Open your build process template (remember to branch or copy the xaml file before modifying it!).
3. Locate the Try to Compile the Project activity.
4. Drop an instance of the InvokeProcess activity from the toolbox onto the designer, after the Run MSBuild for Project activity.
5. Drop an instance of the WriteBuildMessage activity inside the Handle Standard Output section. Set the Importance property to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High (NB: this is necessary if you want the output from devenv to show up in the build log when running the build with the default verbosity). Set the Message property to stdOutput.
6. Drop an instance of the WriteBuildError activity into the Handle Error Output section. Set the Message property to errOutput.
7. Select the InvokeProcess activity and set the values of the parameters to:

The finished workflow should look like this:

This will generate the MSI files, but they won’t be copied to the drop location. This is because we are using devenv and not MSBuild, so we have to do this explicitly:

8. Drop a Sequence activity somewhere after the Copy to Drop location activity.
9. Create a variable in the Sequence activity of type IEnumerable<String> and call it GeneratedInstallers.
10. Drop a FindMatchingFiles activity in the Sequence activity and set the properties to:

11. Drop a ForEach<String> activity after the FindMatchingFiles activity. Set the Value property to GeneratedInstallers.
12. Drop an InvokeProcess activity inside the ForEach activity. FileName: “xcopy.exe”. Arguments: String.Format("""{0}"" ""{1}""", item, BuildDetail.DropLocation). The Sequence activity should look like this:

13. Save the build process template and check it in.
14. Run the build and verify that the MSIs are built and copied to the drop location.
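For reference, here is a minimal sketch of the kind of command line the InvokeProcess activity ends up running. The solution, configuration, and project paths are hypothetical placeholders, not values from the build definition above:

    devenv.com MySolution.sln /Build "Release|Any CPU" /Project Setup\MySetup.vdproj /ProjectConfig Release

Using devenv.com (the console front-end) rather than devenv.exe keeps the build output on stdout, which is what the Handle Standard Output section above captures.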
Note 1: One of the drawbacks of using devenv like this in a team build is that, since all the output from the default compilations is placed in the Binaries folder, the outputs are not available when devenv is invoked, which causes the whole solution to rebuild again. In TFS 2008, this was pretty simple to fix by using the CustomizableOutDir property. In TFS 2010, the same feature is not available. Jim Lamb blogged about this recently; have a look at it if you have a problem with this: http://blogs.msdn.com/jimlamb/archive/2010/04/13/customizableoutdir-in-tfs-2010.aspx

Note 2: Although the above solution works, a better approach is to wrap this in a custom activity that you can use in your builds. I will come back to this in a future post.


  • Improving Partitioned Table Join Performance

    - by Paul White
The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

Test Data

The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

CREATE PARTITION FUNCTION PFT (integer)
AS RANGE RIGHT
FOR VALUES
(
    125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000,
    1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000,
    2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000,
    3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000,
    4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000
);
GO
CREATE PARTITION SCHEME PST
AS PARTITION PFT
ALL TO ([PRIMARY]);

The two tables are:

CREATE TABLE dbo.T1
(
    TID integer NOT NULL IDENTITY(0,1),
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,
    CONSTRAINT PK_T1
        PRIMARY KEY CLUSTERED (TID)
        ON PST (TID)
);

CREATE TABLE dbo.T2
(
    TID integer NOT NULL,
    Column1 integer NOT NULL,
    Padding binary(100) NOT NULL DEFAULT 0x,
    CONSTRAINT PK_T2
        PRIMARY KEY CLUSTERED (TID, Column1)
        ON PST (TID)
);

The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

INSERT dbo.T1 WITH (TABLOCKX)
    (Column1)
SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1
FROM dbo.Numbers AS N
WHERE n BETWEEN 1 AND 5000000;

In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows:

CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

WITH
    L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
    L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
    L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
    L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
    L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
    L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
    Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
INSERT dbo.Numbers WITH (TABLOCKX)
SELECT TOP (10000000) n
FROM Nums
ORDER BY n
OPTION (MAXDOP 1);

Table T1 contains data like this:

Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

INSERT dbo.T2 WITH (TABLOCKX)
    (TID, Column1)
SELECT T.TID, N.n
FROM dbo.T1 AS T
JOIN dbo.Numbers AS N
    ON N.n >= 1
    AND N.n <= T.Column1;

Table T2 ends up containing about 15 million rows:

The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

Partition Distribution

The following query shows the number of rows in each partition of table T1:

SELECT
    PartitionID = CA1.P,
    NumRows = COUNT_BIG(*)
FROM dbo.T1 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2:

SELECT
    PartitionID = CA1.P,
    NumRows = COUNT_BIG(*)
FROM dbo.T2 AS T
CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
GROUP BY CA1.P
ORDER BY CA1.P;

There are roughly 375,000 rows in each partition (the rightmost partition is also empty). OK, that’s the test data done.
Test Query and Execution Plan

The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

The optimizer chooses a plan using parallel hash join, and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image). With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Run without an actual execution plan or STATISTICS IO information (for maximum performance), the query returns in around 2600ms.

Execution Plan Analysis

The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function, so rows from T1 and T2 with the same join key must end up on the same hash join thread.

Expensive Exchanges

This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
WHERE $PARTITION.PFT(T1.TID) = 1
AND $PARTITION.PFT(T2.TID) = 1
OPTION (MAXDOP 1);

The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question: if the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
Forcing a Merge Join

Let’s force the optimizer to use a merge join on the test query using a hint:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN);

This is the execution plan selected by the optimizer:

This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

Parallel Merge Join

We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN, QUERYTRACEON 8649);

The execution plan is:

This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical I/Os. The cardinality estimates and thread distribution are also still very good (click to enlarge).

A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro). CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs). Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

Collocated Joins

In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

Costing and Plan Selection

The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

-- Pretend IOs are 50x cost temporarily
DBCC SETIOWEIGHT(50);

-- Co-located hash join
SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (RECOMPILE);

-- Reset IO costing
DBCC SETIOWEIGHT(1);

Collocated Join Plan

The estimated execution plan for the collocated join is:

The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange… and so on until all partitions have been processed.

It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition. The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

CPU and Memory Efficiency Improvements

The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

Collocated Hash Join Performance

The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb. A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

Collocated Merge Join

We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know:

- Merge join does not require a memory grant; and
- Merge join was the optimizer’s preferred join option for a single partition join

Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

CROSS APPLY sys.partitions

We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

SELECT
    row_count = SUM(Subtotals.cnt)
FROM
(
    -- Partition numbers
    SELECT p.partition_number
    FROM sys.partitions AS p
    WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
    AND p.index_id = 1
) AS P
CROSS APPLY
(
    -- Count per collocated join
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

The estimated plan is:

The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory.

Using a Temporary Table

Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
We can work around that by writing the partition numbers to a temporary table (or table variable):

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

CREATE TABLE #P
(
    partition_number integer PRIMARY KEY
);

INSERT #P (partition_number)
SELECT p.partition_number
FROM sys.partitions AS p
WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
AND p.index_id = 1;

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

DROP TABLE #P;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to. In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time). Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

Parallel Collocated Merge Join

We can produce the desired parallel plan using trace flag 8649 again:

SELECT row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

Performance

The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

- Collocated parallel merge join: 1350ms
- Parallel hash join: 2600ms
- Collocated serial merge join: 3500ms
- Serial merge join: 5000ms
- Parallel merge join: 8400ms
- Collocated parallel hash join: 25,300ms (hash spill per partition)

The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

Parallel Collocated Merge Join with Demand Partitioning

This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions – a sketch of that idea appears at the end of this post):

SELECT row_count = SUM(Subtotals.cnt)
FROM
(
    VALUES
        (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
        (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
        (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
        (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
) AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = p.partition_number
    AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

The parallel collocated hash join plan is reproduced below for comparison:

The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

Final Words

It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

From a thread’s point of view…

If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

Related Reading

- Understanding and Using Parallelism in SQL Server
- Parallel Execution Plans Suck

© 2013 Paul White – All Rights Reserved. Twitter: @SQL_Kiwi
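As promised above, a minimal sketch of generating the hard-coded VALUES list with dynamic SQL. This is not from the original post; treat the FOR XML PATH concatenation and the sys.sp_executesql wrapper as one possible approach among several (and note that QUERYTRACEON requires elevated permissions):

-- Build "(1),(2),...,(41)" from sys.partitions for dbo.T1
DECLARE @vals nvarchar(max) =
    STUFF(
    (
        SELECT N',(' + CONVERT(nvarchar(11), p.partition_number) + N')'
        FROM sys.partitions AS p
        WHERE p.[object_id] = OBJECT_ID(N'dbo.T1', N'U')
        AND p.index_id = 1
        ORDER BY p.partition_number
        FOR XML PATH('')
    ), 1, 1, N'');

-- Embed the generated list in the collocated merge join query
DECLARE @sql nvarchar(max) = N'
SELECT row_count = SUM(Subtotals.cnt)
FROM (VALUES ' + @vals + N') AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2 ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = P.partition_number
    AND $PARTITION.PFT(T2.TID) = P.partition_number
) AS Subtotals
OPTION (QUERYTRACEON 8649);';

EXECUTE sys.sp_executesql @sql;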


  • Toorcon 15 (2013)

    - by danx
The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

Contents:

- Introduction to Toorcon 15 (2013)
- A Tale of One Software Bypass of MS Windows 8 Secure Boot
- Breaching SSL, One Byte at a Time
- Running at 99%: Surviving an Application DoS
- Security Response in the Age of Mass Customized Attacks
- x86 Rewriting: Defeating RoP and other Shinanighans
- Clowntown Express: interesting bugs and running a bug bounty program
- Active Fingerprinting of Encrypted VPNs
- Making Attacks Go Backwards
- Mask Your Checksums—The Gorry Details
- Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Introduction to Toorcon 15 (2013)

Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speakers' remarks and that they are not my remarks or opinions, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

A Tale of One Software Bypass of MS Windows 8 Secure Boot

Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak

Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

MS Windows 8 Secure Boot Overview

UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. Replacing the bootloader is trivial. Today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s), and the OS Loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx): X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys; they can't be accessed once the OS starts, which uses Run-Time (RT) services. The root cert uses RSA-2048 public keys and PKCS#7 format signatures. The relevant variables are:

- SecureBoot — enable/disable image signature checks
- SetupMode — update keys, self-signed keys, and secure boot variables
- CustomMode — allows updating keys

Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.

Attacking MS Windows 8 Secure Boot

Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
Correct implementation means vendors must:

- allow only UEFI firmware signed updates
- protect UEFI firmware from direct modification in flash memory
- protect FW update components
- program the SPI controller securely
- protect secure boot policy settings in NVRAM
- protect the runtime API
- disable the compatibility support module, which allows unsigned legacy code

Attackers can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the firmware enters setup mode with secure boot turned off. TPM can also be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future.

Recommendations:

- allow only signed updates
- protect UEFI firmware in ROM
- protect the EFI variable store in ROM

Breaching SSL, One Byte at a Time

Angelo Prado and Yoel Gluck, Salesforce.com

CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length, looking for the plaintext which compresses the most, and so finds the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy; Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE.

BREACH, breachattrack.com

BREACH exploits the SSL response body (Accept-Encoding, Content-Encoding). It takes advantage of the fact that, although TLS-level compression was disabled in response to CRIME, the HTTP response body is still compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. It can use compression to guess contents one byte at a time (a toy sketch of this length-leak principle appears at the end of this post). For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses by 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:

- lookahead (guess 2 or 3 characters at a time instead of 1 character) — expensive
- rollback to last known conflict
- check compression ratio
- brute-force the first 3 "bootstrap" characters, if needed (expensive)

Block ciphers hide exact plaintext length; the solution is to align the response in advance to the block size.

Mitigations:

- length: use variable padding
- secrets: dynamic CSRF tokens per request
- secret: change over time
- separate secret to input-less servlets

Future work: better understanding of DEFLATE/GZIP, and HTTPS extensions.

Running at 99%: Surviving an Application DoS

Ryan Huber, Risk I/O

Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or download large files.
Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx.

How to identify malicious hosts:

- short, sudden web requests
- user-agent is obvious (curl, python)
- same URL requested repeatedly
- no web page referer (not normal)
- hidden links: hide a link and see if a bot gets it
- restricted access if not your geo IP (unless the website is global)
- missing common headers in the request
- regular timing
- first seen IP at the beginning of the attack
- count of requests per host (usually a very large number)

Use of captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer

Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (not nice about it, no proper close connection). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers for clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections.

Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS—it is not a complete cure.

Security Response in the Age of Mass Customized Attacks

Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization: the kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OS, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass-customized exploits, to avoid detection.

There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware they infer edicts to:

- support all versions of Reader
- support all versions of Windows
- support all versions of Flash
- support all browsers
- write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)

Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans

Richard Wartell

The attack vector we are addressing here is: first, some malware causes a buffer overflow. The malware has no access to the program, only to its input, and overflows code onto the stack. Later, the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-oriented Programming: RoP attacks use your own code, writing return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes the address space layout, but not the "gadgets" within it. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what to disassemble. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of the "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, making brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved the address). Overhead is usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is it requires no source access and the modified binary fully works! Current work is rewriting code to enforce security policies—for example, don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program

Collin Greene, Facebook

Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The bounty program started in July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else)—the top paid submitters are paid six figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also allow seeing bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs

Anna Shubina, Dartmouth Institute for Security, Technology, and Society

(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed of how TCP works in real life with respect to timing.

Making Attacks Go Backwards

FuzzyNop, Mandiant

This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there's a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:

- "hiding" malware in the temp, recycle, or root directories
- creation of unnamed scheduled tasks
- obvious names of files and syscalls ("ClearEventLog")
- uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system

Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or time stamp printf strings; look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse a jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware; malware also typically needs command and control.

Application of Artificial Intelligence in Ad-Hoc Static Code Analysis

John Ashaman, Security Innovation

Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal of finding a path through the AST (a maze) from source to sink.
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts in the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details

Eric (XlogicX) Davisson

Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. TCP and IP checksums both refer to the same data, so one can get more bits of information out of using both checksums than just one checksum. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16x for each masked nibble that is unknown), one can do other things to narrow the results, such as looking at packet contents for domain or geo information. With hundreds of results, one can import them in CSV format into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached, and was able to find the exact IP address, given the geo and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, to not create a false impression of security.

Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs" or bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author—can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not?

- Input is not well-defined/recognized (the code's assumptions about "checked" input will be violated—a bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive.
- Input is well-formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing.
- Input is seen differently by different pieces of the program or toolchain.

Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age.

ELF ABI (the UNIX/Linux executable file format) case study. Problems can arise from these steps (without planting bugs):

- compiler
- linker
- loader
- ld.so/rtld
- relocator
- DWARF (debugger info)
- exceptions

The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable).
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or DWARF exception handling data: .eh_frame + glibc == Turing machine. The x86 MMU (IDT, GDT, TSS) can also be used: address translation creates a Turing machine, where the page handler reads and writes memory (on page faults) and the page table serves as the byte code. There is an example on GitHub using this TM that will fly a glider across the screen.

Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs (certificate signing requests) are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—there are several parsers in the OS toolchain, which are all different. One can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions:

- trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it
- complex data formats mean bugs
- no "chain of trust" in Babylon! (that is, with parser differentials)
- we need to squeeze complexity out of data until data stops being "code equivalent"

Further information: see langsec.org, and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
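To make the compression-length side channel from the BREACH summary above concrete, here is a toy sketch in Python. Everything in it is hypothetical illustration: the secret, the page layout, and the alphabet are made up, and a real attack measures ciphertext lengths on the wire and needs the lookahead/rollback tricks described in that summary to handle ties.

    import zlib

    SECRET = "Supersecret"  # hypothetical token reflected in the page

    def compressed_length(guess):
        # Stand-in for one compressed HTTPS response: attacker-controlled
        # input shares a DEFLATE stream with the secret.
        body = "input=" + guess + "; token=" + SECRET
        return len(zlib.compress(body.encode()))

    recovered = ""
    alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
    for _ in range(len(SECRET)):
        # The guess extending the longest match against the secret
        # compresses best, so its "response" is usually the shortest.
        best = min(alphabet, key=lambda c: compressed_length("token=" + recovered + c))
        recovered += best

    print(recovered)  # often prints "Supersecret"; ties can derail a toy like this

The known "token=" prefix plays the role of the bootstrap characters the talk mentions; without some known context, the first guesses have nothing to match against.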


  • Errors when installing Open Office

    - by user109036
I followed the first set of instructions on this page to install Open Office: How to install Open Office? However, at the last step, which says to chmod a folder, I got an error saying that the directory does not exist. Open Office now appears in my Ubuntu start menu, but clicking on it does nothing. I tried a reboot. Below is what I could copy from my terminal. I am running the latest Ubuntu. I have not uninstalled LibreOffice as suggested somewhere. The reason is that in the Ubuntu Software Centre, LibreOffice appears to be made up of several components and I don't know which ones to remove (or maybe all of them?). They are LibreOffice Draw, Math, Writer, and Calc.

After this operation, 480 MB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-lib all 6b24-1.11.5-0ubuntu1~12.10.1 [6,135 kB]
Get:2 http://ppa.launchpad.net/upubuntu-com/office/ubuntu/ quantal/main openoffice amd64 3.4~oneiric [321 MB]
Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ca-certificates-java all 20120721 [13.2 kB]
Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main tzdata-java all 2012e-0ubuntu2 [140 kB]
Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main java-common all 0.43ubuntu3 [61.7 kB]
Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-headless amd64 6b24-1.11.5-0ubuntu1~12.10.1 [25.4 MB]
Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgif4 amd64 4.1.6-9.1ubuntu1 [31.3 kB]
Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre amd64 6b24-1.11.5-0ubuntu1~12.10.1 [234 kB]
Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java all 0.30.4-0ubuntu4 [29.8 kB]
Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java-jni amd64 0.30.4-0ubuntu4 [31.1 kB]
Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xorg-sgml-doctools all 1:1.10-1 [12.0 kB]
Get:12 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-core-dev all 7.0.23-1 [744 kB]
Get:13 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libice-dev amd64 2:1.0.8-2 [57.6 kB]
Get:14 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0 amd64 0.3-3 [3,258 B]
Get:15 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0-dev amd64 0.3-3 [2,866 B]
Get:16 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libsm-dev amd64 2:1.2.1-2 [19.9 kB]
Get:17 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau-dev amd64 1:1.0.7-1 [10.2 kB]
Get:18 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp-dev amd64 1:1.1.1-1 [26.9 kB]
Get:19 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-input-dev all 2.2-1 [133 kB]
Get:20 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-kb-dev all 1.0.6-2 [269 kB]
Get:21 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xtrans-dev all 1.2.7-1 [84.3 kB]
Get:22 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1-dev amd64 1.8.1-1ubuntu1 [82.6 kB]
Get:23 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-dev amd64 2:1.5.0-1 [912 kB]
Get:24 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-doc all 2:1.5.0-1 [2,460 kB]
Get:25 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxt-dev amd64 1:1.1.3-1 [492 kB]
Get:26 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ttf-dejavu-extra all 2.33-2ubuntu1 [3,420 kB]
Get:27 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-cacao amd64 6b24-1.11.5-0ubuntu1~12.10.1 [417 kB]
Get:28 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-jamvm amd64 6b24-1.11.5-0ubuntu1~12.10.1 [581 kB]
Get:29 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx-common all 1.3-1ubuntu1.1 [617 kB]
Get:30 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx amd64 1.3-1ubuntu1.1 [16.2 kB]
Get:31 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jdk amd64 6b24-1.11.5-0ubuntu1~12.10.1 [11.1 MB]
Fetched 374 MB in 9min 18s (671 kB/s)
Extract templates from packages: 100%
Selecting previously unselected package openjdk-6-jre-lib.
(Reading database ... 143191 files and directories currently installed.)
Unpacking openjdk-6-jre-lib (from .../openjdk-6-jre-lib_6b24-1.11.5-0ubuntu1~12.10.1_all.deb) ...
Selecting previously unselected package ca-certificates-java.
Unpacking ca-certificates-java (from .../ca-certificates-java_20120721_all.deb) ...
Selecting previously unselected package tzdata-java.
Unpacking tzdata-java (from .../tzdata-java_2012e-0ubuntu2_all.deb) ...
Selecting previously unselected package java-common.
Unpacking java-common (from .../java-common_0.43ubuntu3_all.deb) ...
Selecting previously unselected package openjdk-6-jre-headless:amd64.
Unpacking openjdk-6-jre-headless:amd64 (from .../openjdk-6-jre-headless_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ...
Selecting previously unselected package libgif4:amd64.
Unpacking libgif4:amd64 (from .../libgif4_4.1.6-9.1ubuntu1_amd64.deb) ...
Selecting previously unselected package openjdk-6-jre:amd64.
Unpacking openjdk-6-jre:amd64 (from .../openjdk-6-jre_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ...
Selecting previously unselected package libatk-wrapper-java.
Unpacking libatk-wrapper-java (from .../libatk-wrapper-java_0.30.4-0ubuntu4_all.deb) ...
Selecting previously unselected package libatk-wrapper-java-jni:amd64.
Unpacking libatk-wrapper-java-jni:amd64 (from .../libatk-wrapper-java-jni_0.30.4-0ubuntu4_amd64.deb) ...
Selecting previously unselected package xorg-sgml-doctools.
Unpacking xorg-sgml-doctools (from .../xorg-sgml-doctools_1%3a1.10-1_all.deb) ...
Selecting previously unselected package x11proto-core-dev.
Unpacking x11proto-core-dev (from .../x11proto-core-dev_7.0.23-1_all.deb) ...
Selecting previously unselected package libice-dev:amd64.
Unpacking libice-dev:amd64 (from .../libice-dev_2%3a1.0.8-2_amd64.deb) ...
Selecting previously unselected package libpthread-stubs0:amd64.
Unpacking libpthread-stubs0:amd64 (from .../libpthread-stubs0_0.3-3_amd64.deb) ...
Selecting previously unselected package libpthread-stubs0-dev:amd64.
Unpacking libpthread-stubs0-dev:amd64 (from .../libpthread-stubs0-dev_0.3-3_amd64.deb) ...
Selecting previously unselected package libsm-dev:amd64.
Unpacking libsm-dev:amd64 (from .../libsm-dev_2%3a1.2.1-2_amd64.deb) ...
Selecting previously unselected package libxau-dev:amd64.
Unpacking libxau-dev:amd64 (from .../libxau-dev_1%3a1.0.7-1_amd64.deb) ...
Selecting previously unselected package libxdmcp-dev:amd64.
Unpacking libxdmcp-dev:amd64 (from .../libxdmcp-dev_1%3a1.1.1-1_amd64.deb) ...
Selecting previously unselected package x11proto-input-dev.
Unpacking x11proto-input-dev (from .../x11proto-input-dev_2.2-1_all.deb) ...
Selecting previously unselected package x11proto-kb-dev.
Unpacking x11proto-kb-dev (from .../x11proto-kb-dev_1.0.6-2_all.deb) ...
Selecting previously unselected package xtrans-dev.
Unpacking xtrans-dev (from .../xtrans-dev_1.2.7-1_all.deb) ...
Selecting previously unselected package libxcb1-dev:amd64.
Unpacking libxcb1-dev:amd64 (from .../libxcb1-dev_1.8.1-1ubuntu1_amd64.deb) ... Selecting previously unselected package libx11-dev:amd64. Unpacking libx11-dev:amd64 (from .../libx11-dev_2%3a1.5.0-1_amd64.deb) ... Selecting previously unselected package libx11-doc. Unpacking libx11-doc (from .../libx11-doc_2%3a1.5.0-1_all.deb) ... Selecting previously unselected package libxt-dev:amd64. Unpacking libxt-dev:amd64 (from .../libxt-dev_1%3a1.1.3-1_amd64.deb) ... Selecting previously unselected package ttf-dejavu-extra. Unpacking ttf-dejavu-extra (from .../ttf-dejavu-extra_2.33-2ubuntu1_all.deb) ... Selecting previously unselected package icedtea-6-jre-cacao:amd64. Unpacking icedtea-6-jre-cacao:amd64 (from .../icedtea-6-jre-cacao_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-6-jre-jamvm:amd64. Unpacking icedtea-6-jre-jamvm:amd64 (from .../icedtea-6-jre-jamvm_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-netx-common. Unpacking icedtea-netx-common (from .../icedtea-netx-common_1.3-1ubuntu1.1_all.deb) ... Selecting previously unselected package icedtea-netx:amd64. Unpacking icedtea-netx:amd64 (from .../icedtea-netx_1.3-1ubuntu1.1_amd64.deb) ... Selecting previously unselected package openjdk-6-jdk:amd64. Unpacking openjdk-6-jdk:amd64 (from .../openjdk-6-jdk_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package openoffice. Unpacking openoffice (from .../openoffice_3.4~oneiric_amd64.deb) ... Processing triggers for doc-base ... Processing 2 added doc-base files... Processing triggers for man-db ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... Processing triggers for fontconfig ... Processing triggers for gnome-icon-theme ... Processing triggers for shared-mime-info ... Setting up tzdata-java (2012e-0ubuntu2) ... Setting up java-common (0.43ubuntu3) ... Setting up libgif4:amd64 (4.1.6-9.1ubuntu1) ... Setting up xorg-sgml-doctools (1:1.10-1) ... Setting up x11proto-core-dev (7.0.23-1) ... Setting up libice-dev:amd64 (2:1.0.8-2) ... Setting up libpthread-stubs0:amd64 (0.3-3) ... Setting up libpthread-stubs0-dev:amd64 (0.3-3) ... Setting up libsm-dev:amd64 (2:1.2.1-2) ... Setting up libxau-dev:amd64 (1:1.0.7-1) ... Setting up libxdmcp-dev:amd64 (1:1.1.1-1) ... Setting up x11proto-input-dev (2.2-1) ... Setting up x11proto-kb-dev (1.0.6-2) ... Setting up xtrans-dev (1.2.7-1) ... Setting up libxcb1-dev:amd64 (1.8.1-1ubuntu1) ... Setting up libx11-dev:amd64 (2:1.5.0-1) ... Setting up libx11-doc (2:1.5.0-1) ... Setting up libxt-dev:amd64 (1:1.1.3-1) ... Setting up ttf-dejavu-extra (2.33-2ubuntu1) ... Setting up icedtea-netx-common (1.3-1ubuntu1.1) ... Setting up openjdk-6-jre-lib (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up openjdk-6-jre-headless:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java to provide /usr/bin/java (java) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/pack200 to provide /usr/bin/pack200 (pack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmid to provide /usr/bin/rmid (rmid) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/unpack200 to provide /usr/bin/unpack200 (unpack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/orbd to provide /usr/bin/orbd (orbd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/servertool to provide /usr/bin/servertool (servertool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/tnameserv to provide /usr/bin/tnameserv (tnameserv) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode Setting up ca-certificates-java (20120721) ... Adding debian:Deutsche_Telekom_Root_CA_2.pem Adding debian:Comodo_Trusted_Services_root.pem Adding debian:Certum_Trusted_Network_CA.pem Adding debian:thawte_Primary_Root_CA_-_G2.pem Adding debian:UTN_USERFirst_Hardware_Root_CA.pem Adding debian:AddTrust_Low-Value_Services_Root.pem Adding debian:Microsec_e-Szigno_Root_CA.pem Adding debian:SwissSign_Silver_CA_-_G2.pem Adding debian:ComSign_Secured_CA.pem Adding debian:Buypass_Class_2_CA_1.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certum_Root_CA.pem Adding debian:AddTrust_External_Root.pem Adding debian:Chambers_of_Commerce_Root_-_2008.pem Adding debian:Starfield_Root_Certificate_Authority_-_G2.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Visa_eCommerce_Root.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_3.pem Adding debian:AC_Raíz_Certicámara_S.A..pem Adding debian:NetLock_Arany_=Class_Gold=_Fotanúsítvány.pem Adding debian:Taiwan_GRCA.pem Adding debian:Camerfirma_Chambers_of_Commerce_Root.pem Adding debian:Juur-SK.pem Adding debian:Entrust.net_Premium_2048_Secure_Server_CA.pem Adding debian:XRamp_Global_CA_Root.pem Adding debian:Security_Communication_RootCA2.pem Adding debian:AddTrust_Qualified_Certificates_Root.pem Adding debian:NetLock_Qualified_=Class_QA=_Root.pem Adding debian:TC_TrustCenter_Class_2_CA_II.pem Adding debian:DST_ACES_CA_X6.pem Adding debian:thawte_Primary_Root_CA.pem Adding debian:thawte_Primary_Root_CA_-_G3.pem Adding debian:GeoTrust_Universal_CA_2.pem Adding debian:ACEDICOM_Root.pem Adding debian:Security_Communication_EV_RootCA1.pem Adding debian:America_Online_Root_Certification_Authority_2.pem Adding debian:TC_TrustCenter_Universal_CA_I.pem Adding debian:SwissSign_Platinum_CA_-_G2.pem Adding debian:Global_Chambersign_Root_-_2008.pem Adding debian:SecureSign_RootCA11.pem Adding debian:GeoTrust_Global_CA_2.pem Adding debian:Buypass_Class_3_CA_1.pem Adding debian:Baltimore_CyberTrust_Root.pem Adding debian:UbuntuOne-Go_Daddy_Class_2_CA.pem Adding debian:Equifax_Secure_eBusiness_CA_1.pem Adding debian:SwissSign_Gold_CA_-_G2.pem Adding debian:AffirmTrust_Premium_ECC.pem Adding 
debian:TC_TrustCenter_Universal_CA_III.pem Adding debian:ca.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G2.pem Adding debian:NetLock_Express_=Class_C=_Root.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.pem Adding debian:Firmaprofesional_Root_CA.pem Adding debian:Comodo_Secure_Services_root.pem Adding debian:cacert.org.pem Adding debian:GeoTrust_Primary_Certification_Authority.pem Adding debian:RSA_Security_2048_v3.pem Adding debian:Staat_der_Nederlanden_Root_CA.pem Adding debian:Cybertrust_Global_Root.pem Adding debian:DigiCert_High_Assurance_EV_Root_CA.pem Adding debian:TDC_OCES_Root_CA.pem Adding debian:A-Trust-nQual-03.pem Adding debian:Equifax_Secure_CA.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_1.pem Adding debian:GeoTrust_Global_CA.pem Adding debian:Starfield_Class_2_CA.pem Adding debian:ApplicationCA_-_Japanese_Government.pem Adding debian:Swisscom_Root_CA_1.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Camerfirma_Global_Chambersign_Root.pem Adding debian:QuoVadis_Root_CA_3.pem Adding debian:QuoVadis_Root_CA.pem Adding debian:Comodo_AAA_Services_root.pem Adding debian:ComSign_CA.pem Adding debian:AddTrust_Public_Services_Root.pem Adding debian:DigiCert_Assured_ID_Root_CA.pem Adding debian:UTN_DATACorp_SGC_Root_CA.pem Adding debian:CA_Disig.pem Adding debian:E-Guven_Kok_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:GlobalSign_Root_CA_-_R3.pem Adding debian:QuoVadis_Root_CA_2.pem Adding debian:Entrust_Root_Certification_Authority.pem Adding debian:GTE_CyberTrust_Global_Root.pem Adding debian:ValiCert_Class_1_VA.pem Adding debian:Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G2.pem Adding debian:spi-ca-2003.pem Adding debian:America_Online_Root_Certification_Authority_1.pem Adding debian:AffirmTrust_Premium.pem Adding debian:Sonera_Class_1_Root_CA.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certplus_Class_2_Primary_CA.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_2.pem Adding debian:Network_Solutions_Certificate_Authority.pem Adding debian:Go_Daddy_Class_2_CA.pem Adding debian:StartCom_Certification_Authority.pem Adding debian:Hongkong_Post_Root_CA_1.pem Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2011.pem Adding debian:Thawte_Premium_Server_CA.pem Adding debian:EBG_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_1.pem Adding debian:NetLock_Business_=Class_B=_Root.pem Adding debian:Microsec_e-Szigno_Root_CA_2009.pem Adding debian:DigiCert_Global_Root_CA.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G4.pem Adding debian:IGC_A.pem Adding debian:TWCA_Root_Certification_Authority.pem Adding debian:S-TRUST_Authentication_and_Encryption_Root_CA_2005_PN.pem Adding debian:VeriSign_Universal_Root_Certification_Authority.pem Adding debian:DST_Root_CA_X3.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority.pem Adding debian:Root_CA_Generalitat_Valenciana.pem Adding debian:UTN_USERFirst_Email_Root_CA.pem Adding debian:ssl-cert-snakeoil.pem Adding debian:Starfield_Services_Root_Certificate_Authority_-_G2.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G3.pem Adding debian:Certinomis_-_Autorité_Racine.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority.pem 
Adding debian:TDC_Internet_Root_CA.pem Adding debian:UbuntuOne-ValiCert_Class_2_VA.pem Adding debian:AffirmTrust_Commercial.pem Adding debian:spi-cacert-2008.pem Adding debian:Izenpe.com.pem Adding debian:EC-ACC.pem Adding debian:Go_Daddy_Root_Certificate_Authority_-_G2.pem Adding debian:COMODO_ECC_Certification_Authority.pem Adding debian:CNNIC_ROOT.pem Adding debian:NetLock_Notary_=Class_A=_Root.pem Adding debian:Equifax_Secure_eBusiness_CA_2.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Secure_Global_CA.pem Adding debian:UbuntuOne-Go_Daddy_CA.pem Adding debian:GeoTrust_Universal_CA.pem Adding debian:Wells_Fargo_Root_CA.pem Adding debian:Thawte_Server_CA.pem Adding debian:WellsSecure_Public_Root_Certificate_Authority.pem Adding debian:TC_TrustCenter_Class_3_CA_II.pem Adding debian:COMODO_Certification_Authority.pem Adding debian:Equifax_Secure_Global_eBusiness_CA.pem Adding debian:Security_Communication_Root_CA.pem Adding debian:GlobalSign_Root_CA_-_R2.pem Adding debian:TÜBITAK_UEKAE_Kök_Sertifika_Hizmet_Saglayicisi_-_Sürüm_3.pem Adding debian:Verisign_Class_4_Public_Primary_Certification_Authority_-_G3.pem Adding debian:certSIGN_ROOT_CA.pem Adding debian:RSA_Root_Certificate_1.pem Adding debian:ePKI_Root_Certification_Authority.pem Adding debian:Entrust.net_Secure_Server_CA.pem Adding debian:OISTE_WISeKey_Global_Root_GA_CA.pem Adding debian:Sonera_Class_2_Root_CA.pem Adding debian:Certigna.pem Adding debian:AffirmTrust_Networking.pem Adding debian:ValiCert_Class_2_VA.pem Adding debian:GlobalSign_Root_CA.pem Adding debian:Staat_der_Nederlanden_Root_CA_-_G2.pem Adding debian:SecureTrust_CA.pem done. Setting up openjdk-6-jre:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/policytool to provide /usr/bin/policytool (policytool) in auto mode Setting up libatk-wrapper-java (0.30.4-0ubuntu4) ... Setting up icedtea-6-jre-cacao:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-6-jre-jamvm:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-netx:amd64 (1.3-1ubuntu1.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode Setting up openjdk-6-jdk:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/appletviewer to provide /usr/bin/appletviewer (appletviewer) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/extcheck to provide /usr/bin/extcheck (extcheck) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/idlj to provide /usr/bin/idlj (idlj) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jar to provide /usr/bin/jar (jar) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jarsigner to provide /usr/bin/jarsigner (jarsigner) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javadoc to provide /usr/bin/javadoc (javadoc) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javah to provide /usr/bin/javah (javah) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javap to provide /usr/bin/javap (javap) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jconsole to provide /usr/bin/jconsole (jconsole) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jdb to provide /usr/bin/jdb (jdb) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jhat to provide /usr/bin/jhat (jhat) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jinfo to provide /usr/bin/jinfo (jinfo) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jmap to provide /usr/bin/jmap (jmap) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jps to provide /usr/bin/jps (jps) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jrunscript to provide /usr/bin/jrunscript (jrunscript) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jsadebugd to provide /usr/bin/jsadebugd (jsadebugd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstack to provide /usr/bin/jstack (jstack) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstat to provide /usr/bin/jstat (jstat) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstatd to provide /usr/bin/jstatd (jstatd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/native2ascii to provide /usr/bin/native2ascii (native2ascii) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/rmic to provide /usr/bin/rmic (rmic) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/schemagen to provide /usr/bin/schemagen (schemagen) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/serialver to provide /usr/bin/serialver (serialver) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsgen to provide /usr/bin/wsgen (wsgen) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsimport to provide /usr/bin/wsimport (wsimport) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/xjc to provide /usr/bin/xjc (xjc) in auto mode Setting up openoffice (3.4~oneiric) ... Setting up libatk-wrapper-java-jni:amd64 (0.30.4-0ubuntu4) ... Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place philip@X301-2:~$ sudo apt-get install libxrandr2:i386 libxinerama1:i386 Reading package lists... Done Building dependency tree Reading state information... Done The following package was automatically installed and is no longer required: linux-headers-3.5.0-17 Use 'apt-get autoremove' to remove it. The following extra packages will be installed: gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386 libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxrender1:i386 Suggested packages: glibc-doc:i386 locales:i386 The following NEW packages will be installed gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386 libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxinerama1:i386 libxrandr2:i386 libxrender1:i386 0 upgraded, 11 newly installed, 0 to remove and 93 not upgraded. Need to get 4,936 kB of archives. After this operation, 11.9 MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal/main gcc-4.7-base i386 4.7.2-2ubuntu1 [15.5 kB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libc6 i386 2.15-0ubuntu20 [3,940 kB] Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgcc1 i386 1:4.7.2-2ubuntu1 [53.5 kB] Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau6 i386 1:1.0.7-1 [8,582 B] Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp6 i386 1:1.1.1-1 [13.1 kB] Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1 i386 1.8.1-1ubuntu1 [48.7 kB] Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-6 i386 2:1.5.0-1 [776 kB] Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxext6 i386 2:1.3.1-2 [33.9 kB] Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxinerama1 i386 2:1.1.2-1 [8,118 B] Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrender1 i386 1:0.9.7-1 [20.1 kB] Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrandr2 i386 2:1.4.0-1 [18.8 kB] Fetched 4,936 kB in 30s (161 kB/s) Preconfiguring packages ... Selecting previously unselected package gcc-4.7-base:i386. (Reading database ... 146005 files and directories currently installed.) Unpacking gcc-4.7-base:i386 (from .../gcc-4.7-base_4.7.2-2ubuntu1_i386.deb) ... Selecting previously unselected package libc6:i386. Unpacking libc6:i386 (from .../libc6_2.15-0ubuntu20_i386.deb) ... Selecting previously unselected package libgcc1:i386. Unpacking libgcc1:i386 (from .../libgcc1_1%3a4.7.2-2ubuntu1_i386.deb) ... Selecting previously unselected package libxau6:i386. Unpacking libxau6:i386 (from .../libxau6_1%3a1.0.7-1_i386.deb) ... Selecting previously unselected package libxdmcp6:i386. Unpacking libxdmcp6:i386 (from .../libxdmcp6_1%3a1.1.1-1_i386.deb) ... Selecting previously unselected package libxcb1:i386. Unpacking libxcb1:i386 (from .../libxcb1_1.8.1-1ubuntu1_i386.deb) ... Selecting previously unselected package libx11-6:i386. Unpacking libx11-6:i386 (from .../libx11-6_2%3a1.5.0-1_i386.deb) ... Selecting previously unselected package libxext6:i386. Unpacking libxext6:i386 (from .../libxext6_2%3a1.3.1-2_i386.deb) ... Selecting previously unselected package libxinerama1:i386. Unpacking libxinerama1:i386 (from .../libxinerama1_2%3a1.1.2-1_i386.deb) ... Selecting previously unselected package libxrender1:i386. Unpacking libxrender1:i386 (from .../libxrender1_1%3a0.9.7-1_i386.deb) ... Selecting previously unselected package libxrandr2:i386. Unpacking libxrandr2:i386 (from .../libxrandr2_2%3a1.4.0-1_i386.deb) ... 
Setting up gcc-4.7-base:i386 (4.7.2-2ubuntu1) ... Setting up libc6:i386 (2.15-0ubuntu20) ... Setting up libgcc1:i386 (1:4.7.2-2ubuntu1) ... Setting up libxau6:i386 (1:1.0.7-1) ... Setting up libxdmcp6:i386 (1:1.1.1-1) ... Setting up libxcb1:i386 (1.8.1-1ubuntu1) ... Setting up libx11-6:i386 (2:1.5.0-1) ... Setting up libxext6:i386 (2:1.3.1-2) ... Setting up libxinerama1:i386 (2:1.1.2-1) ... Setting up libxrender1:i386 (1:0.9.7-1) ... Setting up libxrandr2:i386 (2:1.4.0-1) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place
$ sudo chmod a+rx /opt/openoffice.org3/share/uno_packages/cache/uno_packages
chmod: cannot access `/opt/openoffice.org3/share/uno_packages/cache/uno_packages': No such file or directory
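One way to proceed from here (a hedged sketch, not part of the original question) is to find out where the PPA package actually installed OpenOffice, and only then apply the chmod from the guide to the path that really exists. The package name 'openoffice' comes from the apt log above; the final path is a placeholder:

$ dpkg -L openoffice | grep uno_packages          # list the files the 'openoffice' package installed
$ sudo find /opt -maxdepth 2 -iname "*openoffice*"  # or search /opt for the actual install tree
# Then re-run the chmod against the directory that actually exists, e.g.:
# sudo chmod a+rx /opt/<actual-openoffice-dir>/share/uno_packages/cache/uno_packages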

    Read the article

  • BizTalk Cross Reference Data Management Strategy

    - by charlie.mott
Article Source: http://geekswithblogs.net/charliemott This article describes an approach to the management of cross reference data for BizTalk. Some articles about the BizTalk cross-referencing features can be found here:
http://home.comcast.net/~sdwoodgate/xrefseed.zip
http://geekswithblogs.net/michaelstephenson/archive/2006/12/24/101995.aspx
http://geekswithblogs.net/charliemott/archive/2009/04/20/value-vs.id-cross-referencing-in-biztalk.aspx
Options
Current options for managing this data include:
- Maintaining xml files in the format that can be used by the out-of-the-box BTSXRefImport.exe utility.
- Using user interfaces that have been developed to manage this data: the BizTalk Cross Referencing Tool and the XRef XML Creation Tool.
However, there are the following issues with the above options:
- The BizTalk Cross Referencing Tool requires a separate database to manage.
- The XRef XML Creation Tool has no means of persisting the data settings.
- The BizTalk Cross Referencing Tool generates integers in the common id field. I prefer to use a string (e.g. acme.country.uk), which is more readable (see naming conventions below).
- Both UI tools continue to use BTSXRefImport.exe. This utility replaces all xref data, which can be a problem in continuous integration environments that support multiple clients or BizTalk target instances: if you upload the data for one client, it destroys the data for another client. Yet in TFS, where builds run concurrently, this would break unit tests.
Alternative Approach
In response to these issues, I instead use simple SQL scripts to directly populate the BizTalkMgmtDb xref tables, combined with a data namespacing strategy to isolate client data.
Naming Conventions
All data keys use namespace prefixing. The pattern is <companyName>.<dataType>, with lower casing for all items. The data must follow this pattern to isolate it from other companies' cross-reference data. The table below shows some sample data. (Note: this data uses the 'ID' cross-reference tables; the same principles apply to the 'value' cross-referencing tables.)
Table.Field | Description | Sample Data
xref_AppType.appType | Application types | acme.erp, acme.portal, acme.assetmanagement
xref_AppInstance.appInstance | Application instances (each will have a corresponding application type) | acme.dynamics.ax, acme.dynamics.crm, acme.sharepoint, acme.maximo
xref_IDXRef.idXRef | Holds the cross-reference data types | acme.taxcode, acme.country
xref_IDXRefData.CommonID | Holds each cross-reference type value used by the canonical schemas | acme.vatcode.exmpt, acme.vatcode.std, acme.country.usa, acme.country.uk
xref_IDXRefData.AppID | Holds the value for each application instance and each xref type | GBP, USD
SQL Scripts
The data to be stored in the BizTalkMgmtDb xref tables is managed by SQL scripts stored in a database project in the Visual Studio solution.
File(s) | Description
Build.cmd | A sqlcmd script to deploy data by running the SQL scripts below. (This can be run as part of the MSBuild process.)
acme.purgexref.sql | SQL script to clear acme.* data from the xref tables. As such, this will not impact data for any other company.
acme.applicationInstances.sql | SQL script to insert application type and application instance data.
acme.vatcode.sql, acme.country.sql, etc. | A separate SQL script inserts each cross-reference data type and the application-specific values for these types.
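To make the purge step concrete, here is a minimal sketch of what acme.purgexref.sql could look like. This is an illustration, not the author's actual script: the table and column names are taken from the tables above, but the real BizTalkMgmtDb schema (including any foreign-key ordering between these tables) should be verified before running anything like this.

-- acme.purgexref.sql (illustrative sketch): clear only acme.* data,
-- leaving other companies' cross-reference data untouched.
USE BizTalkMgmtDb;

DELETE FROM xref_IDXRefData  WHERE CommonID    LIKE 'acme.%';
DELETE FROM xref_IDXRef      WHERE idXRef      LIKE 'acme.%';
DELETE FROM xref_AppInstance WHERE appInstance LIKE 'acme.%';
DELETE FROM xref_AppType     WHERE appType     LIKE 'acme.%';

Because every key is namespace-prefixed, the LIKE 'acme.%' predicate is what keeps one company's deployment from disturbing another's.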

    Read the article

  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
Hopefully nothing. But if you have to do more than simple XCopy deployment and you need to support updates, upgrades and perhaps side-by-side scenarios, there is no way around MSI. You can create MSI files with a Visual Studio Setup project, which is severely limited, or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning in German. It is funny to always use the long name when I talk about deployment possibilities. Alternatively, you can buy commercial tools which help you author MSI files, but I am not sure how good they are. Given enough pain with existing solutions, you can also learn the MSI APIs and create your own packaging solution. If I were you, I would use either a commercial visual tool for easy deployments or the free Windows Installer Toolset. Once you know the WIX schema you can create well-formed WIX xml files easily with any editor. Then you can “compile” your MSI package from the .wxs files. Recently I had the “pleasure” to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic, but after several months of digging into arcane MSI issues I can safely say that there should be an easier way to install and update files than exists today. I am not alone with this statement, as John Robbins (creator of the cool tool Paraffin) states: “.. It's a brittle and scary API in Windows …”. To help other people struggling with installation issues, I present the advice I (and others) found useful, and what will happen if you ignore this advice.
What is an MSI file?
An MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install, and MSI controls how to get your stuff onto or off your machine. Your “stuff” usually consists of files, registry keys, shortcuts and environment variables. Therefore the most important tables are the File, Registry, Environment and Shortcut tables, which define what will be un/installed. The key to mastering MSI is that every resource (file, registry key, …) is associated with an MSI component. The actual payload consists of compressed files in the CAB format, which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it. To examine MSI files you need Orca, a free MSI editor provided by Microsoft. There is also another free editor called Super Orca, which supports diffs between MSIs and does not lock the MSI files. But since Orca comes with a shell extension, I tend to use only Orca, because it is so easy to right-click on an MSI file and open it with this tool.
How Do I Install It?
Double-click it. This works for fresh installations as well as major upgrades. Updates need to be installed via the command line:
msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus
This tells the installer to reinstall all already installed features (new features will NOT be installed). The REINSTALLMODE letters force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things go really wrong and you want to overwrite everything unconditionally, use REINSTALLMODE=vamus.
How To Enable MSI Logs?
You can download an MSI from Microsoft which installs some registry keys to enable full MSI logging.
The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively, you can add to your msiexec command line the option msiexec …. /l*vx <LogFileName>. Personally, I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x, which is documented in the msiexec help, but I still find this behavior unintuitive.
What are MSI components?
The whole MSI logic is bound to the concept of MSI components. Nearly every MSI table has a Component column which binds an installable resource to a component. (The original article shows screenshots of the FeatureComponents and Component tables of an example MSI here.) The Feature table defines the feature hierarchy. To find out what belongs to a feature, you need to look at the FeatureComponents table, where for each feature the components are listed that will be installed when the feature is installed. The MSI components are defined in the Component table. This table has as its first column the component name and as its second column the component id, which is a GUID. All resources you want to install belong to an MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look, for example, at the File table, you see that every file belongs to a component, and the same is true for all other tables which install resources. The Component table is the glue between all other tables which contain the resources you want to install. So far, so easy. Why, then, is MSI so complex? Most MSI problems arise from the fact that you violated an MSI component rule in one way or another. When you install a feature, the reference count for all components belonging to this feature increases by one. If your component is installed by more than one feature, it gets a higher refcount. When you uninstall a feature, its refcount drops by one. Interesting things happen if the component reference count reaches zero: then all associated resources are deleted. That looks like a reasonable thing, and it is. What makes it complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer:
Rule 16: Follow Component Rules. Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package:
- Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies.
- Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components may, however, share a key path folder.
- Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies.
- Do not create components containing resources that will need to be installed into more than one directory on the user's system. The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories.
- Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component.
- Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut.
… And these rules do not even talk about component ids, update packages and upgrades, which you need to understand as well. Let's suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names. Both install the same file. What will happen when you uninstall MSI2? Hmm, the file should stay there. But the component names are different. Yes, and yes. But MSI does not use the component name as the key for the refcount. Instead, the ComponentId column of the Component table, which contains a GUID, is used as the identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs, the component with the id {100000….} has a refcount of two. After uninstallation of one MSI there is still a refcount of one, which drops to zero, just as expected, when we uninstall the last MSI. Then the file which was the same for both MSIs is deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI manages components, not the resources you installed. The resources associated with a component are deleted then, and only then, when the refcount of the component reaches zero.
The dependencies between features, components and resources can be described as relations, where m and k are numbers >= 1 and n can be 0. Inside an MSI the following relations are valid:
Feature 1 –> n Components
Component 1 –> m Features
Component 1 –> k Resources
These relations express that one feature can install several components, and features can share components between them. Every (meaningful) component will install at least one resource, which means that its name (its primary key, to stay in database speak) occurs in some other table in the Component column as a value which installs some resource. Let's make it clear with an example. We want to install, with the feature MainFeature, some files, a registry key and a shortcut. We can then create components Comp1–Comp3 which are referenced by the resources defined in the corresponding tables:
Feature | Component | Registry | File | Shortcuts
MainFeature | Comp1 | RegistryKey1 | |
MainFeature | Comp2 | | File.txt |
MainFeature | Comp3 | | File2.txt | Shortcut to File2.txt
It is illegal for the same resource to be part of more than one component, since this would break the refcount mechanism. Let's illustrate this:
Feature | ComponentId | Resource | Reference Count
Feature1 | {1000-…} | File1.txt | 1
Feature2 | {2000-….} | File1.txt | 1
The installation part works well, but what happens when you uninstall Feature2? Component {20000…} gets a refcount of zero, whereupon MSI deletes all resources belonging to this component. In this case File1.txt is deleted. But Feature1 still has another component, {10000…}, with a refcount of one, which means that the file was deleted too early. You have just ruined your installation. To fix it, you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts. The vigilant reader might have noticed that there is more in the Component table.
Besides its name and GUID, it also has an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect whether the component is already installed. This becomes important when you repair or uninstall a component. To find out if the component is already installed, MSI checks whether the registry key or file referenced by the KeyPath property exists. When it does not exist, it assumes that it was either already uninstalled (which can lead to problems during uninstall) or that it is already installed and all is fine. Why is this detail so important? Let's put all files into one component. The KeyPath will then be one of the files of your component, used to check whether the component is installed or not. When your installation becomes corrupt because a file was deleted, you cannot repair it with the Repair button under Add/Remove Programs, because MSI checks the component's integrity via the resource referenced by its KeyPath. As long as you did not delete the KeyPath file, MSI thinks all resources belonging to your component are installed and never executes any repair action. You get even more trouble when you try to remove files from your super-component containing all the files during an upgrade (you cannot remove files during an update). The only way out, and therefore the best practice, is to assign every resource you want to install its own component. This ensures painless updates and repairs, and you have much less effort removing specific files during an upgrade. In effect you get this best-practice relation:
Feature 1 –> n Components
Component 1 –> 1 Resource
MSI Component Rules
Rule 1 – One component per resource
Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component, which never changes between versions as long as the install location is the same.
Penalty: If you add more than one resource to a component, you will break the repair capability of MSI, because the KeyPath is used to check whether the component needs repair.
MSI | ComponentId | Files
MSI 1.0 | {1000} | File1–File5
MSI 2.0 | {2000} | File2–File5
You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files, you create a new component and add them there. MSI will delete all files when the component refcount of {1000} drops to zero. The files you want to keep are added to the new component {2000}. That works if your upgrade uninstalls the old MSI first: this causes the refcount of all previously installed components to reach zero, which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade: first install the new MSI and then remove the old one. If you choose this upgrade path, then you will lose File1–File5 after your upgrade, and not only File1 as intended by your new component design.
Rule 2 – Only add, never remove, resources from a component
If you followed Rule 1, you will not need Rule 2. You can add more resources to a component in a patch. That is OK. But you can never remove anything from it. There are tricky ways around that, but I do not want to encourage bad component design.
Penalty: Let's assume you have two MSI files which install one file under the same component:
MSI1 | MSI2
{1000} – ComponentId | {1000} – ComponentId
File1.txt | File2.txt
When you install and uninstall both MSIs, you will end up with an installation where either File1 or File2 is left behind. Why? It seems that MSI does not store the resources associated with each component in its internal database. Instead, Windows simply queries the MSI that is currently being uninstalled for all resources belonging to this component. Since it finds only one file and not two, it uninstalls only one file. That is the main reason why you can never remove resources from a component!
Rule 3 – Never remove a component from an update MSI
This is the same as changing the GUID of a component by accident in your new update package: the resulting update package will not contain all components from the previously installed package.
Penalty: When you remove a component from a feature, MSI sets the feature state during the update to Advertised and logs a warning message into its log file if you have enabled MSI logging:
SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx, but is not present in the Component table.  Removal of components from a feature is not supported!
MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported
Advertised means that MSI treats all components of this feature as not installed. As a consequence, nothing will be removed during uninstall, since it is not installed! This is not only bad because uninstall no longer works; this feature will also not get the required patches. All other features which followed the component versioning rules for update packages will be updated, but the one faulty feature will not. This results in very hard-to-find bugs where an update is only partially successful. Things got better with Windows Installer 4.5, but you cannot rely on nobody using an older installer. It is a good idea to add MSIENFORCEUPGRADECOMPONENTRULES=1 to your update msiexec call, which will abort the installation if you violated this rule.
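To see Rule 1 in Windows Installer Toolset terms, here is a minimal authoring sketch of one-resource-per-component. The element names follow the WiX 3 schema; the GUIDs, ids and file names are placeholders, not taken from any real package:

<!-- Excerpt sketch: each file gets its own component with a stable GUID
     and is its own KeyPath, so repair and upgrade behave per Rule 1. -->
<Directory Id="INSTALLDIR" Name="MyApp">
  <Component Id="Comp_File1" Guid="{11111111-1111-1111-1111-111111111111}">
    <File Id="File1" Source="File1.txt" KeyPath="yes" />
  </Component>
  <Component Id="Comp_File2" Guid="{22222222-2222-2222-2222-222222222222}">
    <File Id="File2" Source="File2.txt" KeyPath="yes" />
  </Component>
</Directory>
<Feature Id="MainFeature" Title="Main Feature" Level="1">
  <ComponentRef Id="Comp_File1" />
  <ComponentRef Id="Comp_File2" />
</Feature>

Removing File1 in version 2.0 then means dropping only Comp_File1's payload while keeping its component entry, so no other component's refcount is disturbed.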

    Read the article

  • Principles of Big Data By Jules J Berman, O'Reilly Media Book Review

    - by Compudicted
Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/11/04/principles-of-big-data-by-jules-j-berman-orsquoreilly-media.aspx A fantastic book! It must become part, if it is not yet, of the fundamentals of Big Data as a field of science. I highly recommend it to those who are in the Big Data practice. I confess this book is one of my best reads this year, and for a number of reasons: the book is full of wisdom, intimate insight, historical facts and real-life examples of how Big Data projects get conceived, operate and, sadly, sometimes die. But not only that: most importantly, the book is filled with valuable advice and an accurate, even overwhelming (in the positive sense), amount of references, and the author does not even stop there. There are numerous technical excerpts, links and examples that let you quickly accomplish many daunting tasks, or that make you aware of what you need to perform as a data practitioner (excuse my use of the word practitioner; I just did not find a better substitute for referring to all who face Big Data). Be aware that Jules Berman's background is in medicine, so naturally this book discusses that subject a lot, as it is clearly very dear to the author's heart. This does not make the book any less significant, however; quite the opposite. I trust that if there is an area in science or practice where the biggest benefits can be reaped from Big Data projects, it is indeed medical science. Let's make cancer history! On a personal note, as a database and BI professional it has helped me to understand better the motives behind Big Data initiatives, their underwater rivers and high-altitude winds that divert or propel them forward. Additionally, I was impressed by the depth and number of mining algorithms covered in it. I must tell you this made me very curious, and I am tempted to find out more about these indispensable attributes of Big Data, so I will surely be stretching my wallet to acquire several books that go into more depth on the most popular of them. My favourite parts of the book? Well, all of them, actually, but especially chapter 9, Analysis; it is just very close to my heart. The real reason is that it let me see what I do with data from a different angle. Next come the "Special Considerations"; the two are just logical companions. The writing language of this book is accessible for all levels, and I had no technical problem reading it in ebook format on my 8” tablet or on a large-screen monitor. If I were asked to say at least something negative, I would state that the book's first part initially reads like academic material, with the style relaxing as the book progresses. I admit I am impressed with Jules' ability to use several programming languages and OSS tools, bravo! And I agree, it is not too, too hard to grasp at least the principles of a modern programming language, which seems to have become a de facto item of knowledge for any modern human being. So grab a copy of this book, read it end to end, and shield yourself from making mistakes at any stage of your Big Data initiative; by the way, this book also helps you build better future Big Data projects. Disclaimer: I received a free electronic copy of this book as part of the O'Reilly Blogger Program.

    Read the article

  • Java Resources for Windows Azure

    - by BuckWoody
Windows Azure is a Platform as a Service – a PaaS – that runs code you write. That code doesn't just mean the languages on the .NET platform – you can run code from multiple languages, including Java. In fact, you can develop for Windows and SQL Azure using not only Visual Studio but the Eclipse Integrated Development Environment (IDE) as well. Although not an exhaustive list, here are several links that deal with Java and Windows Azure:
Windows Azure Java Development Center – http://www.windowsazure.com/en-us/develop/java/
Java Development Guidance – http://msdn.microsoft.com/en-us/library/hh690943(VS.103).aspx
Running a Java Environment on Windows Azure – http://blogs.technet.com/b/port25/archive/2010/10/28/running-a-java-environment-on-windows-azure.aspx
Run Java with Jetty in Windows Azure – http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx
Using the plugin for Eclipse – http://blogs.msdn.com/b/craig/archive/2011/03/22/new-plugin-for-eclipse-to-get-java-developers-off-the-ground-with-windows-azure.aspx
Run Java with GlassFish in Windows Azure – http://blogs.msdn.com/b/dachou/archive/2011/01/17/run-java-with-glassfish-in-windows-azure.aspx
Improving experience for Java developers with Windows Azure – http://blogs.msdn.com/b/interoperability/archive/2011/02/23/improving-experience-for-java-developers-with-windows-azure.aspx
Java Access to SQL Azure via the JDBC Driver for SQL Server – http://blogs.msdn.com/b/brian_swan/archive/2011/03/29/java-access-to-sql-azure-via-the-jdbc-driver-for-sql-server.aspx
How to Get Started with Java, Tomcat on Windows Azure – http://blogs.msdn.com/b/usisvde/archive/2011/03/04/how-to-get-started-with-java-tomcat-on-windows-azure.aspx
Deploying Java Applications in Azure – http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx
Using the Windows Azure Storage Explorer in Eclipse – http://blogs.msdn.com/b/brian_swan/archive/2011/01/11/using-the-windows-azure-storage-explorer-in-eclipse.aspx
Windows Azure Tomcat Solution Accelerator – http://archive.msdn.microsoft.com/winazuretomcat
Deploying a Java application to Windows Azure with Command-line Ant – http://java.interoperabilitybridges.com/articles/deploying-a-java-application-to-windows-azure-with-command-line-ant
Video: Open in the Cloud: Windows Azure and Java – http://channel9.msdn.com/Events/PDC/PDC10/CS10
AzureRunMe – http://azurerunme.codeplex.com/
Windows Azure SDK for Java – http://www.interoperabilitybridges.com/projects/windows-azure-sdk-for-java
AppFabric SDK for Java – http://www.interoperabilitybridges.com/projects/azure-java-sdk-for-net-services
Information Cards for Java – http://www.interoperabilitybridges.com/projects/information-card-for-java
Apache Stonehenge – http://www.interoperabilitybridges.com/projects/apache-stonehenge
Channel 9 Case Study on Java and Windows Azure – http://www.microsoft.com/casestudies/Windows-Azure/Gigaspaces/Solution-Provider-Streamlines-Java-Application-Deployment-in-the-Cloud/400000000081
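As a small taste of the JDBC route listed above, a minimal connection sketch might look like the following. This is an assumption-laden illustration, not taken from the linked posts: the server, database, user and password are placeholders, and the URL options follow the Microsoft JDBC Driver for SQL Server conventions (sqljdbc4.jar on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqlAzureSample {
    public static void main(String[] args) throws Exception {
        // Placeholder server/database/credentials - replace with your SQL Azure values.
        String url = "jdbc:sqlserver://myserver.database.windows.net:1433;"
                   + "database=mydb;user=myuser@myserver;password=secret;"
                   + "encrypt=true;hostNameInCertificate=*.database.windows.net;";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));  // sanity check the round trip
            }
        }
    }
}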

    Read the article

  • Developing Spring Portlet for use inside Weblogic Portal / Webcenter Portal

    - by Murali Veligeti
We need to understand the main difference between portlet workflow and servlet workflow: a request to a portlet can have two distinct phases, 1) the action phase and 2) the render phase. The action phase is executed only once and is where any 'backend' changes or actions occur, such as making changes in a database. The render phase then produces what is displayed to the user each time the display is refreshed. The critical point here is that for a single overall request, the action phase is executed only once, but the render phase may be executed multiple times. This provides a clean separation between the activities that modify the persistent state of your system and the activities that generate what is displayed to the user.
The dual phases of portlet requests are one of the real strengths of the JSR-168 specification. For example, dynamic search results can be updated routinely on the display without the user explicitly re-running the search. Most other portlet MVC frameworks attempt to completely hide the two phases from the developer and make it look as much like traditional servlet development as possible; we think this approach removes one of the main benefits of using portlets. So, the separation of the two phases is preserved throughout the Spring Portlet MVC framework. The primary manifestation of this approach is that where the servlet version of the MVC classes has one method that deals with the request, the portlet version has two methods that deal with the request: one for the action phase and one for the render phase. For example, where the servlet version of AbstractController has the handleRequestInternal(..) method, the portlet version of AbstractController has handleActionRequestInternal(..) and handleRenderRequestInternal(..) methods.
The Spring Portlet Framework is designed around a DispatcherPortlet that dispatches requests to handlers, with configurable handler mappings and view resolution, just as the DispatcherServlet in the Spring Web Framework does.
Developing portlet.xml
Let's start the sample development by creating the portlet.xml file in the /WebContent/WEB-INF/ folder as shown below:
<?xml version="1.0" encoding="UTF-8"?>
<portlet-app version="2.0"
    xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <portlet>
        <portlet-name>SpringPortletName</portlet-name>
        <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class>
        <supports>
            <mime-type>text/html</mime-type>
            <portlet-mode>view</portlet-mode>
        </supports>
        <portlet-info>
            <title>SpringPortlet</title>
        </portlet-info>
    </portlet>
</portlet-app>
DispatcherPortlet is responsible for handling every client request. When it receives a request, it finds out which Controller class should be used for handling the request, and then calls its handleActionRequest() or handleRenderRequest() method based on the request processing phase. The Controller class executes the business logic and returns a view name that should be used for rendering markup to the user. The DispatcherPortlet then forwards control to that view for actual markup generation. As you can see, DispatcherPortlet is the central dispatcher for use within Spring Portlet MVC Framework. Note that your portlet application can define more than one DispatcherPortlet.
If it does so, then each of these portlets operates in its own namespace, loading its own application context and handler mapping. The DispatcherPortlet is also responsible for loading the application context (the Spring configuration file) for this portlet. First, it tries to check the value of the configLocation portlet initialization parameter. If that parameter is not specified, it takes the portlet name (that is, the value of the <portlet-name> element), appends "-portlet.xml" to it, and tries to load that file from the /WEB-INF folder. In the portlet.xml file we did not specify the configLocation initialization parameter, so let's create the SpringPortletName-portlet.xml file in the next section.
Developing SpringPortletName-portlet.xml
Create the SpringPortletName-portlet.xml file in the /WebContent/WEB-INF folder of your application as shown below:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">
    <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/>
        <property name="prefix" value="/jsp/"/>
        <property name="suffix" value=".jsp"/>
    </bean>
    <bean id="pointManager" class="com.wlp.spring.bo.internal.PointManagerImpl">
        <property name="users">
            <list>
                <ref bean="point1"/>
                <ref bean="point2"/>
                <ref bean="point3"/>
                <ref bean="point4"/>
            </list>
        </property>
    </bean>
    <bean id="point1" class="com.wlp.spring.bean.User">
        <property name="name" value="Murali"/>
        <property name="points" value="6"/>
    </bean>
    <bean id="point2" class="com.wlp.spring.bean.User">
        <property name="name" value="Sai"/>
        <property name="points" value="13"/>
    </bean>
    <bean id="point3" class="com.wlp.spring.bean.User">
        <property name="name" value="Rama"/>
        <property name="points" value="43"/>
    </bean>
    <bean id="point4" class="com.wlp.spring.bean.User">
        <property name="name" value="Krishna"/>
        <property name="points" value="23"/>
    </bean>
    <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource">
        <property name="basename" value="messages"/>
    </bean>
    <bean name="/users.htm" id="userController" class="com.wlp.spring.controller.UserController">
        <property name="pointManager" ref="pointManager"/>
    </bean>
    <bean name="/pointincrease.htm" id="pointIncreaseController" class="com.wlp.spring.controller.IncreasePointsFormController">
        <property name="sessionForm" value="true"/>
        <property name="pointManager" ref="pointManager"/>
        <property name="commandName" value="pointIncrease"/>
        <property name="commandClass" value="com.wlp.spring.bean.PointIncrease"/>
        <property name="formView" value="pointincrease"/>
        <property name="successView" value="users"/>
    </bean>
    <bean id="parameterMappingInterceptor" class="org.springframework.web.portlet.handler.ParameterMappingInterceptor" />
    <bean id="portletModeParameterHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeParameterHandlerMapping">
        <property name="order" value="1" />
        <property name="interceptors">
            <list>
                <ref bean="parameterMappingInterceptor" />
            </list>
        </property>
        <property name="portletModeParameterMap">
            <map>
                <entry key="view">
                    <map>
                        <entry key="pointincrease">
                            <ref bean="pointIncreaseController" />
                        </entry>
                        <entry key="users">
                            <ref bean="userController" />
                        </entry>
                    </map>
                </entry>
            </map>
        </property>
    </bean>
    <bean id="portletModeHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeHandlerMapping">
        <property name="order" value="2" />
        <property name="portletModeMap">
            <map>
                <entry key="view">
                    <ref bean="userController" />
                </entry>
            </map>
        </property>
    </bean>
</beans>
The SpringPortletName-portlet.xml file is the application context file for your MVC portlet. Two bean definitions deserve special attention:
userController. The userController bean definition points to the com.wlp.spring.controller.UserController class, which will handle View mode requests.
portletModeHandlerMapping. As we discussed in the last section, whenever DispatcherPortlet gets a client request, it tries to find a suitable Controller class for handling that request. That is where PortletModeHandlerMapping comes into the picture. The PortletModeHandlerMapping class is a simple implementation of the HandlerMapping interface and is used by DispatcherPortlet to find a suitable Controller for every request. It uses the portlet mode of the current request to find the Controller class to use for handling the request. The portletModeMap property of the portletModeHandlerMapping bean is the place where we map the portlet mode name to the Controller class. In the sample configuration, userController is mapped to View mode requests. (The portletModeParameterHandlerMapping bean is consulted first, at order 1, and additionally uses a request parameter to choose between userController and pointIncreaseController within View mode.)
Developing UserController.java
In the preceding section, you learned that the userController bean is responsible for handling all View mode requests. Your next step is to create the UserController.java class as shown below:
import java.util.HashMap;
import java.util.Map;

import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

import org.springframework.web.portlet.ModelAndView;
import org.springframework.web.portlet.mvc.AbstractController;

import com.wlp.spring.bo.PointManager; // interface package assumed from the bean definitions above

public class UserController extends AbstractController {

    private PointManager pointManager;

    protected void handleActionRequestInternal(ActionRequest request, ActionResponse response)
            throws Exception {
        // No action processing is needed for this portlet.
    }

    protected ModelAndView handleRenderRequestInternal(RenderRequest request, RenderResponse response)
            throws Exception {
        // Build the model and hand it to the 'users' view.
        String now = (new java.util.Date()).toString();
        Map<String, Object> myModel = new HashMap<String, Object>();
        myModel.put("now", now);
        myModel.put("users", this.pointManager.getUsers());
        return new ModelAndView("users", "model", myModel);
    }

    public void setPointManager(PointManager pointManager) {
        this.pointManager = pointManager;
    }
}
Every controller class in Spring Portlet MVC Framework must implement the org.springframework.web.portlet.mvc.Controller interface, directly or indirectly. To make things easier, Spring Framework provides the AbstractController class, a basic implementation of the Controller interface. As a developer, you should always extend your controller from either AbstractController or one of its more specific subclasses. Any implementation of the Controller interface should be reusable, thread-safe, and capable of handling multiple requests throughout the lifecycle of the portlet. In the sample code, we create the UserController class by extending AbstractController. Because we don't want to do any action processing in this portlet, handleActionRequestInternal() is left empty (note that AbstractController exposes the two *Internal methods, as described earlier, so those are what we override). The only thing the portlet needs to do is render the markup of users.jsp when it receives a render request; to do that, handleRenderRequestInternal() builds the model and returns a ModelAndView with the view name users.
Developing web.xml
According to Portlet Specification 1.0, every portlet application is also a Servlet Specification 2.3-compliant Web application, and it needs a Web application deployment descriptor (that is, web.xml).
Let’s create the web.xml file in the /WEB-INF folder as shown below. Follow these steps:

1. Open the existing web.xml file located at /WebContent/WEB-INF/web.xml.
2. Replace the contents of this file with the code shown below:

<servlet>
  <servlet-name>ViewRendererServlet</servlet-name>
  <servlet-class>org.springframework.web.servlet.ViewRendererServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>ViewRendererServlet</servlet-name>
  <url-pattern>/WEB-INF/servlet/view</url-pattern>
</servlet-mapping>
<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>/WEB-INF/applicationContext.xml</param-value>
</context-param>
<listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

The web.xml file for the sample portlet declares two things:

ViewRendererServlet. The ViewRendererServlet is the bridge servlet for portlet support. During the render phase, DispatcherPortlet wraps the PortletRequest into a ServletRequest and forwards control to ViewRendererServlet for actual rendering. This allows Spring Portlet MVC Framework to use the same View infrastructure as its servlet counterpart, the Spring Web MVC Framework.

ContextLoaderListener. The ContextLoaderListener class takes care of loading the Web application context at Web application startup. The Web application context is shared by all the portlets in the portlet application. In the case of duplicate bean definitions, the bean definition in the portlet application context takes precedence over the one in the Web application context. The ContextLoader class reads the value of the contextConfigLocation Web context parameter to find the location of the context file. If the contextConfigLocation parameter is not set, it uses the default value, /WEB-INF/applicationContext.xml, to load the context file.

The Portlet Controller interface requires two methods that handle the two phases of a portlet request: the action request and the render request. The action phase should be capable of handling an action request, and the render phase should be capable of handling a render request and returning an appropriate model and view. While the Controller interface is quite abstract, Spring Portlet MVC offers many controllers that already contain much of the functionality you might need; most of these are very similar to controllers from Spring Web MVC. The Controller interface just defines the most common functionality required of every controller: handling an action request, handling a render request, and returning a model and a view.

How rendering works

As you know, when the user tries to access a page with the PointSystemPortletMVC portlet on it, or performs some action on any other portlet on that page, or refreshes that page, a render request is sent to the PointSystemPortletMVC portlet. In the sample code, because DispatcherPortlet is the main portlet class, WebLogic Portal / WebCenter Portal calls its render() method and then the following sequence of events occurs: The render() method of DispatcherPortlet calls the doDispatch() method, which in turn calls the doRenderService() method. After the doRenderService() method gets control, it first determines the locale of the request by calling the PortletRequest.getLocale() method. This locale is used for all locale-related decisions, such as which resource bundle should be loaded or which JSP should be displayed to the user.
After that, the doRenderService() method iterates through all the HandlerMapping classes configured for this portlet, calling their getHandler() method to identify the appropriate Controller for handling the request. The PortletModeHandlerMapping class reads the value of the current portlet mode and, based on that, determines the Controller class that should be used to handle the request. In the sample code, userController is configured to handle View mode requests, so the PortletModeHandlerMapping class returns the userController object. After the userController object is returned, the doRenderService() method calls its handleRenderRequestInternal() method. The implementation of handleRenderRequestInternal() in UserController.java is very simple: it builds a model containing the current time and the list of users, creates an instance of ModelAndView with the view name users, and returns it to DispatcherPortlet. After control returns to doRenderService(), the next task is to figure out how to render the users view. For that, DispatcherPortlet iterates through all the ViewResolvers configured in your portlet application, calling their resolveViewName() method. In the sample code we have configured only one ViewResolver, InternalResourceViewResolver. When its resolveViewName() method is called with the view name, it adds /jsp/ as a prefix and .jsp as a suffix, and checks whether /jsp/users.jsp exists. If it does exist, it returns a JstlView object wrapping users.jsp. After control returns to the doRenderService() method, it creates a PortletRequestDispatcher that points to /WEB-INF/servlet/view – that is, ViewRendererServlet. Then it sets the JstlView object in the request and dispatches the request to ViewRendererServlet. After ViewRendererServlet gets control, it reads the JstlView object from the request attribute, creates another RequestDispatcher pointing to the /jsp/users.jsp URL, and passes control to it for actual markup generation. The markup generated by users.jsp is returned to the user. At this point, you may question the need for ViewRendererServlet: why can't DispatcherPortlet forward control directly to the JSP? Adding ViewRendererServlet in between allows Spring Portlet MVC Framework to reuse the existing View infrastructure. You may appreciate this more when we discuss how easy it is to integrate the Apache Tiles Framework with your Spring Portlet MVC Framework. The attached project SpringPortlet.zip should be used to import the project into your OEPE workspace. SpringPortlet_Jars.zip contains the jar files required by the application. The project is written against Spring 2.5. The same JSR 168 portlet should work on WebCenter Portal as well. Downloads: Download the WebLogic Portal project which contains the Spring portlet. Download the Spring jars. In addition to the above, you need to download spring.jar (Spring 2.5).
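To make the two-phase contract concrete, here is a minimal sketch of a controller that implements the Controller interface directly instead of extending AbstractController. The class name, the name request parameter, and the greeting view are hypothetical; only the Spring Portlet MVC and portlet API types are from the framework.

import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

import org.springframework.web.portlet.ModelAndView;
import org.springframework.web.portlet.mvc.Controller;

// Hypothetical controller illustrating the two-phase portlet request contract.
public class GreetingController implements Controller {

    // Action phase: runs once per user action, before the render phase.
    public void handleActionRequest(ActionRequest request, ActionResponse response) throws Exception {
        String name = request.getParameter("name");
        if (name != null) {
            // Hand the value over to the render phase as a render parameter.
            response.setRenderParameter("name", name);
        }
    }

    // Render phase: may run many times (page refreshes, actions on other portlets).
    public ModelAndView handleRenderRequest(RenderRequest request, RenderResponse response) throws Exception {
        String name = request.getParameter("name");
        return new ModelAndView("greeting", "name", name == null ? "world" : name);
    }
}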

    Read the article

  • Global Cache CR Requested But Current Block Received

    - by Liu Maclean(???)
    ????????«MINSCN?Cache Fusion Read Consistent» ????,???????????? ??????????????????: SQL> select * from V$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production PL/SQL Release 11.2.0.3.0 - Production CORE 11.2.0.3.0 Production TNS for Linux: Version 11.2.0.3.0 - Production NLSRTL Version 11.2.0.3.0 - Production SQL> select count(*) from gv$instance; COUNT(*) ---------- 2 SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com ?11gR2 2??RAC??????????status???XG,????Xcurrent block???INSTANCE 2?hold?,?????INSTANCE 1?????????,?????: SQL> select * from test; ID ---------- 1 2 SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from test; DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) ------------------------------------ ------------------------------------ 89233 1 89233 1 SQL> alter system flush buffer_cache; System altered. INSTANCE 1 Session A: SQL> update test set id=id+1 where id=1; 1 row updated. INSTANCE 1 Session B: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1755287 SQL> oradebug setmypid; Statement processed. SQL> oradebug dump gc_elements 255; Statement processed. SQL> oradebug tracefile_name; /s01/orabase/diag/rdbms/vprod/VPROD1/trace/VPROD1_ora_19111.trc GLOBAL CACHE ELEMENT DUMP (address: 0xa4ff3080): id1: 0x15c91 id2: 0x1 pkey: OBJ#76896 block: (1/89233) lock: X rls: 0x0 acq: 0x0 latch: 3 flags: 0x20 fair: 0 recovery: 0 fpin: 'kdswh11: kdst_fetch' bscn: 0x0.146e20 bctx: (nil) write: 0 scan: 0x0 lcp: (nil) lnk: [NULL] lch: [0xa9f6a6f8,0xa9f6a6f8] seq: 32 hist: 58 145:0 118 66 144:0 192 352 197 48 121 113 424 180 58 LIST OF BUFFERS LINKED TO THIS GLOBAL CACHE ELEMENT: flg: 0x02000001 lflg: 0x1 state: XCURRENT tsn: 0 tsh: 2 addr: 0xa9f6a5c8 obj: 76896 cls: DATA bscn: 0x0.1ac898 BH (0xa9f6a5c8) file#: 1 rdba: 0x00415c91 (1/89233) class: 1 ba: 0xa9e56000 set: 5 pool: 3 bsz: 8192 bsi: 0 sflg: 3 pwc: 0,15 dbwrid: 0 obj: 76896 objn: 76896 tsn: 0 afn: 1 hint: f hash: [0x91f4e970,0xbae9d5b8] lru: [0x91f58848,0xa9f6a828] lru-flags: debug_dump obj-flags: object_ckpt_list ckptq: [0x9df6d1d8,0xa9f6a740] fileq: [0xa2ece670,0xbdf4ed68] objq: [0xb4964e00,0xb4964e00] objaq: [0xb4964de0,0xb4964de0] st: XCURRENT md: NULL fpin: 'kdswh11: kdst_fetch' tch: 2 le: 0xa4ff3080 flags: buffer_dirty redo_since_read LRBA: [0x19.5671.0] LSCN: [0x0.1ac898] HSCN: [0x0.1ac898] HSUB: [1] buffer tsn: 0 rdba: 0x00415c91 (1/89233) scn: 0x0000.001ac898 seq: 0x01 flg: 0x00 tail: 0xc8980601 frmt: 0x02 chkval: 0x0000 type: 0x06=trans data ??????block: (1/89233)?GLOBAL CACHE ELEMENT DUMP?LOCK????X ??XG , ??????Current Block????Instance??modify???,????????????? ????Instance 2 ????: Instance 2 Session C: SQL> update test set id=id+1 where id=2; 1 row updated. Instance 2 Session D: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1756658 SQL> oradebug setmypid; Statement processed. SQL> oradebug dump gc_elements 255; Statement processed. 
SQL> oradebug tracefile_name; /s01/orabase/diag/rdbms/vprod/VPROD2/trace/VPROD2_ora_13038.trc GLOBAL CACHE ELEMENT DUMP (address: 0x89fb25a0): id1: 0x15c91 id2: 0x1 pkey: OBJ#76896 block: (1/89233) lock: XG rls: 0x0 acq: 0x0 latch: 3 flags: 0x20 fair: 0 recovery: 0 fpin: 'kduwh01: kdusru' bscn: 0x0.1acdf3 bctx: (nil) write: 0 scan: 0x0 lcp: (nil) lnk: [NULL] lch: [0x96f4cf80,0x96f4cf80] seq: 61 hist: 324 21 143:0 19 16 352 329 144:6 14 7 352 197 LIST OF BUFFERS LINKED TO THIS GLOBAL CACHE ELEMENT: flg: 0x0a000001 state: XCURRENT tsn: 0 tsh: 1 addr: 0x96f4ce50 obj: 76896 cls: DATA bscn: 0x0.1acdf6 BH (0x96f4ce50) file#: 1 rdba: 0x00415c91 (1/89233) class: 1 ba: 0x96bd4000 set: 5 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15 dbwrid: 0 obj: 76896 objn: 76896 tsn: 0 afn: 1 hint: f hash: [0x96ee1fe8,0xbae9d5b8] lru: [0x96f4d0b0,0x96f4cdc0] obj-flags: object_ckpt_list ckptq: [0xbdf519b8,0x96f4d5a8] fileq: [0xbdf519d8,0xbdf519d8] objq: [0xb4a47b90,0xb4a47b90] objaq: [0x96f4d0e8,0xb4a47b70] st: XCURRENT md: NULL fpin: 'kduwh01: kdusru' tch: 1 le: 0x89fb25a0 flags: buffer_dirty redo_since_read remote_transfered LRBA: [0x11.9e18.0] LSCN: [0x0.1acdf6] HSCN: [0x0.1acdf6] HSUB: [1] buffer tsn: 0 rdba: 0x00415c91 (1/89233) scn: 0x0000.001acdf6 seq: 0x01 flg: 0x00 tail: 0xcdf60601 frmt: 0x02 chkval: 0x0000 type: 0x06=trans data GCS CLIENT 0x89fb2618,6 resp[(nil),0x15c91.1] pkey 76896.0 grant 2 cvt 0 mdrole 0x42 st 0x100 lst 0x20 GRANTQ rl G0 master 1 owner 2 sid 0 remote[(nil),0] hist 0x94121c601163423c history 0x3c.0x4.0xd.0xb.0x1.0xc.0x7.0x9.0x14.0x1. cflag 0x0 sender 1 flags 0x0 replay# 0 abast (nil).x0.1 dbmap (nil) disk: 0x0000.00000000 write request: 0x0000.00000000 pi scn: 0x0000.00000000 sq[(nil),(nil)] msgseq 0x1 updseq 0x0 reqids[6,0,0] infop (nil) lockseq x2b8 pkey 76896.0 hv 93 [stat 0x0, 1->1, wm 32768, RMno 0, reminc 18, dom 0] kjga st 0x4, step 0.0.0, cinc 20, rmno 6, flags 0x0 lb 0, hb 0, myb 15250, drmb 15250, apifrz 0 ?Instance 2??????block: (1/89233)? GLOBAL CACHE ELEMENT Lock Convert?lock: XG ????GC_ELEMENTS DUMP???XCUR Cache Fusion?,???????X$ VIEW,??? X$LE X$KJBR X$KJBL, ???X$ VIEW???????????????????: INSTANCE 2 Session D: SELECT * FROM x$le WHERE le_addr IN (SELECT le_addr FROM x$bh WHERE obj IN (SELECT data_object_id FROM dba_objects WHERE owner = 'SYS' AND object_name = 'TEST') AND class = 1 AND state != 3); ADDR INDX INST_ID LE_ADDR LE_ID1 LE_ID2 ---------------- ---------- ---------- ---------------- ---------- ---------- LE_RLS LE_ACQ LE_FLAGS LE_MODE LE_WRITE LE_LOCAL LE_RECOVERY ---------- ---------- ---------- ---------- ---------- ---------- ----------- LE_BLKS LE_TIME LE_KJBL ---------- ---------- ---------------- 00007F94CA14CF60 7003 2 0000000089FB25A0 89233 1 0 0 32 2 0 1 0 1 0 0000000089FB2618 PCM Resource NAME?[ID1][ID2],[BL]???, ID1?ID2 ??blockno? fileno????, ??????????GC_elements dump?? id1: 0x15c91 id2: 0×1 pkey: OBJ#76896 block: (1/89233)?? ,?  kjblname ? 
kjbrname ??”[0x15c91][0x1],[BL]” ??: INSTANCE 2 Session D: SQL> set linesize 80 pagesize 1400 SQL> SELECT * 2 FROM x$kjbl l 3 WHERE l.kjblname LIKE '%[0x15c91][0x1],[BL]%'; ADDR INDX INST_ID KJBLLOCKP KJBLGRANT KJBLREQUE ---------------- ---------- ---------- ---------------- --------- --------- KJBLROLE KJBLRESP KJBLNAME ---------- ---------------- ------------------------------ KJBLNAME2 KJBLQUEUE ------------------------------ ---------- KJBLLOCKST KJBLWRITING ---------------------------------------------------------------- ----------- KJBLREQWRITE KJBLOWNER KJBLMASTER KJBLBLOCKED KJBLBLOCKER KJBLSID KJBLRDOMID ------------ ---------- ---------- ----------- ----------- ---------- ---------- KJBLPKEY ---------- 00007F94CA22A288 451 2 0000000089FB2618 KJUSEREX KJUSERNL 0 00 [0x15c91][0x1],[BL][ext 0x0,0x 89233,1,BL 0 GRANTED 0 0 1 0 0 0 0 0 76896 SQL> SELECT r.* FROM x$kjbr r WHERE r.kjbrname LIKE '%[0x15c91][0x1],[BL]%'; no rows selected Instance 1 session B: SQL> SELECT r.* FROM x$kjbr r WHERE r.kjbrname LIKE '%[0x15c91][0x1],[BL]%'; ADDR INDX INST_ID KJBRRESP KJBRGRANT KJBRNCVL ---------------- ---------- ---------- ---------------- --------- --------- KJBRROLE KJBRNAME KJBRMASTER KJBRGRANTQ ---------- ------------------------------ ---------- ---------------- KJBRCVTQ KJBRWRITER KJBRSID KJBRRDOMID KJBRPKEY ---------------- ---------------- ---------- ---------- ---------- 00007F801ACA68F8 1355 1 00000000B5A62AE0 KJUSEREX KJUSERNL 0 [0x15c91][0x1],[BL][ext 0x0,0x 0 00000000B48BB330 00 00 0 0 76896 ??????Instance 1???block: (1/89233),??????Instance 2 build cr block ????Instance 1, ?????????? ????? Instance 1? Foreground Process ? Instance 2?LMS??????RAC  TRACE: Instance 2: [oracle@vrh2 ~]$ ps -ef|grep ora_lms|grep -v grep oracle 23364 1 0 Apr29 ? 00:33:15 ora_lms0_VPROD2 SQL> oradebug setospid 23364 Oracle pid: 13, Unix process pid: 23364, image: [email protected] (LMS0) SQL> oradebug event 10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high; Statement processed. SQL> oradebug tracefile_name /s01/orabase/diag/rdbms/vprod/VPROD2/trace/VPROD2_lms0_23364.trc Instance 1 session B : SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 3 1756658 3 1756661 3 1755287 Instance 1 session A : SQL> alter session set events '10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high'; Session altered. SQL> select * from test; ID ---------- 2 2 SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 3 1761520 ?x$BH?????,???????Instance 1???build??CR block,????? 
TRACE ??: Instance 1 foreground Process: PARSING IN CURSOR #140336527348792 len=18 dep=0 uid=0 oct=3 lid=0 tim=1335939136125254 hv=1689401402 ad='b1a4c828' sqlid='c99yw1xkb4f1u' select * from test END OF STMT PARSE #140336527348792:c=2999,e=2860,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=1357081020,tim=1335939136125253 EXEC #140336527348792:c=0,e=40,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=1335939136125373 WAIT #140336527348792: nam='SQL*Net message to client' ela= 6 driver id=1650815232 #bytes=1 p3=0 obj#=0 tim=1335939136125420 *** 2012-05-02 02:12:16.125 kclscrs: req=0 block=1/89233 2012-05-02 02:12:16.125574 : kjbcro[0x15c91.1 76896.0][4] *** 2012-05-02 02:12:16.125 kclscrs: req=0 typ=nowait-abort *** 2012-05-02 02:12:16.125 kclscrs: bid=1:3:1:0:f:1e:0:0:10:0:0:0:1:2:4:1:20:0:0:0:c3:49:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:4:3:2:1:2:0:1c:0:4d:26:a3:52:0:0:0:0:c7:c:ca:62:c3:49:0:0:0:0:1:0:14:8e:47:76:1:2:dc:5:a9:fe:17:75:0:0:0:0:0:0:0:0:0:0:0:0:99:ed:0:0:0:0:0:0:10:0:0:0 2012-05-02 02:12:16.125718 : kjbcro[0x15c91.1 76896.0][4] 2012-05-02 02:12:16.125751 : GSIPC:GMBQ: buff 0xba0ee018, queue 0xbb79a7b8, pool 0x60013fa0, freeq 0, nxt 0xbb79a7b8, prv 0xbb79a7b8 2012-05-02 02:12:16.125780 : kjbsentscn[0x0.1ae0f0][to 2] 2012-05-02 02:12:16.125806 : GSIPC:SENDM: send msg 0xba0ee088 dest x20001 seq 177740 type 36 tkts xff0000 mlen x1680198 2012-05-02 02:12:16.125918 : kjbmscr(0x15c91.1)reqid=0x8(req 0xa4ff30f8)(rinst 1)hldr 2(infosz 200)(lseq x2b8) 2012-05-02 02:12:16.126959 : GSIPC:KSXPCB: msg 0xba0ee088 status 30, type 36, dest 2, rcvr 1 *** 2012-05-02 02:12:16.127 kclwcrs: wait=0 tm=1233 *** 2012-05-02 02:12:16.127 kclwcrs: got 1 blocks from ksxprcv WAIT #140336527348792: nam='gc cr block 2-way' ela= 1233 p1=1 p2=89233 p3=1 obj#=76896 tim=1335939136127199 2012-05-02 02:12:16.127272 : kjbcrcomplete[0x15c91.1 76896.0][0] 2012-05-02 02:12:16.127309 : kjbrcvdscn[0x0.1ae0f0][from 2][idx 2012-05-02 02:12:16.127329 : kjbrcvdscn[no bscn <= rscn 0x0.1ae0f0][from 2] ???? kjbcro[0x15c91.1 76896.0][4] kjbsentscn[0x0.1ae0f0][to 2] ?Instance 2??SCN=1ae0f0=1761520? block: (1/89233),???’gc cr block 2-way’ ??,?????????CR block? 
Instance 2 LMS TRACE 2012-05-02 02:12:15.634057 : GSIPC:RCVD: ksxp msg 0x7f16e1598588 sndr 1 seq 0.177740 type 36 tkts 0 2012-05-02 02:12:15.634094 : GSIPC:RCVD: watq msg 0x7f16e1598588 sndr 1, seq 177740, type 36, tkts 0 2012-05-02 02:12:15.634108 : GSIPC:TKT: collect msg 0x7f16e1598588 from 1 for rcvr -1, tickets 0 2012-05-02 02:12:15.634162 : kjbrcvdscn[0x0.1ae0f0][from 1][idx 2012-05-02 02:12:15.634186 : kjbrcvdscn[no bscn1, wm 32768, RMno 0, reminc 18, dom 0] kjga st 0x4, step 0.0.0, cinc 20, rmno 6, flags 0x0 lb 0, hb 0, myb 15250, drmb 15250, apifrz 0 GCS CLIENT END 2012-05-02 02:12:15.635211 : kjbdowncvt[0x15c91.1 76896.0][1][options x0] 2012-05-02 02:12:15.635230 : GSIPC:AMBUF: rcv buff 0x7f16e1c56420, pool rcvbuf, rqlen 1103 2012-05-02 02:12:15.635308 : GSIPC:GPBMSG: new bmsg 0x7f16e1c56490 mb 0x7f16e1c56420 msg 0x7f16e1c564b0 mlen 152 dest x101 flushsz -1 2012-05-02 02:12:15.635334 : kjbmslset(0x15c91.1)) seq 0x4 reqid=0x6 (shadow 0xb48bb330.xb)(rsn 2)(mas@1) 2012-05-02 02:12:15.635355 : GSIPC:SPBMSG: send bmsg 0x7f16e1c56490 blen 184 msg 0x7f16e1c564b0 mtype 33 attr|dest x30101 bsz|fsz x1ffff 2012-05-02 02:12:15.635377 : GSIPC:SNDQ: enq msg 0x7f16e1c56490, type 65521 seq 118669, inst 1, receiver 1, queued 1 *** 2012-05-02 02:12:15.635 kclccctx: cleanup copy 0x7f16e1d94798 2012-05-02 02:12:15.635479 : [kjmpmsgi:compl][type 36][msg 0x7f16e1598588][seq 177740.0][qtime 0][ptime 1257] 2012-05-02 02:12:15.635511 : GSIPC:BSEND: flushing sndq 0xb491dd28, id 1, dcx 0xbc516778, inst 1, rcvr 1 qlen 0 1 2012-05-02 02:12:15.635536 : GSIPC:BSEND: no batch1 msg 0x7f16e1c56490 type 65521 len 184 dest (1:1) 2012-05-02 02:12:15.635557 : kjbsentscn[0x0.1ae0f1][to 1] 2012-05-02 02:12:15.635578 : GSIPC:SENDM: send msg 0x7f16e1c56490 dest x10001 seq 118669 type 65521 tkts x10002 mlen xb800e8 WAIT #0: nam='gcs remote message' ela= 180 waittime=1 poll=0 event=0 obj#=0 tim=1335939135635819 2012-05-02 02:12:15.635853 : GSIPC:RCVD: ksxp msg 0x7f16e167e0b0 sndr 1 seq 0.177741 type 32 tkts 0 2012-05-02 02:12:15.635875 : GSIPC:RCVD: watq msg 0x7f16e167e0b0 sndr 1, seq 177741, type 32, tkts 0 2012-05-02 02:12:15.636012 : GSIPC:TKT: collect msg 0x7f16e167e0b0 from 1 for rcvr -1, tickets 0 2012-05-02 02:12:15.636040 : kjbrcvdscn[0x0.1ae0f1][from 1][idx 2012-05-02 02:12:15.636060 : kjbrcvdscn[no bscn <= rscn 0x0.1ae0f1][from 1] 2012-05-02 02:12:15.636082 : GSIPC:TKT: dest (1:1) rtkt not acked 1  unassigned bufs 0  tkts 0  newbufs 0 2012-05-02 02:12:15.636102 : GSIPC:TKT: remove ctx dest (1:1) 2012-05-02 02:12:15.636125 : [kjmxmpm][type 32][seq 0.177741][msg 0x7f16e167e0b0][from 1] 2012-05-02 02:12:15.636146 : kjbmpocr(0xb0.6)seq 0x1,reqid=0x23a,(client 0x9fff7b58,0x1)(from 1)(lseq xdf0) 2????LMS????????? ??gcs remote message GSIPC ????SCN=[0x0.1ae0f0] block=1/89233???,??BAST kjbmpbast(0x15c91.1),?? block=1/89233??????? ??fairness??(?11.2.0.3???_fairness_threshold=2),?current block?KCL: F156: fairness downconvert,?Xcurrent DownConvert? Scurrent: Instance 2: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 2 0 3 1756658 ??Instance 2 LMS ?cr block??? kjbmslset(0x15c91.1)) ????SEND QUEUE GSIPC:SNDQ: enq msg 0x7f16e1c56490? ???????Instance 1???? block: (1/89233)??? 
??????: Instance 2: SQL> select CURRENT_RESULTS,LIGHT_WORKS from v$cr_block_server; CURRENT_RESULTS LIGHT_WORKS --------------- ----------- 29273 437 Instance 1 session A: SQL> SQL> select * from test; ID ---------- 2 2 SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 3 1761942 3 1761932 1 0 3 1761520 Instance 2: SQL> select CURRENT_RESULTS,LIGHT_WORKS from v$cr_block_server; CURRENT_RESULTS LIGHT_WORKS --------------- ----------- 29274 437 select * from test END OF STMT PARSE #140336529675592:c=0,e=337,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=1335939668940051 EXEC #140336529675592:c=0,e=96,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=1335939668940204 WAIT #140336529675592: nam='SQL*Net message to client' ela= 5 driver id=1650815232 #bytes=1 p3=0 obj#=0 tim=1335939668940348 *** 2012-05-02 02:21:08.940 kclscrs: req=0 block=1/89233 2012-05-02 02:21:08.940676 : kjbcro[0x15c91.1 76896.0][5] *** 2012-05-02 02:21:08.940 kclscrs: req=0 typ=nowait-abort *** 2012-05-02 02:21:08.940 kclscrs: bid=1:3:1:0:f:21:0:0:10:0:0:0:1:2:4:1:20:0:0:0:c3:49:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:4:3:2:1:2:0:1f:0:4d:26:a3:52:0:0:0:0:c7:c:ca:62:c3:49:0:0:0:0:1:0:17:8e:47:76:1:2:dc:5:a9:fe:17:75:0:0:0:0:0:0:0:0:0:0:0:0:99:ed:0:0:0:0:0:0:10:0:0:0 2012-05-02 02:21:08.940799 : kjbcro[0x15c91.1 76896.0][5] 2012-05-02 02:21:08.940833 : GSIPC:GMBQ: buff 0xba0ee018, queue 0xbb79a7b8, pool 0x60013fa0, freeq 0, nxt 0xbb79a7b8, prv 0xbb79a7b8 2012-05-02 02:21:08.940859 : kjbsentscn[0x0.1ae28c][to 2] 2012-05-02 02:21:08.940870 : GSIPC:SENDM: send msg 0xba0ee088 dest x20001 seq 177810 type 36 tkts xff0000 mlen x1680198 2012-05-02 02:21:08.940976 : kjbmscr(0x15c91.1)reqid=0xa(req 0xa4ff30f8)(rinst 1)hldr 2(infosz 200)(lseq x2b8) 2012-05-02 02:21:08.941314 : GSIPC:KSXPCB: msg 0xba0ee088 status 30, type 36, dest 2, rcvr 1 *** 2012-05-02 02:21:08.941 kclwcrs: wait=0 tm=707 *** 2012-05-02 02:21:08.941 kclwcrs: got 1 blocks from ksxprcv 2012-05-02 02:21:08.941818 : kjbassume[0x15c91.1][sender 2][mymode x1][myrole x0][srole x0][flgs x0][spiscn 0x0.0][swscn 0x0.0] 2012-05-02 02:21:08.941852 : kjbrcvdscn[0x0.1ae28d][from 2][idx 2012-05-02 02:21:08.941871 : kjbrcvdscn[no bscn ??????????????SCN=[0x0.1ae28c]=1761932 Version?CR block, ????receive????Xcurrent Block??SCN=1ae28d=1761933,Instance 1???Xcurrent Block???build????????SCN=1761932?CR BLOCK, ????????Current block,?????????'gc current block 2-way'? 
?????????????request current block,?????kjbcro;?????Instance 2?LMS???????Current Block: Instance 2 LMS trace: 2012-05-02 02:21:08.448743 : GSIPC:RCVD: ksxp msg 0x7f16e14a4398 sndr 1 seq 0.177810 type 36 tkts 0 2012-05-02 02:21:08.448778 : GSIPC:RCVD: watq msg 0x7f16e14a4398 sndr 1, seq 177810, type 36, tkts 0 2012-05-02 02:21:08.448798 : GSIPC:TKT: collect msg 0x7f16e14a4398 from 1 for rcvr -1, tickets 0 2012-05-02 02:21:08.448816 : kjbrcvdscn[0x0.1ae28c][from 1][idx 2012-05-02 02:21:08.448834 : kjbrcvdscn[no bscn <= rscn 0x0.1ae28c][from 1] 2012-05-02 02:21:08.448857 : GSIPC:TKT: dest (1:1) rtkt not acked 2  unassigned bufs 0  tkts 0  newbufs 0 2012-05-02 02:21:08.448875 : GSIPC:TKT: remove ctx dest (1:1) 2012-05-02 02:21:08.448970 : [kjmxmpm][type 36][seq 0.177810][msg 0x7f16e14a4398][from 1] 2012-05-02 02:21:08.448993 : kjbmpbast(0x15c91.1) reqid=0x6 (req 0xa4ff30f8)(reqinst 1)(reqid 10)(flags x0) *** 2012-05-02 02:21:08.449 kclcrrf: req=48054 block=1/89233 *** 2012-05-02 02:21:08.449 kcl_compress_block: compressed: 6 free space: 7680 2012-05-02 02:21:08.449085 : kjbsentscn[0x0.1ae28d][to 1] 2012-05-02 02:21:08.449142 : kjbdeliver[to 1][0xa4ff30f8][10][current 1] 2012-05-02 02:21:08.449164 : kjbmssch(reqlock 0xa4ff30f8,10)(to 1)(bsz 344) 2012-05-02 02:21:08.449183 : GSIPC:AMBUF: rcv buff 0x7f16e18bcec8, pool rcvbuf, rqlen 1102 *** 2012-05-02 02:21:08.449 kclccctx: cleanup copy 0x7f16e1d94838 *** 2012-05-02 02:21:08.449 kcltouched: touch seconds 3271 *** 2012-05-02 02:21:08.449 kclgrantlk: req=48054 2012-05-02 02:21:08.449347 : [kjmpmsgi:compl][type 36][msg 0x7f16e14a4398][seq 177810.0][qtime 0][ptime 1119] WAIT #0: nam='gcs remote message' ela= 568 waittime=1 poll=0 event=0 obj#=0 tim=1335939668449962 2012-05-02 02:21:08.450001 : GSIPC:RCVD: ksxp msg 0x7f16e1bb22a0 sndr 1 seq 0.177811 type 32 tkts 0 2012-05-02 02:21:08.450024 : GSIPC:RCVD: watq msg 0x7f16e1bb22a0 sndr 1, seq 177811, type 32, tkts 0 2012-05-02 02:21:08.450043 : GSIPC:TKT: collect msg 0x7f16e1bb22a0 from 1 for rcvr -1, tickets 0 2012-05-02 02:21:08.450060 : kjbrcvdscn[0x0.1ae28e][from 1][idx 2012-05-02 02:21:08.450078 : kjbrcvdscn[no bscn <= rscn 0x0.1ae28e][from 1] 2012-05-02 02:21:08.450097 : GSIPC:TKT: dest (1:1) rtkt not acked 3  unassigned bufs 0  tkts 0  newbufs 0 2012-05-02 02:21:08.450116 : GSIPC:TKT: remove ctx dest (1:1) 2012-05-02 02:21:08.450136 : [kjmxmpm][type 32][seq 0.177811][msg 0x7f16e1bb22a0][from 1] 2012-05-02 02:21:08.450155 : kjbmpocr(0xb0.6)seq 0x1,reqid=0x23e,(client 0x9fff7b58,0x1)(from 1)(lseq xdf4) ???Instance 2??LMS???,???build cr block,??????Instance 1?????Current Block??????Instance 2??v$cr_block_server??????LIGHT_WORKS?????current block transfer??????,??????? CR server? Light Work Rule(Light Work Rule?8i Cr Server?????????,?Remote LMS?? build CR????????,resource holder?LMS???????block,????CR build If creating the consistent read version block involves too much work (such as reading blocks from disk), then the holder sends the block to the requestor, and the requestor completes the CR fabrication. The holder maintains a fairness counter of CR requests. After the fairness threshold is reached, the holder downgrades it to lock mode.)? ??????? CR Request ????Current Block?? ???:??????class?block,CR server??????? ??undo block?? undo header block?CR quest, LMS????Current Block, ????? ???? ??????? block cleanout? CR  Version??????? ???????? data blocks, ??????? CR quest  & CR received?(???????Light Work Rule,LMS"??"), ??Current Block??DownConvert???S lock,??LMS???????ship??current version?block? 
??????? , ?????? ,???????DownConvert?????”_fairness_threshold“???200,????Xcurrent Block?????Scurrent, ????LMS?????Current Version?Data Block: SQL> show parameter fair NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ _fairness_threshold integer 200 Instance 1: SQL> update test set id=id+1 where id=4; 1 row updated. Instance 2: SQL> update test set id=id+1 where id=2; 1 row updated. SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1838166 ?Instance 1? ????,? ??instance 2? v$cr_block_server?? instance 1 SQL> select * from test; ID ---------- 10 3 instance 2: SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1883707 8 0 SQL> select * from test; ID ---------- 10 3 SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0; STATE CR_SCN_BAS ---------- ---------- 1 0 3 1883707 8 0 ................... SQL> / STATE CR_SCN_BAS ---------- ---------- 2 0 3 1883707 3 1883695 repeat cr request on Instance 1 SQL> / STATE CR_SCN_BAS ---------- ---------- 8 0 3 1883707 3 1883695 ??????_fairness_threshold????????,?????200 ????????CR serve??Downgrade?lock, ????data block? CR Request????Receive? Current Block?

    Read the article

  • Java Resources for Windows Azure

    - by BuckWoody
Windows Azure is a Platform as a Service – a PaaS – that runs code you write. That code doesn’t just mean the languages on the .NET platform – you can run code from multiple languages, including Java. In fact, you can develop for Windows and SQL Azure using not only Visual Studio but the Eclipse Integrated Development Environment (IDE) as well. Although not an exhaustive list, here are several links that deal with Java and Windows Azure:

Windows Azure Java Development Center: http://www.windowsazure.com/en-us/develop/java/
Java Development Guidance: http://msdn.microsoft.com/en-us/library/hh690943(VS.103).aspx
Running a Java Environment on Windows Azure: http://blogs.technet.com/b/port25/archive/2010/10/28/running-a-java-environment-on-windows-azure.aspx
Run Java with Jetty in Windows Azure: http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx
Using the plugin for Eclipse: http://blogs.msdn.com/b/craig/archive/2011/03/22/new-plugin-for-eclipse-to-get-java-developers-off-the-ground-with-windows-azure.aspx
Run Java with GlassFish in Windows Azure: http://blogs.msdn.com/b/dachou/archive/2011/01/17/run-java-with-glassfish-in-windows-azure.aspx
Improving experience for Java developers with Windows Azure: http://blogs.msdn.com/b/interoperability/archive/2011/02/23/improving-experience-for-java-developers-with-windows-azure.aspx
Java Access to SQL Azure via the JDBC Driver for SQL Server: http://blogs.msdn.com/b/brian_swan/archive/2011/03/29/java-access-to-sql-azure-via-the-jdbc-driver-for-sql-server.aspx
How to Get Started with Java, Tomcat on Windows Azure: http://blogs.msdn.com/b/usisvde/archive/2011/03/04/how-to-get-started-with-java-tomcat-on-windows-azure.aspx
Deploying Java Applications in Azure: http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx
Using the Windows Azure Storage Explorer in Eclipse: http://blogs.msdn.com/b/brian_swan/archive/2011/01/11/using-the-windows-azure-storage-explorer-in-eclipse.aspx
Windows Azure Tomcat Solution Accelerator: http://archive.msdn.microsoft.com/winazuretomcat
Deploying a Java application to Windows Azure with Command-line Ant: http://java.interoperabilitybridges.com/articles/deploying-a-java-application-to-windows-azure-with-command-line-ant
Video: Open in the Cloud: Windows Azure and Java: http://channel9.msdn.com/Events/PDC/PDC10/CS10
AzureRunMe: http://azurerunme.codeplex.com/
Windows Azure SDK for Java: http://www.interoperabilitybridges.com/projects/windows-azure-sdk-for-java
AppFabric SDK for Java: http://www.interoperabilitybridges.com/projects/azure-java-sdk-for-net-services
Information Cards for Java: http://www.interoperabilitybridges.com/projects/information-card-for-java
Apache Stonehenge: http://www.interoperabilitybridges.com/projects/apache-stonehenge
Channel 9 Case Study on Java and Windows Azure: http://www.microsoft.com/casestudies/Windows-Azure/Gigaspaces/Solution-Provider-Streamlines-Java-Application-Deployment-in-the-Cloud/400000000081
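For a taste of what Java against Azure looks like before digging into the links, here is a minimal, hedged sketch that uploads a text blob with the Azure Storage SDK for Java. The package names shown (com.microsoft.azure.storage) follow a later version of that SDK and may differ in the downloads above; the account name, key, container, and blob names are placeholders.

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

public class BlobUploadSample {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials; substitute your own storage account values.
        String connectionString = "DefaultEndpointsProtocol=https;"
                + "AccountName=myaccount;AccountKey=mykey";

        CloudStorageAccount account = CloudStorageAccount.parse(connectionString);
        CloudBlobClient client = account.createCloudBlobClient();

        // Create the container on first use, then upload a small text blob.
        CloudBlobContainer container = client.getContainerReference("samples");
        container.createIfNotExists();
        CloudBlockBlob blob = container.getBlockBlobReference("hello.txt");
        blob.uploadText("Hello from Java on Azure");
    }
}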

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests and parses them out to “worker nodes”. This is useful in cases such as scientific simulations, drug research, MatLab work, and wherever other large compute loads are required. It’s not the immediate-result type of computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers, and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical of the way that many mainframe computing use-cases work. You can use commodity-based computers to create an HPC cluster, such as a Beowulf cluster on Linux, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster, that you can read more about here. The issue some organizations have with HPC (from any vendor) is the number of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic.

Implementation: Recently, Microsoft announced an internal partnership between the HPC group (now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding Compute Nodes in Windows Azure, using a “Broker Node”. You can then purchase time for adding machines, and stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC but need to “burst” beyond that node count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations.

References:
Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx
Links for further research on HPC, includes Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx

    Read the article

  • Cluster Node Recovery Using Second Node in Solaris Cluster

    - by Onur Bingul
Assumptions:
Node 0a is the cluster node that has crashed and cannot boot anymore.
Node 0b is the node in the cluster and in production with services active.
Both nodes have their boot disk mirrored via SDS/SVM.

We have several options to clone the boot disk from node 0b:
- make a copy over the network using the ufsdump command piped to ufsrestore
- make a copy by inserting the disk locally on node 0b and creating a third mirror with SDS
- make a copy by inserting the disk locally on node 0b and using the dd command

In this procedure we are going to use the dd command (from my experience this is the best option). Bear in mind that in the examples provided we work on Sun Fire V240 systems, which have SCSI internal disks. In the case of Fibre Channel (FC) internal disks you must pay attention to the unique identifier, or World Wide Name (WWN), associated with each FC disk (in this case take a look at infodoc #40133 in order to recreate the device tree correctly).

Procedure:
On node 0b the boot disk is c1t0d0 (c1t1d0 mirror) and this is the VTOC:

* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
      0      2    00          0   2106432   2106431
      1      3    01    2106432  74630784  76737215
      2      5    00          0 143349312 143349311
      4      7    00   76737216  50340672 127077887
      5      4    00  127077888  14683968 141761855
      6      0    00  141761856   1058304 142820159
      7      0    00  142820160    529152 143349311

We will insert the new disk on node 0b and it will be seen as c1t2d0.

1) On node 0b make a copy via dd from disk c1t0d0s2 to disk c1t2d0s2:
# dd if=/dev/rdsk/c1t0d0s2 of=/dev/rdsk/c1t2d0s2 bs=8192k
A copy of a 72GB disk takes approximately 45 minutes.
Note: as an alternative, to make an identical copy of root over the network follow Document ID: 47498, Title: Sun[TM] Cluster 3.0: How to Rebuild a node with Veritas Volume Manager.

2) Perform an fsck on the c1t2d0 data slices:
   1. fsck -o f /dev/rdsk/c1t2d0s0 (root)
   2. fsck -o f /dev/rdsk/c1t2d0s4 (/var)
   3. fsck -o f /dev/rdsk/c1t2d0s5 (/usr)
   4. fsck -o f /dev/rdsk/c1t2d0s6 (/globaldevices)

3) Mount the root file system in order to edit the following files for changing the node name:
# mount /dev/dsk/c1t2d0s0 /mnt
Change the hostname from 0b to 0a:
# cd /mnt/etc
# vi hosts
# vi hostname.bge0
# vi hostname.bge2
# vi nodename

4) Change the /mnt/etc/vfstab from the actual:
/dev/md/dsk/d201        -       -       swap    -       no      -
/dev/md/dsk/d200        /dev/md/rdsk/d200       /       ufs     1       no      -
/dev/md/dsk/d205        /dev/md/rdsk/d205       /usr    ufs     1       no      logging
/dev/md/dsk/d204        /dev/md/rdsk/d204       /var    ufs     1       no      logging
#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     logging
swap    -       /tmp    tmpfs   -       yes     -
/dev/md/dsk/d206        /dev/md/rdsk/d206       /global/.devices/node@2 ufs     2       no      global
to this (unencapsulate the disk from SDS/SVM):
/dev/dsk/c1t0d0s1       -       -       swap    -       no      -
/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
/dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5      /usr    ufs     1       no      logging
/dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4      /var    ufs     1       no      logging
#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     logging
swap    -       /tmp    tmpfs   -       yes     -
/dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6      /global/.devices/node@1 ufs     2       no      global
It is important that the global device partition (slice 6) in the new vfstab points to the physical partition of the disk (in our case slice 6). Be careful with the name you use for the new disk. In this case we define it as c1t0d0 because we will insert it as target 0 in node 0a. But this could be different based on the configuration you are working on.

5) Remove the following entry from /mnt/etc/system (part of the unencapsulation procedure):
rootdev:/pseudo/md@0:0,200,blk

6) Correct the link shared -> ../../global/.devices/node@2/dev/md/shared so that it points to the nodeid of node 0a (in our case nodeid 1):
# cd /mnt/dev/md
how it is now... node 0b has nodeid 2:
lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@2/dev/md/shared
# rm shared
# ln -s ../../global/.devices/node@1/dev/md/shared shared
how it is going to be... with nodeid 1 for node 0a:
lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@1/dev/md/shared

7) Change the nodeid (in our case from 2 to 1):
# cd /mnt/etc/cluster
# vi nodeid

8) Change the file /mnt/etc/path_to_inst in order to reflect the correct nodeid for node 0a:
# cd /mnt/etc
# vi path_to_inst
Change entries from node@2 to node@1 with the vi command ":%s/node@2/node@1/g"

9) Write the bootblock to the disk... just in case:
# /usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0
Now the disk is ready to be inserted in node 0a in order to boot up the node.

10) Boot node 0a with the command "boot -sx"... this is because we need to make some changes in the ccr files in order to recreate the did environment.

11) Modify the cluster ccr:
# cd /etc/cluster/ccr
# rm did_instances
# rm did_instances.bak
# vi directory   (remove the did_instances line)
# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/directory
# grep ccr_gennum /etc/cluster/ccr/directory
ccr_gennum -1
# /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure
# grep ccr_gennum /etc/cluster/ccr/infrastructure
ccr_gennum -1

12) Bring node 0a down again to the ok prompt and then issue the command "boot -r".
Now the node will join the cluster, and with the scstat and metaset commands you can verify functionality. The next step is to encapsulate the boot disk in SDS/SVM and create the mirrors. In our case node 0b has metadevice names starting from d200. For this reason, on node 0a we need to create metadevices starting from d100. This is just an example; you can have different names. The important thing to remember is that metadevice boot disks have different names on each node.

13) Remove the metadevices pointing to the boot and mirror disks (inherited from node 0b):
# metaclear -r -f d200
# metaclear -r -f d201
# metaclear -r -f d204
# metaclear -r -f d205
# metaclear -r -f d206
Verify from metastat that no metadevices are set for the boot and mirror disks.

14) Encapsulate the boot disk:
# metainit -f d110 1 1 c1t0d0s0
# metainit d100 -m d110
# metaroot d100

15) Reboot node 0a.

16) Create all the metadevices for the remaining slices on the boot disk:
# metainit -f d111 1 1 c1t0d0s1
# metainit d101 -m d111
# metainit -f d114 1 1 c1t0d0s4
# metainit d104 -m d114
# metainit -f d115 1 1 c1t0d0s5
# metainit d105 -m d115
# metainit -f d116 1 1 c1t0d0s6
# metainit d106 -m d116

17) Edit the vfstab in order to specify the metadevices created:
old:
/dev/dsk/c1t0d0s1       -       -       swap    -       no      -
/dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -
/dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5      /usr    ufs     1       no      logging
/dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4      /var    ufs     1       no      logging
#/dev/md/dsk/d206       /dev/md/rdsk/d206       /globaldevices  ufs     2       yes     logging
swap    -       /tmp    tmpfs   -       yes     -
/dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6      /global/.devices/node@1 ufs     2       no      global
new:
/dev/md/dsk/d101        -       -       swap    -       no      -
/dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      -
/dev/md/dsk/d105        /dev/md/rdsk/d105       /usr    ufs     1       no      logging
/dev/md/dsk/d104        /dev/md/rdsk/d104       /var    ufs     1       no      logging
#/dev/md/dsk/d106       /dev/md/rdsk/d106       /globaldevices  ufs     2       yes     logging
swap    -       /tmp    tmpfs   -       yes     -
/dev/md/dsk/d106        /dev/md/rdsk/d106       /global/.devices/node@1 ufs     2       no      global

18) Reboot node 0a in order to check the new SDS/SVM boot configuration.

19) Label the mirror disk c1t1d0 with the VTOC of boot disk c1t0d0:
# prtvtoc /dev/dsk/c1t0d0s2 > /var/tmp/VTOC_c1t0d0
# fmthard -s /var/tmp/VTOC_c1t0d0 /dev/rdsk/c1t1d0s2

20) Put DB replicas on slice 7 of disk c1t1d0:
# metadb -a -c 3 /dev/dsk/c1t1d0s7

21) Create the metadevices for mirror disk c1t1d0 and attach the new mirror sides:
# metainit d120 1 1 c1t1d0s0
# metattach d100 d120
# metainit d121 1 1 c1t1d0s1
# metattach d101 d121
# metainit d124 1 1 c1t1d0s4
# metattach d104 d124
# metainit d125 1 1 c1t1d0s5
# metattach d105 d125
# metainit d126 1 1 c1t1d0s6
# metattach d106 d126

    Read the article

  • OBIA on Teradata - Part 2 Teradata DB Utilization for ETL

    - by Mohan Ramanuja
Techniques to Monitor Queries and ETL Load

CPU and Disk I/O:
SELECT username, processor, SUM(cputime), SUM(diskio)
FROM dbc.ampusage
WHERE processor = '1-0'
GROUP BY 1, 2
ORDER BY 2, 3 DESC;

UserName    Vproc    Sum(CpuTime)    Sum(DiskIO)
AC00916     10       6.71            24975

List Hardware Errors
There is a possibility that the system has adequate disk space but is out of free cylinders. In order to monitor hardware errors, the following query was used:
SELECT * FROM dbc.Software_Event_Log WHERE Text LIKE '%restart%' ORDER BY thedate, thetime;

For active users, usage of CPU and analysis of bad CPU to I/O ratios:
SELECT * FROM DBC.AMPUSAGE WHERE username = 'CRMSTGC_DEV_ID' AND SUBSTR(ACCOUNTNAME,6,3) = '006';

Usage By I/O:
SELECT AccountName, UserName, SUM(CpuTime), SUM(DiskIO)
FROM DBC.AMPUSAGE
GROUP BY AccountName, UserName
ORDER BY SUM(DiskIO) DESC;

AccountName      UserName          Sum(CpuTime)  Sum(DiskIO)
$M1$10062209     AB89487           374628.612    7821847
$M1$10062210     AB89487           186692.244    2799412
$M1$10062213     COC_ETL_ID        119531.068    331100426
$M1$10062200     AB63472           118973.316    109881984
$M1$10062204     AB63472           110825.356    94666986
$M1$10062201     AB63472           110797.976    75016994
$M1$10062202     AC06936           100924.448    407839702
$M1$10062204     AB67963           0             4
$M1$10062207     AB91990           0             2
$M1$10062208     AB63461           0             24
$M1$10062211     AB84332           0             6
$M1$10062214     AB65484           0             8
$M1$10062205     AB77529           0             58
$M1$10062210     AC04768           0             36
$M1$10062206     AB54940           0             22

Usage By CPU:
SELECT AccountName, UserName, SUM(CpuTime), SUM(DiskIO)
FROM DBC.AMPUSAGE
GROUP BY AccountName, UserName
ORDER BY SUM(CpuTime) DESC;

AccountName                       UserName          Sum(CpuTime)  Sum(DiskIO)
$M1$10062209                      AB89487           374628.612    7821847
$M1$10062210                      AB89487           186692.244    2799412
$M1$10062213                      COC_ETL_ID        119531.068    331100426
$M1$10062200                      AB63472           118973.316    109881984
$M1$10062204                      AB63472           110825.356    94666986
$M1$10062201                      AB63472           110797.976    75016994
$M2$100622105813004760047LOAD     T23_ETLPROC_ENT   0             6
$M1$10062215                      AA37720           0             180
$M1$10062209                      AB81670           0             6

select count(distinct vproc) from dbc.ampusage;
432

select * from dbc.ampusage;   (detail rows; the column list below matches dbc.ampusage)
AccountName      UserName           CpuTime  DiskIO  CpuTimeNorm         Vproc  VprocType  Model
$M1$10062205     CRM_STGC_DEV_ID    0.32     1764    12.7423999023438    0      AMP        2580
$M1$10062205     CRM_STGC_DEV_ID    0.28     1730    11.1495999145508    3      AMP        2580
$M1$10062205     CRM_STGC_DEV_ID    0.304    1736    12.1052799072266    4      AMP        2580
$M1$10062205     CRM_STGC_DEV_ID    0.248    1731    9.87535992431641    7      AMP        2580
$M1$10062205     CRM_STGC_DEV_ID    0.332    1731    13.2202398986816    8      AMP        2580
$M1$10062205     CRM_STGC_DEV_ID    0.284    1712    11.3088799133301    11     AMP        2580
$M1$10062205     CRM_STGC_DEV_ID    0.24     1757    9.55679992675781    12     AMP        2580
$M1$10062205     CRM_STGC_DEV_ID    0.292    1737    11.6274399108887    15     AMP        2580
$M1$10062205     CRM_STGC_DEV_ID    0.268    1753    10.6717599182129    16     AMP        2580
$M1$10062205     CRM_STGC_DEV_ID    0.276    1732    10.9903199157715    19     AMP        2580

select * from dbc.dbcinfo;
InfoKey                  InfoData
LANGUAGE SUPPORT MODE    Standard
RELEASE                  12.00.03.03
VERSION                  12.00.03.01a
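The ad hoc monitoring queries above can also be run on a schedule from a small program. Below is a hedged Java sketch that executes the aggregate AMP usage query over JDBC; the host name and credentials are placeholders, and it assumes the Teradata JDBC driver (com.teradata.jdbc.TeraDriver) is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AmpUsageMonitor {
    public static void main(String[] args) throws Exception {
        // Placeholder host and credentials; the driver class name is the
        // standard one shipped with the Teradata JDBC driver.
        Class.forName("com.teradata.jdbc.TeraDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:teradata://tdhost/CHARSET=UTF8", "dbc", "dbc");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT AccountName, UserName, SUM(CpuTime), SUM(DiskIO) "
               + "FROM dbc.ampusage GROUP BY 1, 2 ORDER BY 4 DESC")) {
            try (ResultSet rs = ps.executeQuery()) {
                // Print one line per account/user, highest disk I/O first.
                while (rs.next()) {
                    System.out.printf("%-30s %-20s cpu=%12.2f io=%15d%n",
                        rs.getString(1), rs.getString(2),
                        rs.getDouble(3), rs.getLong(4));
                }
            }
        }
    }
}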

    Read the article

  • Understanding the Value of SOA

    - by Mala Narasimharajan
Written By: Debra Lilley, ACE Director, Fusion Applications

Again I want to talk from my area of expertise of Fusion Applications and talk about their design fundamentals. If you look at the table below and start at the bottom, Oracle have defined all of the business objects, e.g. accounts, people, customers, invoices, etc., used by Fusion Applications; each of these objects contains all of the information required and can be expanded if necessary. Oracle have created, for each of these business objects, every action that is needed for the applications, e.g. all the actions to create a new customer: checking to see if it exists, credit checking with D&B (Dun & Bradstreet, http://www.dnb.co.uk/), creating the record, notifying those required, etc. Each of these actions is a stand-alone web service. Again, you can create new actions or subscribe to an externally provided web service, e.g. the D&B check. The diagram also shows that all development of Fusion Applications is built on Oracle's Fusion Middleware offerings. Then the Intelligent Business Process is the order in which you run these actions; this is Service Oriented Architecture, SOA. Not only is SOA used to orchestrate actions within Fusion Applications, it is also used in the integration of Fusion Applications with the rest of the Oracle stable of applications such as EBS, PeopleSoft, JDE and Siebel. The other applications are written with proprietary development tools, so how do they work with SOA? It's a very simple answer: with the introduction of the Oracle SOA platform, each process within these applications was made available to be called as a web service. I won't go into technically how that is done, but a wrapper was added to allow each of them to act in this way. Finally, at the top of the diagram are the questions that each Fusion Application process must answer, and this is the 'special' sauce that makes them so good, the User Experience, but that is a topic for another day, or you can read about it in my blog http://debrasoracle.blogspot.co.uk/2014/04/going-on-record-about-fusion-apps-cloud.html or Oracle's own UX blog https://blogs.oracle.com/usableapps/

The concept behind AppAdvantage is not new; the idea that Oracle technology can add value to your Oracle applications investments is pretty fundamental. Nishit Rao, who is on the AppAdvantage team, provided myself and other ACE Directors with demo kits so that we could demonstrate SOA running with the applications. The example I learnt to build was that of the EBS inventory open interface. The simple concept is that request records can be added to a table and an import run that creates these as transactions in inventory. What SOA allows you to do is to add to the table from any source and then run this process automatically, whereas traditionally you had to run the process at regular intervals because you didn't know whether the table was empty or not. This may just sound like a different way of doing the same thing, but if the process is critical for your business then the interval was very small and the process potentially ran many times unnecessarily. Using SOA it only happens when necessary, without any delay. So in my post today I've talked about how SOA is used with Fusion Applications and in the linking with more traditional applications, but that is only the tip of the iceberg of its potential; your applications are just part of your IT systems, and SOA can orchestrate your data across all of them; the beauty of open standards.
Debra Lilley, Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director.  Lilley has 18 years experience with Oracle Applications, with E Business Suite since 9.4.1, moving to Business Intelligence Team Lead and Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts

    Read the article

  • Selenium &ndash; Use Data Driven tests to run in multiple browsers and sizes

    - by Aligned
Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/11/04/selenium-ndash-use-data-driven-tests-to-run-in-multiple.aspx

Selenium uses WebDriver (or is it the same? I’m still learning how it is connected) to run automated UI tests in many different browsers. For example, you can run the same test in Chrome and Firefox, and in a smaller sized Chrome browser. The permutations can grow quickly. One way to get them to run in MSTest is to create a test method for each permutation (i.e. ChromeDeleteItem_Small, ChromeDeleteItem_Large, FFDeleteItem_Small, FFDeleteItem_Large) that each call the same base method, passing in the browser and size you’d like. This approach was causing a lot of duplicate code, so I decided to use the data-driven approach, common to Coded UI or unit test methods.

1. Create a class with a test method.
2. Create a csv with two columns: BrowserType, BrowserSize
3. Add rows for each permutation: Chrome, Large | Chrome, Small | Firefox, Large | Firefox, Small | IE, Large | IE, Small
4. Add the csv to the Visual Studio project.
5. Set the Copy to Output Directory property to Copy always.
6. Add the attribute: [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")]
7. Run the test in the Test Explorer.

Example:

[CodedUITest]
public class AllTasksTests : TasksTestBase
{
    [TestMethod]
    [TestCategory("Tasks")]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
        "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential),
        DeploymentItem("TestMatrix.csv")]
    public void CreateTask()
    {
        this.PrepForDataDrivenTest();
        base.CreateTaskTest("New Task");
    }
}

protected void PrepForDataDrivenTest()
{
    var browserType = this.ParseBrowserType(Context.DataRow["BrowserType"].ToString());
    var browserSize = this.ParseBrowserSize(Context.DataRow["BrowserSize"].ToString());
    this.BrowserType = browserType;
    this.BrowserSize = browserSize;
    Trace.WriteLine("browser: " + browserType.ToString());
    Trace.WriteLine("browser size: " + browserSize.ToString());
}

/// <summary>
/// Get the enum value from the string
/// </summary>
/// <param name="browserType">Chrome, Firefox, or IE</param>
/// <returns>The browser type.</returns>
private BrowserType ParseBrowserType(string browserType)
{
    return (UITestFramework.BrowserType)Enum.Parse(typeof(UITestFramework.BrowserType), browserType, true);
}

/// <summary>
/// Get the browser size enum value from the string
/// </summary>
/// <param name="browserSize">Small, Medium, Large</param>
/// <returns>the browser size</returns>
private BrowserSizeEnum ParseBrowserSize(string browserSize)
{
    return (BrowserSizeEnum)Enum.Parse(typeof(BrowserSizeEnum), browserSize, true);
}

/// <summary>
/// Change the browser to the size based on the enum.
/// </summary>
/// <param name="browserSize">The BrowserSizeEnum value to resize the window to.</param>
private void ResizeBrowser(BrowserSizeEnum browserSize)
{
    switch (browserSize)
    {
        case BrowserSizeEnum.Large:
            this.driver.Manage().Window.Maximize();
            break;
        case BrowserSizeEnum.Medium:
            this.driver.Manage().Window.Size = new Size(800, this.driver.Manage().Window.Size.Height);
            break;
        case BrowserSizeEnum.Small:
            this.driver.Manage().Window.Size = new Size(500, this.driver.Manage().Window.Size.Height);
            break;
        default:
            break;
    }
}

/// <summary>
/// Browser sizes for automation testing
/// </summary>
public enum BrowserSizeEnum
{
    /// <summary>
    /// Large size, maximized to the desktop
    /// </summary>
    Large,

    /// <summary>
    /// Similar to tablets
    /// </summary>
    Medium,

    /// <summary>
    /// Phone sizes... 610px and smaller
    /// </summary>
    Small
}

Hope it helps!
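The code above is C# with MSTest, but the same matrix idea ports to the other Selenium bindings. As an illustration only, here is a minimal sketch of the browser/size permutations expressed with JUnit 4's Parameterized runner and the Selenium Java bindings; the application URL and the test body are placeholders.

import java.util.Arrays;
import java.util.Collection;

import org.junit.After;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

@RunWith(Parameterized.class)
public class CreateTaskMatrixTest {

    private final String browserType;
    private final String browserSize;
    private WebDriver driver;

    public CreateTaskMatrixTest(String browserType, String browserSize) {
        this.browserType = browserType;
        this.browserSize = browserSize;
    }

    // One row per browser/size permutation, mirroring the CSV rows above.
    @Parameters
    public static Collection<Object[]> matrix() {
        return Arrays.asList(new Object[][] {
            { "Chrome", "Large" }, { "Chrome", "Small" },
            { "Firefox", "Large" }, { "Firefox", "Small" },
        });
    }

    @Test
    public void createTask() {
        driver = "Chrome".equals(browserType) ? new ChromeDriver() : new FirefoxDriver();
        if ("Large".equals(browserSize)) {
            driver.manage().window().maximize();
        } else {
            // Phone-ish width, analogous to the Small case in the C# version.
            driver.manage().window().setSize(new Dimension(500, 800));
        }
        driver.get("http://localhost/tasks"); // placeholder app URL
        // ... drive the UI and assert on the result here ...
    }

    @After
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}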

    Read the article

  • TFS 2012 API Create Alert Subscriptions

    - by Bob Hardister
Originally posted on: http://geekswithblogs.net/BobHardister/archive/2013/07/24/tfs-2012-api-create-alert-subscriptions.aspx

There were only a few posts on this, and I felt like really important information was left out: what the defaults are, and how to create the filter string. Here's the code to create the subscription.

Get the Collection

public TfsTeamProjectCollection GetCollection(string collectionUrl)
{
    try
    {
        // Connect to the TFS collection using the active user.
        TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(new Uri(collectionUrl));
        tpc.EnsureAuthenticated();
        return tpc;
    }
    catch (Exception)
    {
        return null;
    }
}

Use Impersonation

Because my app is used to create "support tickets" as stories in TFS, I use impersonation so the subscription is set up for the "requester." That way I can take all the defaults for the subscription delivery preferences.

public TfsTeamProjectCollection GetCollectionImpersonation(string collectionUrl, string impersonatingUserAccount)
{
    // See: http://blogs.msdn.com/b/taylaf/archive/2009/12/04/introducing-tfs-impersonation.aspx
    try
    {
        TfsTeamProjectCollection tpc = GetCollection(collectionUrl);
        if (!(tpc == null))
        {
            // Get the TFS identity management service (v2 is 2012 only).
            IIdentityManagementService2 ims = tpc.GetService<IIdentityManagementService2>();

            // Look up the user we want to impersonate.
            TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName,
                impersonatingUserAccount,
                MembershipQuery.None,
                ReadIdentityOptions.None);

            // Create a new connection using the impersonated user account.
            // Note: do not ensure authentication, because the impersonated user
            // may not have Windows authentication at execution.
            if (!(identity == null))
            {
                TfsTeamProjectCollection itpc = new TfsTeamProjectCollection(tpc.Uri, identity.Descriptor);
                return itpc;
            }
            else
            {
                // The user account was not found.
                return null;
            }
        }
        else
        {
            return null;
        }
    }
    catch (Exception)
    {
        return null;
    }
}

Create the Alert Subscription

public bool SetWiAlert(string collectionUrl, string projectName, int wiId, string emailAddress, string userAccount)
{
    bool setSuccessful = false;
    try
    {
        // Use impersonation so the event service creating the subscription will
        // default to the correct account; otherwise domain ambiguity could be a problem.
        TfsTeamProjectCollection itpc = GetCollectionImpersonation(collectionUrl, userAccount);
        if (!(itpc == null))
        {
            IEventService es = itpc.GetService(typeof(IEventService)) as IEventService;

            DeliveryPreference deliveryPreference = new DeliveryPreference();
            //deliveryPreference.Address = emailAddress;
            deliveryPreference.Schedule = DeliverySchedule.Immediate;
            deliveryPreference.Type = DeliveryType.EmailHtml;

            // The following line does not work, for two reasons:
            //string filter = string.Format("\"ID\" = '{0}' AND \"Authorized As\" <> '[Me]'", wiId);
            // 1. the create fails because there is a space between Authorized As
            // 2. the explicit query criteria are all incorrect anyway
            // See the uncommented line for what does work: you have to create the
            // subscription manually and then get it to view what the filter string
            // needs to be (see the commented code below).

            // This works:
            string filter = string.Format("\"CoreFields/IntegerFields/Field[Name='ID']/NewValue\" = '12175'" +
                " AND \"CoreFields/StringFields/Field[Name='Authorized As']/NewValue\"" +
                " <> '@@MyDisplayName@@'", projectName, wiId);

            string eventName = string.Format("<PT N=\"ALM Ticket for Work Item {0}\"/>", wiId);
            es.SubscribeEvent("WorkItemChangedEvent", filter, deliveryPreference, eventName);

            //// Use this code to get existing subscriptions: you can look at manually
            //// created subscriptions to see what the filter string needs to be.
            //IIdentityManagementService2 ims = itpc.GetService<IIdentityManagementService2>();
            //TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName,
            //    userAccount,
            //    MembershipQuery.None,
            //    ReadIdentityOptions.None);
            //var existingsubscriptions = es.GetEventSubscriptions(identity.Descriptor);

            setSuccessful = true;
            return setSuccessful;
        }
        else
        {
            return setSuccessful;
        }
    }
    catch (Exception)
    {
        return setSuccessful;
    }
}
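As a usage illustration (not part of the original post), here is a minimal sketch of calling the helper above. It assumes the three methods live in a class named AlertCreator; the class name and all argument values are placeholders for the example.

// A minimal sketch, assuming the methods above are members of a hypothetical
// class named AlertCreator. Collection URL, project, work item id, e-mail,
// and account are all placeholder values.
using System;

class Program
{
    static void Main()
    {
        var creator = new AlertCreator();

        // Subscribe the requester to an e-mail alert for changes to one work item.
        bool ok = creator.SetWiAlert(
            "http://myserver:8080/tfs/DefaultCollection", // collection URL
            "MyProject",                                  // team project name
            12175,                                        // work item id
            "requester@example.com",                      // e-mail address
            "MYDOMAIN\\requester");                       // account to impersonate

        Console.WriteLine(ok ? "Alert subscription created." : "Alert subscription failed.");
    }
}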

    Read the article

  • Applying and Rolling Back a Patch on RAC in Rolling Fashion

    - by JaneZhang
When you patch a RAC database, some patches can be applied in rolling fashion (Rolling): the nodes are patched one at a time, and while one node is being patched the remaining nodes keep servicing the database, so the system stays available and downtime is minimized. Not every patch supports this; whether a patch is a rolling patch is determined when the patch is built, so you must verify it before you start. The example below walks through a rolling apply and a rolling rollback, and shows how to check whether a patch supports rolling mode.

In general, a rolling apply proceeds like this:
1. Stop the instance on the first node.
2. Patch the first node while the remaining nodes keep servicing users.
3. Restart the first node, then stop, patch, and restart the next node.
4. Repeat node by node until every node has been patched.
5. For the exact procedure always follow the patch Readme; the typical steps are:

1). As the oracle user, copy the patch onto the server.
2). Unzip the patch.
3). Stop all database instances (and the ASM instance, if any) running from the ORACLE_HOME on node 1.
4). Apply the patch on node 1:
$cd $ORACLE_HOME/OPatch/10082277
$opatch apply
5). Answer the opatch prompts and wait for the local node to be patched.
6). Restart the database instances (and ASM, if any) from that ORACLE_HOME on node 1.
7). Stop all database instances (and ASM, if any) running from the ORACLE_HOME on node 2.
8). Let opatch continue so it patches the remote node.
9). Restart the database instances (and ASM, if any) from that ORACLE_HOME on node 2.
10). Verify that the patch was applied successfully on all nodes.

Below is a worked example of applying patch 8575528 to a 10.2.0.4 RAC in rolling fashion:

1). As the oracle user, place the patch in $ORACLE_HOME/OPatch:
$ pwd
/u01/app/oracle/OPatch
$ ls
docs  emdpatch.pl  jlib  opatch  opatch.ini  opatch.pl  opatchprereqs  p8575528_10204_Linux-x86.zip

2). Unzip the patch:
su - oracle
$ unzip p8575528_10204_Linux-x86.zip
Archive:  p8575528_10204_Linux-x86.zip
  creating: 8575528/
  creating: 8575528/files/
  creating: 8575528/files/lib/
  creating: 8575528/files/lib/libserver10.a/
 inflating: 8575528/files/lib/libserver10.a/kks1.o
 inflating: 8575528/files/lib/libserver10.a/kksc.o
 inflating: 8575528/files/lib/libserver10.a/kksh.o
 inflating: 8575528/files/lib/libserver10.a/ksmp.o
  creating: 8575528/etc/
  creating: 8575528/etc/config/
 inflating: 8575528/etc/config/inventory
 inflating: 8575528/etc/config/actions
  creating: 8575528/etc/xml/
 inflating: 8575528/etc/xml/GenericActions.xml
 inflating: 8575528/etc/xml/ShiphomeDirectoryStructure.xml
 inflating: 8575528/README.txt

$ ls
8575528  docs  emdpatch.pl  jlib  opatch  opatch.ini  opatch.pl  opatchprereqs  p8575528_10204_Linux-x86.zip

3). Confirm that the patch can be applied to RAC in rolling fashion:
$ $ORACLE_HOME/OPatch/opatch query -all /u01/app/oracle/OPatch/8575528 | grep rolling
Patch is a rolling patch: true   <===== it is a rolling patch

4). Stop all instances (including ASM, if present) running from the ORACLE_HOME on node 1:
$srvctl stop instance -d <dbname> -i <instance_name>
$srvctl stop asm -n <nodename>
For example:
$srvctl stop instance -d ONEPIECE -i ONEPIECE1
$srvctl stop asm -n nascds14
$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....E1.inst application    OFFLINE   OFFLINE
ora....SM1.asm application    OFFLINE   OFFLINE

5). Apply the patch on node 1:
$su - oracle
$cd /u01/app/oracle/OPatch/8575528
$opatch apply
Invoking OPatch 10.2.0.4.2
Oracle Interim Patch Installer version 10.2.0.4.2
Copyright (c) 2007, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle
Central Inventory : /home/oracle/oraInventory
  from            : /etc/oraInst.loc
OPatch version    : 10.2.0.4.2
OUI version       : 10.2.0.4.0
OUI location      : /u01/app/oracle/oui
Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_01-27-38AM.log
ApplySession applying interim patch '8575528' to OH '/u01/app/oracle'
Running prerequisite checks...
OPatch detected the node list and the local node from the inventory.  OPatch will patch the local system then propagate the patch to the remote nodes.
This node is part of an Oracle Real Application Cluster.
Remote nodes: 'nascds15'
Local node: 'nascds14'
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle')
Is the local system ready for patching? [y|n]
y   <====== answer y
User Responded with: Y
Backing up files and inventory (not for auto-rollback) for the Oracle Home
Backing up files affected by the patch '8575528' for restore. This might take a while...
Backing up files affected by the patch '8575528' for rollback. This might take a while...
Patching component oracle.rdbms, 10.2.0.4.0...
Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kks1.o"
Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksc.o"
Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksh.o"
Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/ksmp.o"
Running make for target ioracle
ApplySession adding interim patch '8575528' to inventory
Verifying the update...
Inventory check OK: Patch ID 8575528 is registered in Oracle Home inventory with proper meta-data.
Files check OK: Files from Patch ID 8575528 are present in Oracle Home.
The local system has been patched.  You can restart Oracle instances on it.
Patching in rolling mode.
The node 'nascds15' will be patched next.
Please shutdown Oracle instances running out of this ORACLE_HOME on 'nascds15'.
(Oracle Home = '/u01/app/oracle')
Is the node ready for patching? [y|n]

6). At this prompt, do not answer yet; leave the opatch session waiting until the instances on node 2 have been stopped.

7). Start ASM and the database instance on node 1:
$srvctl start asm -n <nodename>
$srvctl start instance -d <dbname> -i <instance_name>
For example:
$srvctl start asm -n nascds14
$srvctl start instance -d ONEPIECE -i ONEPIECE1
$crs_stat -t
ora....E1.inst application    ONLINE    ONLINE    nascds14
ora....SM1.asm application    ONLINE    ONLINE    nascds14

8). Stop ASM and the database instance on node 2:
$srvctl stop instance -d <dbname> -i <instance_name>
$srvctl stop asm -n <nodename>
$srvctl stop instance -d ONEPIECE -i ONEPIECE2
$srvctl stop asm -n nascds15
$crs_stat
ora....E2.inst application    OFFLINE   OFFLINE
ora....SM2.asm application    OFFLINE   OFFLINE

9). Go back to the waiting opatch session and answer the prompt so the remote node is patched:
Is the node ready for patching? [y|n]
y   <==== answer y
User Responded with: Y
Updating nodes 'nascds15'
  Apply-related files are:
    FP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt"
    DP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt"
    MP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt"
    RC = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remote_cmds.txt"
Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt" with actual path.
Propagating files to remote nodes...
Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt" with actual path.
Propagating directories to remote nodes...
Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt" with actual path.
Running command on remote node 'nascds15':
cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle || echo REMOTE_MAKE_FAILED::>&2
The node 'nascds15' has been patched.  You can restart Oracle instances on it.
There were relinks on remote nodes.  Remember to check the binary size and timestamp on the nodes 'nascds15'.
The following make commands were invoked on remote nodes:
'cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle'
OPatch succeeded.

10). Start ASM and the database instance on node 2:
$srvctl start asm -n <nodename>
$srvctl start instance -d <dbname> -i <instance_name>
For example:
$srvctl start asm -n nascds15
$srvctl start instance -d ONEPIECE -i ONEPIECE2

11). On each node, verify that the patch was applied successfully:
$ORACLE_HOME/OPatch/opatch lsinventory
[oracle@nascds14 8575528]$ $ORACLE_HOME/OPatch/opatch lsinventory
Invoking OPatch 10.2.0.4.2
Oracle Interim Patch Installer version 10.2.0.4.2
Copyright (c) 2007, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle
Central Inventory : /home/oracle/oraInventory
  from            : /etc/oraInst.loc
OPatch version    : 10.2.0.4.2
OUI version       : 10.2.0.4.0
OUI location      : /u01/app/oracle/oui
Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_01-44-11AM.log
Lsinventory Output file location : /u01/app/oracle/cfgtoollogs/opatch/lsinv/lsinventory2012-06-13_01-44-11AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (2):
Oracle Database 10g                                                  10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 3                            10.2.0.4.0
There are 2 products installed in this Oracle Home.
Interim patches (1) :
Patch  8575528      : applied on Wed Jun 13 01:28:24 CST 2012  <<<<<<<<<<<<<<<<<<<
  Created on 17 Aug 2010, 07:56:36 hrs PST8PDT
  Bugs fixed:
    8575528
Rac system comprising of multiple nodes
 Local node = nascds14
 Remote node = nascds15
--------------------------------------------------------------------------------
OPatch succeeded.

Below are the steps to roll back patch 8575528 on the same 10.2.0.4 RAC, also in rolling fashion:

1). Stop all instances (including ASM, if present) running from the ORACLE_HOME on node 1:
$srvctl stop instance -d <dbname> -i <instance_name>
$srvctl stop asm -n <nodename>
For example:
$srvctl stop instance -d ONEPIECE -i ONEPIECE1
$srvctl stop asm -n nascds14
$crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....E1.inst application    OFFLINE   OFFLINE
ora....SM1.asm application    OFFLINE   OFFLINE

2). Roll the patch back on node 1:
$su - oracle
$cd $ORACLE_HOME/OPatch/8575528
$opatch rollback -id 8575528
Invoking OPatch 10.2.0.4.2
Oracle Interim Patch Installer version 10.2.0.4.2
Copyright (c) 2007, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle
Central Inventory : /home/oracle/oraInventory
  from            : /etc/oraInst.loc
OPatch version    : 10.2.0.4.2
OUI version       : 10.2.0.4.0
OUI location      : /u01/app/oracle/oui
Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_18-22-10PM.log
RollbackSession rolling back interim patch '8575528' from OH '/u01/app/oracle'
Running prerequisite checks...
OPatch detected the node list and the local node from the inventory.  OPatch will patch the local system then propagate the patch to the remote nodes.
This node is part of an Oracle Real Application Cluster.
Remote nodes: 'nascds15'
Local node: 'nascds14'
Please shut down Oracle instances running out of this ORACLE_HOME on all the nodes.
(Oracle Home = '/u01/app/oracle')
Are all the nodes ready for patching? [y|n]
y   <========= answer y
User Responded with: Y
Backing up files affected by the patch '8575528' for restore. This might take a while...
Patching component oracle.rdbms, 10.2.0.4.0...
Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kks1.o"
Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksc.o"
Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/kksh.o"
Updating archive file "/u01/app/oracle/lib/libserver10.a"  with "lib/libserver10.a/ksmp.o"
Running make for target ioracle
RollbackSession removing interim patch '8575528' from inventory
Patching in rolling mode.
The node 'nascds15' will be patched next.
Please shutdown Oracle instances running out of this ORACLE_HOME on 'nascds15'.
(Oracle Home = '/u01/app/oracle')
Is the node ready for patching? [y|n]

3). At this prompt, do not answer yet; leave the opatch session waiting until the instances on node 2 have been stopped.

4). Start ASM and the database instance on node 1:
$srvctl start asm -n <nodename>
$srvctl start instance -d <dbname> -i <instance_name>
For example:
$srvctl start asm -n nascds14
$srvctl start instance -d ONEPIECE -i ONEPIECE1
$crs_stat -t
ora....E1.inst application    ONLINE    ONLINE    nascds14
ora....SM1.asm application    ONLINE    ONLINE    nascds14

5). Stop ASM and the database instance on node 2:
$srvctl stop instance -d <dbname> -i <instance_name>
$srvctl stop asm -n <nodename>
$srvctl stop instance -d ONEPIECE -i ONEPIECE2
$srvctl stop asm -n nascds15
$crs_stat -t
ora....E2.inst application    OFFLINE   OFFLINE
ora....SM2.asm application    OFFLINE   OFFLINE

6). Go back to the waiting opatch session and answer the prompt so the remote node is rolled back:
The node 'nascds15' will be patched next.
Please shutdown Oracle instances running out of this ORACLE_HOME on 'nascds15'.
(Oracle Home = '/u01/app/oracle')
Is the node ready for patching? [y|n]
y   <========= answer y
User Responded with: Y
Updating nodes 'nascds15'
  Rollback-related files are:
    FR = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_files.txt"
    DR = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_dirs.txt"
    FP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt"
    MP = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt"
    RC = "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remote_cmds.txt"
Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/remove_dirs.txt" with actual path.
Removing directories on remote nodes...
Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_files.txt" with actual path.
Propagating files to remote nodes...
Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/copy_dirs.txt" with actual path.
Propagating directories to remote nodes...
Instantiating the file "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt.instantiated" by replacing $ORACLE_HOME in "/u01/app/oracle/.patch_storage/8575528_Aug_17_2010_07_56_36/rac/make_cmds.txt" with actual path.
Running command on remote node 'nascds15':
cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle || echo REMOTE_MAKE_FAILED::>&2
The node 'nascds15' has been patched.  You can restart Oracle instances on it.
There were relinks on remote nodes.  Remember to check the binary size and timestamp on the nodes 'nascds15'.
The following make commands were invoked on remote nodes:
'cd /u01/app/oracle/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle'
OPatch succeeded.

7). Start ASM and the database instance on node 2:
$srvctl start asm -n <nodename>
$srvctl start instance -d <dbname> -i <instance_name>
For example:
$srvctl start asm -n nascds15
$srvctl start instance -d ONEPIECE -i ONEPIECE2

8). On each node, verify that the patch was removed successfully:
$ $ORACLE_HOME/OPatch/opatch lsinventory
Invoking OPatch 10.2.0.4.2
Oracle Interim Patch Installer version 10.2.0.4.2
Copyright (c) 2007, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle
Central Inventory : /home/oracle/oraInventory
  from            : /etc/oraInst.loc
OPatch version    : 10.2.0.4.2
OUI version       : 10.2.0.4.0
OUI location      : /u01/app/oracle/oui
Log file location : /u01/app/oracle/cfgtoollogs/opatch/opatch2012-06-13_19-40-41PM.log
Lsinventory Output file location : /u01/app/oracle/cfgtoollogs/opatch/lsinv/lsinventory2012-06-13_19-40-41PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (2):
Oracle Database 10g                                                  10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 3                            10.2.0.4.0
There are 2 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
Rac system comprising of multiple nodes
 Local node = nascds14
 Remote node = nascds15
--------------------------------------------------------------------------------
OPatch succeeded.

    Read the article

  • Bitnami redmine error SVN

    - by Evgeniy
I'm installing the Bitnami Redmine stack (Redmine + Subversion). First I installed, configured, and tested it locally (Ubuntu 14.04 LTS), and everything was OK. Then I installed the Bitnami stack on the server (Red Hat 4.4.7-4) and configured SVN. I committed files into SVN and connected the project to the SVN repository in Redmine, but when I try to view the repository, Redmine displays a 404 error. In the Redmine log file I see the following errors:

Started GET "/redmine/projects/web-user-panel/repository" for 127.0.0.1 at 2014-04-24 11:34:20 +0300
Processing by RepositoriesController#show as HTML
  Parameters: {"id"=>"web-user-panel"}
  Current user: user (id=13)
Error parsing svn output: #<REXML::ParseException: No close tag for /lists/list>
/var/www/html/redmine/ruby/lib/ruby/1.9.1/rexml/parsers/treeparser.rb:28:in `parse'
/var/www/html/redmine/ruby/lib/ruby/1.9.1/rexml/document.rb:245:in `build'
/var/www/html/redmine/ruby/lib/ruby/1.9.1/rexml/document.rb:43:in `initialize'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/xml_mini/rexml.rb:30:in `new'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/xml_mini/rexml.rb:30:in `parse'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/xml_mini.rb:80:in `parse'
/var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:313:in `parse_xml'
/var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/subversion_adapter.rb:106:in `block in entries'
/var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:258:in `call'
/var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:258:in `block in shellout'
/var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:255:in `popen'
/var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:255:in `shellout'
/var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/abstract_adapter.rb:212:in `shellout'
/var/www/html/redmine/apps/redmine/htdocs/lib/redmine/scm/adapters/subversion_adapter.rb:100:in `entries'
/var/www/html/redmine/apps/redmine/htdocs/app/models/repository.rb:198:in `scm_entries'
/var/www/html/redmine/apps/redmine/htdocs/app/models/repository.rb:203:in `entries'
/var/www/html/redmine/apps/redmine/htdocs/app/controllers/repositories_controller.rb:116:in `show'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/implicit_render.rb:4:in `send_action'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/base.rb:167:in `process_action'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/rendering.rb:10:in `process_action'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/callbacks.rb:18:in `block in process_action'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:491:in `_run__2883861927089110970__process_action__2542827355008294621__callbacks'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:405:in `__run_callback'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:385:in `_run_process_action_callbacks'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:81:in `run_callbacks'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/callbacks.rb:17:in `process_action'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/rescue.rb:29:in `process_action'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/instrumentation.rb:30:in `block in process_action'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/notifications.rb:123:in `block in instrument'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/notifications/instrumenter.rb:20:in `instrument'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/notifications.rb:123:in `instrument'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/instrumentation.rb:29:in `process_action'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/params_wrapper.rb:207:in `process_action'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.17/lib/active_record/railties/controller_runtime.rb:18:in `process_action'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/base.rb:121:in `process'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/abstract_controller/rendering.rb:45:in `process'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal.rb:203:in `dispatch'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal/rack_delegation.rb:14:in `dispatch'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_controller/metal.rb:246:in `block in action'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:73:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:73:in `dispatch'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:36:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:68:in `block in call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:56:in `each'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/journey-1.0.4/lib/journey/router.rb:56:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/routing/route_set.rb:608:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-openid-1.3.1/lib/rack/openid.rb:98:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/best_standards_support.rb:17:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/etag.rb:23:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/conditionalget.rb:25:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/head.rb:14:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/params_parser.rb:21:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/flash.rb:242:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/session/abstract/id.rb:210:in `context'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/session/abstract/id.rb:205:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/cookies.rb:341:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.17/lib/active_record/query_cache.rb:64:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.17/lib/active_record/connection_adapters/abstract/connection_pool.rb:479:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/callbacks.rb:28:in `block in call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:405:in `_run__1805290955544829105__call__1486932417638469082__callbacks'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:405:in `__run_callback'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:385:in `_run_call_callbacks'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/callbacks.rb:81:in `run_callbacks'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/callbacks.rb:27:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/remote_ip.rb:31:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/debug_exceptions.rb:16:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/show_exceptions.rb:56:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/rack/logger.rb:32:in `call_app'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/rack/logger.rb:16:in `block in call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/tagged_logging.rb:22:in `tagged'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/rack/logger.rb:16:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/request_id.rb:22:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/methodoverride.rb:21:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/runtime.rb:17:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.17/lib/active_support/cache/strategy/local_cache.rb:72:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/lock.rb:15:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.17/lib/action_dispatch/middleware/static.rb:63:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:136:in `forward'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:245:in `fetch'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:185:in `lookup'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:66:in `call!'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-cache-1.2/lib/rack/cache/context.rb:51:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/engine.rb:484:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/application.rb:231:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/railties-3.2.17/lib/rails/railtie/configurable.rb:30:in `method_missing'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/builder.rb:134:in `call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/urlmap.rb:64:in `block in call'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/urlmap.rb:49:in `each'
/var/www/html/redmine/apps/redmine/htdocs/vendor/bundle/ruby/1.9.1/gems/rack-1.4.5/lib/rack/urlmap.rb:49:in `call'
/var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/rack/thread_handler_extension.rb:74:in `process_request'
/var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/request_handler/thread_handler.rb:141:in `accept_and_process_next_request'
/var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/request_handler/thread_handler.rb:109:in `main_loop'
/var/www/html/redmine/ruby/lib/ruby/gems/1.9.1/gems/passenger-4.0.40/lib/phusion_passenger/request_handler.rb:448:in `block (3 levels) in start_threads'
...
No close tag for /lists/list
Line: 4
Position: 93
Last 80 unconsumed characters:
Output was:
<?xml version="1.0" encoding="UTF-8"?>
<lists>
<list path="svn://127.0.0.1/voxysuser">
  Rendered common/error.html.erb within layouts/base (0.1ms)
Completed 404 Not Found in 69.1ms (Views: 15.1ms | ActiveRecord: 3.0ms)

How can I resolve this problem? I googled it, but the similar problems I found were supposedly fixed three years ago, and I'm installing the latest Bitnami Redmine 2.5.1-1 stack.

UPDATE: Well, I found another way. If I use the http protocol it works fine, but then I would need to restrict web access to SVN. That's why I created a virtual host on localhost and have Redmine fetch the SVN info via the 127.0.0.1 IP:

<VirtualHost 127.0.0.1:8000>
    <Location /repo>
        DAV svn
        SVNPath "PATH_TO_MY_REPOSITORY"
    </Location>
</VirtualHost>

And this works well.

    Read the article

  • Fedora error log file

    - by user111196
I am running a Java application using the wrapper service yajsw. The problem is that it just stopped, without any error in its log files. So I was wondering: is there any system log file that would indicate the cause of it going down? Here is part of the log file:

Apr 6 00:12:20 localhost kernel: imklog 3.22.1, log source = /proc/kmsg started.
Apr 6 00:12:20 localhost rsyslogd: [origin software="rsyslogd" swVersion="3.22.1" x-pid="2234" x-info="http://www.rsyslog.com"] (re)start
Apr 6 00:12:20 localhost kernel: Initializing cgroup subsys cpuset
Apr 6 00:12:20 localhost kernel: Initializing cgroup subsys cpu
Apr 6 00:12:20 localhost kernel: Linux version 2.6.27.41-170.2.117.fc10.x86_64 ([email protected]) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #1 SMP Thu Dec 10 10:36:29 EST 2009
Apr 6 00:12:20 localhost kernel: Command line: ro root=UUID=722ebf87-437f-4634-9c68-a82d157fa948 rhgb quiet
Apr 6 00:12:20 localhost kernel: KERNEL supported cpus:
Apr 6 00:12:20 localhost kernel: Intel GenuineIntel
Apr 6 00:12:20 localhost kernel: AMD AuthenticAMD
Apr 6 00:12:20 localhost kernel: Centaur CentaurHauls
Apr 6 00:12:20 localhost kernel: BIOS-provided physical RAM map:
Apr 6 00:12:20 localhost kernel: BIOS-e820: 0000000000000000 - 00000000000a0000 (usable)
Apr 6 00:12:20 localhost kernel: BIOS-e820: 0000000000100000 - 00000000cfb50000 (usable)
Apr 6 00:12:20 localhost kernel: BIOS-e820: 00000000cfb50000 - 00000000cfb66000 (reserved)
Apr 6 00:12:20 localhost kernel: BIOS-e820: 00000000cfb66000 - 00000000cfb85c00 (ACPI data)
Apr 6 00:12:20 localhost kernel: BIOS-e820: 00000000cfb85c00 - 00000000d0000000 (reserved)
Apr 6 00:12:20 localhost kernel: BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved)
Apr 6 00:12:20 localhost kernel: BIOS-e820: 00000000fe000000 - 0000000100000000 (reserved)
Apr 6 00:12:20 localhost kernel: BIOS-e820: 0000000100000000 - 0000000330000000 (usable)
Apr 6 00:12:20 localhost kernel: DMI 2.5 present.
Apr 6 00:12:20 localhost kernel: last_pfn = 0x330000 max_arch_pfn = 0x3ffffffff
Apr 6 00:12:20 localhost kernel: x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
Apr 6 00:12:20 localhost kernel: last_pfn = 0xcfb50 max_arch_pfn = 0x3ffffffff
Apr 6 00:12:20 localhost kernel: init_memory_mapping
Apr 6 00:12:20 localhost kernel: last_map_addr: cfb50000 end: cfb50000
Apr 6 00:12:20 localhost kernel: init_memory_mapping
Apr 6 00:12:20 localhost kernel: last_map_addr: 330000000 end: 330000000
Apr 6 00:12:20 localhost kernel: RAMDISK: 37bfc000 - 37fef6c8
Apr 6 00:12:20 localhost kernel: ACPI: RSDP 000F21B0, 0024 (r2 DELL )
Apr 6 00:12:20 localhost kernel: ACPI: XSDT 000F224C, 0084 (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: FACP CFB83524, 00F4 (r3 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: DSDT CFB66000, 4974 (r1 DELL PE_SC3 1 INTL 20050624)
Apr 6 00:12:20 localhost kernel: ACPI: FACS CFB85C00, 0040
Apr 6 00:12:20 localhost kernel: ACPI: APIC CFB83078, 00B6 (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: SPCR CFB83130, 0050 (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: HPET CFB83184, 0038 (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: MCFG CFB831C0, 003C (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: WD__ CFB83200, 0134 (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: SLIC CFB83338, 0176 (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: ERST CFB6AAF4, 0210 (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: HEST CFB6AD04, 027C (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: BERT CFB6A974, 0030 (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: EINJ CFB6A9A4, 0150 (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: ACPI: TCPA CFB834BC, 0064 (r1 DELL PE_SC3 1 DELL 1)
Apr 6 00:12:20 localhost kernel: No NUMA configuration found
Apr 6 00:12:20 localhost kernel: Faking a node at 0000000000000000-0000000330000000
Apr 6 00:12:20 localhost kernel: Bootmem setup node 0 0000000000000000-0000000330000000
Apr 6 00:12:20 localhost kernel: NODE_DATA [0000000000015000 - 0000000000029fff]
Apr 6 00:12:20 localhost kernel: bootmap [000000000002a000 - 000000000008ffff] pages 66
Apr 6 00:12:20 localhost kernel: (7 early reservations) ==> bootmem [0000000000 - 0330000000]
Apr 6 00:12:20 localhost kernel: #0 [0000000000 - 0000001000] BIOS data page ==> [0000000000 - 0000001000]
Apr 6 00:12:20 localhost kernel: #1 [0000006000 - 0000008000] TRAMPOLINE ==> [0000006000 - 0000008000]
Apr 6 00:12:20 localhost kernel: #2 [0000200000 - 0000a310cc] TEXT DATA BSS ==> [0000200000 - 0000a310cc]
Apr 6 00:12:20 localhost kernel: #3 [0037bfc000 - 0037fef6c8] RAMDISK ==> [0037bfc000 - 0037fef6c8]
Apr 6 00:12:20 localhost kernel: #4 [000009f000 - 0000100000] BIOS reserved ==> [000009f000 - 0000100000]
Apr 6 00:12:20 localhost kernel: #5 [0000008000 - 000000c000] PGTABLE ==> [0000008000 - 000000c000]
Apr 6 00:12:20 localhost kernel: #6 [000000c000 - 0000015000] PGTABLE ==> [000000c000 - 0000015000]
Apr 6 00:12:20 localhost kernel: found SMP MP-table at [ffff8800000fe710] 000fe710
Apr 6 00:12:20 localhost kernel: Zone PFN ranges:
Apr 6 00:12:20 localhost kernel: DMA 0x00000000 -> 0x00001000
Apr 6 00:12:20 localhost kernel: DMA32 0x00001000 -> 0x00100000
Apr 6 00:12:20 localhost kernel: Normal 0x00100000 -> 0x00330000
Apr 6 00:12:20 localhost kernel: Movable zone start PFN for each node
Apr 6 00:12:20 localhost kernel: early_node_map[3] active PFN ranges
Apr 6 00:12:20 localhost kernel: 0: 0x00000000 -> 0x000000a0
Apr 6 00:12:20 localhost kernel: 0: 0x00000100 -> 0x000cfb50
Apr 6 00:12:20 localhost kernel: 0: 0x00100000 -> 0x00330000
Apr 6 00:12:20 localhost kernel: ACPI: PM-Timer IO Port: 0x808
Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x02] lapic_id[0x04] enabled)
Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled)
Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled)
Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x07] lapic_id[0x03] enabled)
Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
Apr 6 00:12:20 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
Apr 6 00:12:20 localhost kernel: ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])
Apr 6 00:12:20 localhost kernel: IOAPIC[0]: apic_id 8, version 0, address 0xfec00000, GSI 0-23
Apr 6 00:12:20 localhost kernel: ACPI: IOAPIC (id[0x09] address[0xfec81000] gsi_base[64])
Apr 6 00:12:20 localhost kernel: IOAPIC[1]: apic_id 9, version 0, address 0xfec81000, GSI 64-87
Apr 6 00:12:20 localhost kernel: ACPI: IOAPIC (id[0x0a] address[0xfec84000] gsi_base[160])
Apr 6 00:12:20 localhost kernel: IOAPIC[2]: apic_id 10, version 0, address 0xfec84000, GSI 160-183
Apr 6 00:12:20 localhost kernel: ACPI: IOAPIC (id[0x0b] address[0xfec84800] gsi_base[224])
Apr 6 00:12:20 localhost kernel: IOAPIC[3]: apic_id 11, version 0, address 0xfec84800, GSI 224-247
Apr 6 00:12:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 6 00:12:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 6 00:12:20 localhost kernel: Setting APIC routing to flat
Apr 6 00:12:20 localhost kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 6 00:12:20 localhost kernel: Using ACPI (MADT) for SMP configuration information
Apr 6 00:12:20 localhost kernel: SMP: Allowing 8 CPUs, 0 hotplug CPUs
Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000000a0000 - 0000000000100000
Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000cfb50000 - 00000000cfb66000
Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000cfb66000 - 00000000cfb85000
Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000cfb85000 - 00000000cfb86000
Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000cfb86000 - 00000000d0000000
Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000d0000000 - 00000000e0000000
Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000e0000000 - 00000000f0000000
Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000f0000000 - 00000000fe000000
Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000fe000000 - 0000000100000000
Apr 6 00:12:20 localhost kernel: Allocating PCI resources starting at d1000000 (gap: d0000000:10000000)
Apr 6 00:12:20 localhost kernel: PERCPU: Allocating 65184 bytes of per cpu data
Apr 6 00:12:20 localhost kernel: Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 3096524
Apr 6 00:12:20 localhost kernel: Policy zone: Normal
Apr 6 00:12:20 localhost kernel: Kernel command line: ro root=UUID=722ebf87-437f-4634-9c68-a82d157fa948 rhgb quiet
Apr 6 00:12:20 localhost kernel: Initializing CPU#0
Apr 6 00:12:20 localhost kernel: PID hash table entries: 4096 (order: 12, 32768 bytes)
Apr 6 00:12:20 localhost kernel: Extended CMOS year: 2000
Apr 6 00:12:20 localhost kernel: TSC: PIT calibration confirmed by PMTIMER.
Apr 6 00:12:20 localhost kernel: TSC: using PMTIMER calibration value
Apr 6 00:12:20 localhost kernel: Detected 1994.992 MHz processor.
Apr 6 00:12:20 localhost kernel: Console: colour VGA+ 80x25
Apr 6 00:12:20 localhost kernel: console [tty0] enabled
Apr 6 00:12:20 localhost kernel: Checking aperture...
Apr 6 00:12:20 localhost kernel: No AGP bridge found
Apr 6 00:12:20 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 6 00:12:20 localhost kernel: Placing software IO TLB between 0x20000000 - 0x24000000
Apr 6 00:12:20 localhost kernel: Memory: 12324244k/13369344k available (3311k kernel code, 253484k reserved, 1844k data, 1296k init)
Apr 6 00:12:20 localhost kernel: SLUB: Genslabs=13, HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Apr 6 00:12:20 localhost kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 3989.98 BogoMIPS (lpj=1994992)
Apr 6 00:12:20 localhost kernel: Security Framework initialized
Apr 6 00:12:20 localhost kernel: SELinux: Initializing.
Apr 6 00:12:20 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes)
Apr 6 00:12:20 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Apr 6 00:12:20 localhost kernel: Mount-cache hash table entries: 256
Apr 6 00:12:20 localhost kernel: Initializing cgroup subsys ns
Apr 6 00:12:20 localhost kernel: Initializing cgroup subsys cpuacct
Apr 6 00:12:20 localhost kernel: Initializing cgroup subsys devices
Apr 6 00:12:20 localhost kernel: CPU: L1 I cache: 32K, L1 D cache: 32K
Apr 6 00:12:20 localhost kernel: CPU: L2 cache: 4096K
Apr 6 00:12:20 localhost kernel: CPU 0/0 -> Node 0
Apr 6 00:12:20 localhost kernel: CPU: Physical Processor ID: 0
Apr 6 00:12:20 localhost kernel: CPU: Processor Core ID: 0
Apr 6 00:12:20 localhost kernel: CPU0: Thermal monitoring enabled (TM1)
Apr 6 00:12:20 localhost kernel: using mwait in idle threads.
Apr 6 00:12:20 localhost kernel: ACPI: Core revision 20080609
Apr 6 00:12:20 localhost kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 6 00:12:20 localhost kernel: CPU0: Intel(R) Xeon(R) CPU E5335 @ 2.00GHz stepping 07
Apr 6 00:12:20 localhost kernel: Using local APIC timer interrupts.
Apr 6 00:12:20 localhost kernel: Detected 20.781 MHz APIC timer.
Apr 6 00:12:20 localhost kernel: Booting processor 1/4 ip 6000
Apr 6 00:12:20 localhost kernel: Initializing CPU#1
Apr 6 00:12:20 localhost kernel: Calibrating delay using timer specific routine.. 3990.05 BogoMIPS (lpj=1995026)
Apr 6 00:12:20 localhost kernel: CPU: L1 I cache: 32K, L1 D cache: 32K
Apr 6 00:12:20 localhost kernel: CPU: L2 cache: 4096K
Apr 6 00:12:20 localhost kernel: CPU 1/4 -> Node 0
Apr 6 00:12:20 localhost kernel: CPU: Physical Processor ID: 1
Apr 6 00:12:20 localhost kernel: CPU: Processor Core ID: 0
Apr 6 00:12:20 localhost kernel: CPU1: Thermal monitoring enabled (TM2)
Apr 6 00:12:20 localhost kernel: x86 PAT enabled: cpu 1, old 0x7040600070406, new 0x7010600070106
Apr 6 00:12:20 localhost kernel: CPU1: Intel(R) Xeon(R) CPU E5335 @ 2.00GHz stepping 07
Apr 6 00:12:20 localhost kernel: checking TSC synchronization [CPU#0 -> CPU#1]: passed.
Apr 6 00:12:20 localhost kernel: Booting processor 2/2 ip 6000
Apr 6 00:12:20 localhost kernel: Initializing CPU#2
Apr 6 00:12:20 localhost kernel: Calibrating delay using timer specific routine.. 3990.05 BogoMIPS (lpj=1995029)

    Read the article

  • How to tune down the Hyperic built-in postgresql database for a small setup

    - by Svish
We are testing out Hyperic 4.5.1 in a fairly small environment for now. Currently there are just 1-5 agents, and there probably won't ever be more than 10-15. When I run ps ax there are 20(!) postgres processes running. For a small setup like this, that can't be necessary, can it? I'm a software developer and don't have much experience with setting up servers, though, so I don't really know. Either way, what settings are appropriate for a small Hyperic setup like this?

This is the current, default, untouched configuration file, hqdb/data/postgresql.conf:

# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Please note that re-commenting a setting is NOT sufficient to revert it
# to the default value, unless you restart the server.
#
# Any option can also be given as a command line switch to the server,
# e.g., 'postgres -c log_connections=on'. Some options can be changed at
# run-time with the 'SET' SQL command.
#
# This file is read on server startup and when the server receives a
# SIGHUP. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pg_ctl reload". Some
# settings, which are marked below, require a server shutdown and restart
# to take effect.
#
# Memory units: kB = kilobytes  MB = megabytes  GB = gigabytes
# Time units:   ms = milliseconds  s = seconds  min = minutes  h = hours  d = days

#---------------------------------------------------------------------------
# FILE LOCATIONS
#---------------------------------------------------------------------------
# The default values of these variables are driven from the -D command line
# switch or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir'           # use data in another directory (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf'     # host-based authentication file (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = '(none)'           # write an extra PID file (change requires restart)

#---------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#---------------------------------------------------------------------------
# - Connection Settings -
#listen_addresses = 'localhost'   # what IP address(es) to listen on; comma-separated list of addresses;
                                  # defaults to 'localhost', '*' = all (change requires restart)
port = 9432                       # (change requires restart)
max_connections = 100             # (change requires restart)
# Note: increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction). You
# might also need to raise shared_buffers to support more connections.
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directory = ''       # (change requires restart)
#unix_socket_group = ''           # (change requires restart)
#unix_socket_permissions = 0777   # octal (change requires restart)
#bonjour_name = ''                # defaults to the computer name (change requires restart)

# - Security & Authentication -
#authentication_timeout = 1min    # 1s-600s
#ssl = off                        # (change requires restart)
#password_encryption = on
#db_user_namespace = off

# Kerberos
#krb_server_keyfile = ''          # (change requires restart)
#krb_srvname = 'postgres'         # (change requires restart)
#krb_server_hostname = ''         # empty string matches any keytab entry (change requires restart)
#krb_caseins_users = off          # (change requires restart)

# - TCP Keepalives -
# see 'man 7 tcp' for details
#tcp_keepalives_idle = 0          # TCP_KEEPIDLE, in seconds; 0 selects the system default
#tcp_keepalives_interval = 0      # TCP_KEEPINTVL, in seconds; 0 selects the system default
#tcp_keepalives_count = 0         # TCP_KEEPCNT; 0 selects the system default

#---------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#---------------------------------------------------------------------------
# - Memory -
shared_buffers = 64MB             # min 128kB or max_connections*16kB (change requires restart)
#temp_buffers = 8MB               # min 800kB
#max_prepared_transactions = 5    # can be 0 or more (change requires restart)
# Note: increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
work_mem = 2MB                    # min 64kB
maintenance_work_mem = 32MB       # min 1MB
#max_stack_depth = 2MB            # min 100kB

# - Free Space Map -
max_fsm_pages = 204800            # min max_fsm_relations*16, 6 bytes each (change requires restart)
#max_fsm_relations = 1000         # min 100, ~70 bytes each (change requires restart)

# - Kernel Resource Usage -
#max_files_per_process = 1000     # min 25 (change requires restart)
#shared_preload_libraries = ''    # (change requires restart)

# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0            # 0-1000 milliseconds
#vacuum_cost_page_hit = 1         # 0-10000 credits
#vacuum_cost_page_miss = 10       # 0-10000 credits
#vacuum_cost_page_dirty = 20      # 0-10000 credits
#vacuum_cost_limit = 200          # 0-10000 credits

# - Background writer -
#bgwriter_delay = 200ms           # 10-10000ms between rounds
#bgwriter_lru_percent = 1.0       # 0-100% of LRU buffers scanned/round
#bgwriter_lru_maxpages = 5        # 0-1000 buffers max written/round
#bgwriter_all_percent = 0.333     # 0-100% of all buffers scanned/round
#bgwriter_all_maxpages = 5        # 0-1000 buffers max written/round

#---------------------------------------------------------------------------
# WRITE AHEAD LOG
#---------------------------------------------------------------------------
# - Settings -
fsync = on                        # turns forced synchronization on or off
#wal_sync_method = fsync          # the default is the first option supported by the operating system:
                                  # open_datasync, fdatasync, fsync, fsync_writethrough, open_sync
#full_page_writes = on            # recover from partial page writes
#wal_buffers = 64kB               # min 32kB (change requires restart)
commit_delay = 100000             # range 0-100000, in microseconds
#commit_siblings = 5              # range 1-1000

# - Checkpoints -
checkpoint_segments = 10          # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min        # range 30s-1h
#checkpoint_warning = 30s         # 0 is off

# - Archiving -
#archive_command = ''             # command to use to archive a logfile segment
#archive_timeout = 0              # force a logfile segment switch after this many seconds; 0 is off

#---------------------------------------------------------------------------
# QUERY TUNING
#---------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on

# - Planner Cost Constants -
#seq_page_cost = 1.0              # measured on an arbitrary scale
#random_page_cost = 4.0           # same scale as above
#cpu_tuple_cost = 0.01            # same scale as above
#cpu_index_tuple_cost = 0.005     # same scale as above
#cpu_operator_cost = 0.0025       # same scale as above
#effective_cache_size = 128MB

# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5                  # range 1-10
#geqo_pool_size = 0               # selects default based on effort
#geqo_generations = 0             # selects default based on effort
#geqo_selection_bias = 2.0        # range 1.5-2.0

# - Other Planner Options -
#default_statistics_target = 10   # range 1-1000
#constraint_exclusion = off
#from_collapse_limit = 8
#join_collapse_limit = 8          # 1 disables collapsing of explicit JOINs

#---------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#---------------------------------------------------------------------------
# - Where to Log -
log_destination = 'stderr'        # Valid values are combinations of stderr, syslog and eventlog, depending on platform.
# This is used when logging to stderr:
redirect_stderr = on              # Enable capturing of stderr into log files (change requires restart)
# These are only used if redirect_stderr is on:
log_directory = '../../logs'      # Directory where log files are written. Can be absolute or relative to PGDATA
log_filename = 'hqdb-%Y-%m-%d.log' # Log file name pattern. Can include strftime() escapes
#log_truncate_on_rotation = off   # If on, any existing log file of the same name as the new log file will be
                                  # truncated rather than appended to. But such truncation only occurs on
                                  # time-driven rotation, not on restarts or size-driven rotation. Default is
                                  # off, meaning append to existing files in all cases.
log_rotation_age = 1d             # Automatic rotation of logfiles will happen after that time. 0 to disable.
#log_rotation_size = 10MB         # Automatic rotation of logfiles will happen after that much log output. 0 to disable.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'

# - When to Log -
#client_min_messages = notice     # Values, in order of decreasing detail:
                                  # debug5, debug4, debug3, debug2, debug1, log, notice, warning, error
#log_min_messages = notice        # Values, in order of decreasing detail:
                                  # debug5, debug4, debug3, debug2, debug1, info, notice, warning, error, log, fatal, panic
#log_error_verbosity = default    # terse, default, or verbose messages
#log_min_error_statement = error  # Values in order of increasing severity:
                                  # debug5, debug4, debug3, debug2, debug1, info, notice, warning, error, fatal, panic (effectively off)
log_min_duration_statement = 10000 # -1 is disabled, 0 logs all statements and their durations.
#silent_mode = off                # DO NOT USE without syslog or redirect_stderr (change requires restart)

# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_line_prefix = ''             # Special values:
                                  # %u = user name, %d = database name, %r = remote host and port,
                                  # %h = remote host, %p = PID, %t = timestamp (no milliseconds),
                                  # %m = timestamp with milliseconds, %i = command tag, %c = session id,
                                  # %l = session line number, %s = session start timestamp,
                                  # %x = transaction id, %q = stop here in non-session processes,
                                  # %% = '%'   e.g. '<%u%%%d> '
#log_statement = 'none'           # none, ddl, mod, all
#log_hostname = off

#---------------------------------------------------------------------------
# RUNTIME STATISTICS
#---------------------------------------------------------------------------
# - Query/Index Statistics Collector -
#stats_command_string = on
#update_process_title = on
stats_start_collector = on        # needed for block or row stats (change requires restart)
stats_block_level = on
stats_row_level = on
stats_reset_on_server_start = off # (change requires restart)

# - Statistics Monitoring -
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off

#---------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#---------------------------------------------------------------------------
#autovacuum = off                 # enable autovacuum subprocess? 'on' requires stats_start_collector
                                  # and stats_row_level to also be on
#autovacuum_naptime = 1min        # time between autovacuum runs
#autovacuum_vacuum_threshold = 500    # min # of tuple updates before vacuum
#autovacuum_analyze_threshold = 250   # min # of tuple updates before analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum (change requires restart)
#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for autovacuum, -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for autovacuum, -1 means use vacuum_cost_limit

#---------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#---------------------------------------------------------------------------
# - Statement Behavior -
#search_path = '"$user",public'   # schema names
#default_tablespace = ''          # a tablespace name, '' uses the default
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#statement_timeout = 0            # 0 is disabled
#vacuum_freeze_min_age = 100000000

# - Locale and Formatting -
datestyle = 'iso, mdy'
#timezone = unknown               # actually, defaults to TZ environment setting
#timezone_abbreviations = 'Default' # select the set of available timezone abbreviations.
                                  # Currently, there are Default, Australia, India.
                                  # However you can also create your own file in share/timezonesets/.
#extra_float_digits = 0           # min -15, max 2
#client_encoding = sql_ascii      # actually, defaults to database encoding
# These settings are initialized by initdb -- they might be changed
lc_messages = 'C'                 # locale for system error message strings
lc_monetary = 'C'                 # locale for monetary formatting
lc_numeric = 'C'                  # locale for number formatting
lc_time = 'C'                     # locale for time formatting

# - Other Defaults -
#explain_pretty_print = on
#dynamic_library_path = '$libdir'
#local_preload_libraries = ''

#---------------------------------------------------------------------------
# LOCK MANAGEMENT
#---------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64   # min 10 (change requires restart)
# Note: each lock table slot uses ~270 bytes of shared memory, and there are
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# lock table slots.

#---------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#---------------------------------------------------------------------------
# - Previous Postgres Versions -
#add_missing_from = off
#array_nulls = on
#backslash_quote = safe_encoding  # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#standard_conforming_strings = off
#regex_flavor = advanced          # advanced, extended, or basic
#sql_inheritance = on

# - Other Platforms & Clients -
#transform_null_equals = off

#---------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#---------------------------------------------------------------------------
#custom_variable_classes = ''     # list of custom variable class names

SELECT * FROM pg_stat_activity;

 datid | datname | procpid | usesysid | usename | current_query         | waiting | query_start                   | backend_start                 | client_addr | client_port
-------+---------+---------+----------+---------+-----------------------+---------+-------------------------------+-------------------------------+-------------+-------------
 16384 | hqdb    |    3267 |       10 | hqadmin | <IDLE>                | f       | 2011-02-08 15:51:20.036781+01 | 2011-02-08 15:51:20.02413+01  | 127.0.0.1   |       47892
 16384 | hqdb    |    3268 |       10 | hqadmin | <IDLE>                | f       | 2011-02-08 15:51:20.050994+01 | 2011-02-08 15:51:20.047393+01 | 127.0.0.1   |       47893
 16384 | hqdb    |    3269 |       10 | hqadmin | <IDLE>                | f       | 2011-02-08 15:51:20.056661+01 | 2011-02-08 15:51:20.053201+01 | 127.0.0.1   |       47894
 16384 | hqdb    |    3271 |       10 | hqadmin | <IDLE>                | f       | 2011-02-08 15:51:20.062351+01 | 2011-02-08 15:51:20.058822+01 | 127.0.0.1   |       47895
 16384 | hqdb    |    3272 |       10 | hqadmin | <IDLE>                | f       | 2011-02-08 15:51:20.068328+01 | 2011-02-08 15:51:20.064517+01 | 127.0.0.1   |       47896
 16384 | hqdb    |    3273 |       10 | hqadmin | <IDLE>                | f       | 2011-02-08 15:51:20.07444+01  | 2011-02-08 15:51:20.070755+01 | 127.0.0.1   |       47897
 16384 | hqdb    |    3274 |       10 | hqadmin | <IDLE>                | f       | 2011-02-08 15:51:20.080941+01 | 2011-02-08 15:51:20.076983+01 | 127.0.0.1   |       47898
 16384 | hqdb    |    3275 |       10 | hqadmin | <IDLE>                | f       | 2011-02-08 15:51:20.08741+01  | 2011-02-08 15:51:20.083697+01 | 127.0.0.1   |       47899
 16384 | hqdb    |    3276 |       10 | hqadmin | <IDLE>                | f       | 2011-02-08 15:51:20.093597+01 | 2011-02-08 15:51:20.089977+01 | 127.0.0.1   |       47900
 16384 | hqdb    |    3277 |       10 | hqadmin | <IDLE> in transaction | f       | 2011-02-08 15:51:20.133974+01 | 2011-02-08 15:51:20.096149+01 | 127.0.0.1   |       47901
 16384 | hqdb    |    3308 |       10 | hqadmin | <IDLE>                | f       | 2011-02-09 10:49:27.402197+01 | 2011-02-08 15:51:29.826321+01 | 127.0.0.1   |       47902
 16384 | hqdb    |
3309 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.572395+01 | 2011-02-08 15:51:29.865243+01 | 127.0.0.1 | 47903 16384 | hqdb | 3310 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.586273+01 | 2011-02-08 15:51:29.874346+01 | 127.0.0.1 | 47904 16384 | hqdb | 3311 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:03.024088+01 | 2011-02-08 15:51:29.883598+01 | 127.0.0.1 | 47905 16384 | hqdb | 3312 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:35.804457+01 | 2011-02-08 15:51:29.892925+01 | 127.0.0.1 | 47906 16384 | hqdb | 3418 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.580207+01 | 2011-02-08 15:51:55.56911+01 | 127.0.0.1 | 47910 16384 | hqdb | 3419 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.59781+01 | 2011-02-08 15:51:55.588609+01 | 127.0.0.1 | 47911 16384 | hqdb | 3422 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:02.668836+01 | 2011-02-08 15:51:55.603076+01 | 127.0.0.1 | 47914 16384 | hqdb | 3421 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.770427+01 | 2011-02-08 15:51:55.603086+01 | 127.0.0.1 | 47913 16384 | hqdb | 3420 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.680785+01 | 2011-02-08 15:51:55.637058+01 | 127.0.0.1 | 47912 16384 | hqdb | 18233 | 10 | hqadmin | SELECT * FROM pg_stat_activity; | f | 2011-02-09 10:49:29.688949+01 | 2011-02-09 10:48:13.031475+01 | | -1 (21 rows)
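
    Most of the backends above look like a long-lived connection pool, but two of them (procpid 3277 and 3312) sit in "<IDLE> in transaction", which holds locks and keeps vacuum from reclaiming dead rows. A minimal sketch for singling those sessions out from the shell; the user and database names are taken from the output above, and the column names assume the pre-9.2 pg_stat_activity layout shown there:

      psql -U hqadmin -d hqdb -c "
        SELECT procpid, usename, now() - query_start AS idle_for
        FROM pg_stat_activity
        WHERE current_query = '<IDLE> in transaction'
        ORDER BY query_start;"

    The idle_for column shows how long each such transaction has been sitting open; anything that stays on this list is worth tracing back to the application holding the connection.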

  • yum update failed

    - by Nemanja Djuric
    I have a problem doing yum update on my OpenVZ VPS; I get this error message:

      (56/69): glibc-devel-2.5-81.el5_8.7.x86_64.rpm | 2.4 MB 00:00
      (57/69): libstdc++-devel-4.1.2-52.el5_8.1.x86_64.rpm | 2.8 MB 00:00
      (58/69): binutils-2.17.50.0.6-20.el5_8.3.x86_64.rpm | 2.9 MB 00:00
      (59/69): cpp-4.1.2-52.el5_8.1.x86_64.rpm | 2.9 MB 00:00
      (60/69): device-mapper-multipath-0.4.7-48.el5_8.1.x86_64 | 3.0 MB 00:00
      (61/69): mysql-5.1.58-jason.1.x86_64.rpm | 3.5 MB 00:03
      (62/69): coreutils-5.97-34.el5_8.1.x86_64.rpm | 3.6 MB 00:00
      (63/69): gcc-c++-4.1.2-52.el5_8.1.x86_64.rpm | 3.8 MB 00:00
      (64/69): glibc-2.5-81.el5_8.7.x86_64.rpm | 4.8 MB 00:01
      (65/69): gcc-4.1.2-52.el5_8.1.x86_64.rpm | 5.3 MB 00:01
      (66/69): glibc-2.5-81.el5_8.7.i686.rpm | 5.4 MB 00:01
      (67/69): python-libs-2.4.3-46.el5_8.2.x86_64.rpm | 5.9 MB 00:01
      (68/69): mysql-server-5.1.58-jason.1.x86_64.rpm | 13 MB 00:07
      (69/69): glibc-common-2.5-81.el5_8.7.x86_64.rpm | 16 MB 00:03
      --------------------------------------------------------------------------------
      Total 2.4 MB/s | 106 MB 00:44
      Running rpm_check_debug
      Running Transaction Test
      Finished Transaction Test
      Transaction Check Error:
      file /etc/my.cnf from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/bin/mysqlaccess from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/man/man1/my_print_defaults.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/man/man1/mysql.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/man/man1/mysql_config.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/man/man1/mysql_find_rows.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/man/man1/mysql_waitpid.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/man/man1/mysqlaccess.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/man/man1/mysqladmin.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/man/man1/mysqldump.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/man/man1/mysqlshow.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/charsets/Index.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/charsets/cp1250.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/charsets/cp1251.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/czech/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/danish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/dutch/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/english/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/estonian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/french/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/german/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/greek/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/hungarian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/italian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/japanese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/korean/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/norwegian-ny/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/norwegian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/polish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/portuguese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/romanian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/russian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/serbian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/slovak/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/spanish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/swedish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /usr/share/mysql/ukrainian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386
      file /etc/my.cnf from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/bin/mysql_find_rows from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/bin/mysqlaccess from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/man/man1/my_print_defaults.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/man/man1/mysql.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/man/man1/mysql_config.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/man/man1/mysql_find_rows.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/man/man1/mysql_waitpid.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/man/man1/mysqlaccess.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/man/man1/mysqladmin.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/man/man1/mysqldump.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/man/man1/mysqlshow.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/charsets/Index.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/charsets/cp1250.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/charsets/cp1251.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/czech/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/danish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/dutch/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/english/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/estonian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/french/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/german/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/greek/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/hungarian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/italian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/japanese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/korean/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/norwegian-ny/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/norwegian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/polish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/portuguese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/romanian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/russian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/serbian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/slovak/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/spanish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/swedish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      file /usr/share/mysql/ukrainian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386
      Error Summary

    Thank you for your help,
    Best regards,
    Nemanja
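
    Every conflict above is between the third-party 64-bit build (mysql-5.1.58-jason.1.x86_64) and two 32-bit stock builds (mysql-5.0.77 and mysql-5.0.95, both i386) that the error implies are still installed, possibly left over from an interrupted update. A hedged sketch of one way out, assuming nothing on the VPS still needs the 32-bit client packages; check the dependency list yum prints before confirming:

      # Show every installed mysql package together with its architecture
      rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | grep -i mysql

      # If the i386 builds are unneeded, remove them and retry the update
      yum remove mysql.i386
      yum update

    The name.arch form in yum remove targets only the 32-bit packages; if anything depends on them, yum lists those dependents rather than removing them silently.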

  • Coldfusion on VPS, how much JVM heap memory?

    - by Steven Filipowicz
    Recently I got a VPS server and I'm running ColdFusion. The website was running fine until it got more and more traffic, and I started to encounter 'OutOfMemory' exceptions. I thought I'd simply raise the memory of the VPS server, but this didn't help. After some Google searches I found a setting in the CF Admin to set the JVM heap memory. It was at the default: Max Heap Size 512MB, and Min Heap Size was empty. After playing around a bit I have now set it to Min 50MB and Max 200MB; the good thing is that I'm not getting the 'OutOfMemory' exceptions anymore. So far so good! But with about 50 active visitors, the website starts to get slow. The CPU usage is only about 8% (Windows Task Manager), and the Task Manager also shows only about 30% of the 3GB RAM in use. So I'm thinking my values could be tweaked to use more of the RAM. Honestly, I don't understand these JVM heap settings, so I have no clue what a good setting is for me. I found a CF script that displays the memory usage; the details are:

      Heap Memory Usage - Committed 194 MB
      Heap Memory Usage - Initial 50.0 MB
      Heap Memory Usage - Max 194 MB
      Heap Memory Usage - Used 163 MB
      JVM - Free Memory 31.2 MB
      JVM - Max Memory 194 MB
      JVM - Total Memory 194 MB
      JVM - Used Memory 163 MB
      Memory Pool - Code Cache - Used 13.0 MB
      Memory Pool - PS Eden Space - Used 6.75 MB
      Memory Pool - PS Old Gen - Used 155 MB
      Memory Pool - PS Perm Gen - Used 64.2 MB
      Memory Pool - PS Survivor Space - Used 1.07 MB
      Non-Heap Memory Usage - Committed 77.4 MB
      Non-Heap Memory Usage - Initial 18.3 MB
      Non-Heap Memory Usage - Max 240 MB
      Non-Heap Memory Usage - Used 77.2 MB

      Free Allocated Memory: 30mb
      Total Memory Allocated: 194mb
      Max Memory Available to JVM: 194mb
      % of Free Allocated Memory: 16%
      % of Available Memory Allocated: 100%

    My JVM arguments are:

      -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib

    Can I give the JVM more memory? If so, what settings should I use? Thanks very much!!
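
    The stats above show the ceiling being hit: the heap is capped at 194 MB, 163 MB of it is in use, and "% of Available Memory Allocated" is 100%, while the OS still has roughly 2 GB of RAM free. The Min/Max Heap Size fields in CF Admin correspond to the standard -Xms and -Xmx JVM flags, which in a standalone install typically land in runtime/bin/jvm.config. A sketch of what a raised ceiling might look like there, assuming that file layout and purely illustrative sizes (not a measured recommendation; leave headroom for the OS and other processes):

      # Hypothetical jvm.config fragment: explicit floor, higher ceiling
      java.args=-server -Xms256m -Xmx1024m -XX:MaxPermSize=192m -XX:+UseParallelGC -Dsun.io.useCanonCaches=false -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib

    Whether 1 GB is the right figure depends on what else runs on the box; the point is that the 194/200 MB cap, not CPU or physical RAM, is the current bottleneck.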

  • How to diagnose frequent segfaults

    - by Andreas Gohr
    My server is logging frequent segmentation faults to /var/log/kern.log in different tools. So far I've seen them in Perl, PHP and rsync. All installed software is up-to-date Debian packages. Here's an exerpt from the log file: Mar 2 01:07:54 gaz kernel: [ 5316.246303] imapsync[4533]: segfault at 8b ip 00007fb448c98fe6 sp 00007ffff571dd68 error 4 in libperl.so.5.10.1[7fb448bd7000+164000] Mar 2 01:17:42 gaz kernel: [ 5904.354307] php5-cgi[4441]: segfault at 2bb3dc8 ip 0000000002bb3dc8 sp 00007fffbeeaae48 error 15 Mar 2 02:54:05 gaz kernel: [11687.922316] php5-cgi[4495]: segfault at 2d7acf9 ip 0000000002d7acf9 sp 00007fff60c6eb18 error 15 Mar 2 10:50:08 gaz kernel: [40250.390322] BUG: unable to handle kernel paging request at 00000000024b03f0 Mar 2 10:50:08 gaz kernel: [40250.390341] IP: [<00000000024b03f0>] 0x24b03f0 Mar 2 10:50:08 gaz kernel: [40250.390353] PGD 208c71067 PUD 21c811067 PMD 209329067 PTE 8000000211c88067 Mar 2 10:50:08 gaz kernel: [40250.390365] Oops: 0011 [#1] SMP Mar 2 10:50:08 gaz kernel: [40250.390373] last sysfs file: /sys/devices/pci0000:00/0000:00:12.0/host4/target4:0:0/4:0:0:0/block/sdb/stat Mar 2 10:50:08 gaz kernel: [40250.390386] CPU 1 Mar 2 10:50:08 gaz kernel: [40250.390392] Modules linked in: cpufreq_userspace cpufreq_stats cpufreq_powersave cpufreq_conservative xt_recent xt_tcpudp iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ ipv4 ip6table_filter ip6_tables xt_DSCP xt_TCPMSS ipt_LOG ipt_REJECT iptable_mangle iptable_filter xt_multiport xt_state xt_limit xt_conntrack nf_conntrack_ftp nf_conntrack ip_tables x_tables loop snd _hda_codec_atihdmi snd_hda_intel snd_hda_codec snd_hwdep snd_pcm radeon snd_timer ttm snd drm_kms_helper soundcore drm snd_page_alloc i2c_algo_bit shpchp i2c_piix4 edac_core pcspkr k8temp evdev edac_m ce_amd pci_hotplug i2c_core button ext3 jbd mbcache dm_mod powernow_k8 aacraid 3w_9xxx 3w_xxxx raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 md_mod sata_nv sata_sil sata_via sd_mod crc_t10dif ata_generic ahci pata_atiixp ohci_hcd libata r8169 mii thermal ehci_hcd processor thermal_sys scsi_mod usbcore nls_base [last unloaded: scsi_wait_scan] Mar 2 10:50:08 gaz kernel: [40250.390566] Pid: 11482, comm: munin-limits Not tainted 2.6.32-5-amd64 #1 MS-7368 Mar 2 10:50:08 gaz kernel: [40250.390576] RIP: 0010:[<00000000024b03f0>] [<00000000024b03f0>] 0x24b03f0 Mar 2 10:50:08 gaz kernel: [40250.390586] RSP: 0018:ffff88021cc8dec0 EFLAGS: 00010286 Mar 2 10:50:08 gaz kernel: [40250.390593] RAX: 000000001ddc1000 RBX: 0000000000000010 RCX: ffffffff810f9904 Mar 2 10:50:08 gaz kernel: [40250.390600] RDX: 0000000000000000 RSI: ffffea0007688200 RDI: 0000000000000286 Mar 2 10:50:08 gaz kernel: [40250.390608] RBP: 00000000ffffffea R08: 0000000000000025 R09: 7865542f30312e35 Mar 2 10:50:08 gaz kernel: [40250.390615] R10: 000000d01cc8ddf8 R11: 0000000000000246 R12: ffff88021cc8def8 Mar 2 10:50:08 gaz kernel: [40250.390622] R13: 0000000002295010 R14: 00000000022c9db0 R15: 0000000002488d78 Mar 2 10:50:08 gaz kernel: [40250.390630] FS: 00007f3b3c8b2700(0000) GS:ffff880008d00000(0000) knlGS:0000000000000000 Mar 2 10:50:08 gaz kernel: [40250.390641] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 Mar 2 10:50:08 gaz kernel: [40250.390648] CR2: 00000000024b03f0 CR3: 000000021c5d1000 CR4: 00000000000006e0 Mar 2 10:50:08 gaz kernel: [40250.390656] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Mar 2 10:50:08 gaz kernel: [40250.390663] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 
0000000000000400 Mar 2 10:50:08 gaz kernel: [40250.390671] Process munin-limits (pid: 11482, threadinfo ffff88021cc8c000, task ffff88021bf59530) Mar 2 10:50:08 gaz kernel: [40250.390681] Stack: Mar 2 10:50:08 gaz kernel: [40250.390687] ffffffff810f1d4a ffff880208c63228 0000000000000000 00007fffc2dcecc0 Mar 2 10:50:08 gaz kernel: [40250.390697] <0> 00000000024ba2b0 0000000002295010 ffffffff810f1e3d 0000000000000004 Mar 2 10:50:08 gaz kernel: [40250.390712] <0> ffff88021bf59530 ffff88021c4edc00 ffffffff812fe0b6 ffff88021c4edc60 Mar 2 10:50:08 gaz kernel: [40250.390732] Call Trace: Mar 2 10:50:08 gaz kernel: [40250.390742] [<ffffffff810f1d4a>] ? vfs_fstatat+0x2c/0x57 Mar 2 10:50:08 gaz kernel: [40250.390750] [<ffffffff810f1e3d>] ? sys_newstat+0x11/0x30 Mar 2 10:50:08 gaz kernel: [40250.390760] [<ffffffff812fe0b6>] ? do_page_fault+0x2e0/0x2fc Mar 2 10:50:08 gaz kernel: [40250.390768] [<ffffffff812fbf55>] ? page_fault+0x25/0x30 Mar 2 10:50:08 gaz kernel: [40250.390777] [<ffffffff81010b42>] ? system_call_fastpath+0x16/0x1b Mar 2 10:50:08 gaz kernel: [40250.390783] Code: Bad RIP value. Mar 2 10:50:08 gaz kernel: [40250.390791] RIP [<00000000024b03f0>] 0x24b03f0 Mar 2 10:50:08 gaz kernel: [40250.390799] RSP <ffff88021cc8dec0> Mar 2 10:50:08 gaz kernel: [40250.390805] CR2: 00000000024b03f0 Mar 2 10:50:08 gaz kernel: [40250.391051] ---[ end trace 1cc1473b539c7f6e ]--- Mar 2 11:42:20 gaz kernel: [43382.242301] php5-cgi[10963]: segfault at d81160 ip 0000000000d81160 sp 00007fff3adcb058 error 15 Mar 2 21:51:14 gaz kernel: [79916.418302] php5-cgi[20089]: segfault at 1c59dc8 ip 0000000001c59dc8 sp 00007fff9b877fb8 error 15 Mar 3 03:45:01 gaz kernel: [101143.334305] munin-update[22519] general protection ip:7f516dce204c sp:7fff6049a978 error:0 in libperl.so.5.10.1[7f516dc7d000+164000] Mar 3 11:22:37 gaz kernel: [128599.570307] php5-cgi[22888]: segfault at 36485a8 ip 00000000036485a8 sp 00007fff2d56e1c8 error 15 Mar 4 08:32:17 gaz kernel: [204779.842304] php5-cgi[22090]: segfault at 18 ip 0000000000689e5e sp 00007fff677a6a48 error 6 in php5-cgi[400000+6f9000] Mar 4 10:01:02 gaz kernel: [210104.434706] rsync[22236] general protection ip:7f14a07137f9 sp:7fff88f940b8 error:0 in libc-2.11.2.so[7f14a069d000+158000] Mar 4 11:32:22 gaz kernel: [215584.262316] BUG: unable to handle kernel paging request at 00000000ffffff9c Mar 4 11:32:22 gaz kernel: [215584.262331] IP: [<00000000ffffff9c>] 0xffffff9c Mar 4 11:32:22 gaz kernel: [215584.262343] PGD 0 Mar 4 11:32:22 gaz kernel: [215584.262350] Oops: 0010 [#2] SMP Mar 4 11:32:22 gaz kernel: [215584.262359] last sysfs file: /sys/devices/pci0000:00/0000:00:12.0/host4/target4:0:0/4:0:0:0/block/sdb/stat Mar 4 11:32:22 gaz kernel: [215584.262371] CPU 1 Mar 4 11:32:22 gaz kernel: [215584.262378] Modules linked in: cpufreq_userspace cpufreq_stats cpufreq_powersave cpufreq_conservative xt_recent xt_tcpudp iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 ip6table_filter ip6_tables xt_DSCP xt_TCPMSS ipt_LOG ipt_REJECT iptable_mangle iptable_filter xt_multiport xt_state xt_limit xt_conntrack nf_conntrack_ftp nf_conntrack ip_tables x_tables loop snd_hda_codec_atihdmi snd_hda_intel snd_hda_codec snd_hwdep snd_pcm radeon snd_timer ttm snd drm_kms_helper soundcore drm snd_page_alloc i2c_algo_bit shpchp i2c_piix4 edac_core pcspkr k8temp evdev edac_mce_amd pci_hotplug i2c_core button ext3 jbd mbcache dm_mod powernow_k8 aacraid 3w_9xxx 3w_xxxx raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 md_mod sata_nv sata_sil sata_via sd_mod 
crc_t10dif ata_generic ahci pata_atiixp ohci_hcd libata r8169 mii thermal ehci_hcd processor thermal_sys scsi_mod usbcore nls_base [last unloaded: scsi_wait_scan] Mar 4 11:32:22 gaz kernel: [215584.262552] Pid: 1960, comm: proxymap Tainted: G D 2.6.32-5-amd64 #1 MS-7368 Mar 4 11:32:22 gaz kernel: [215584.262563] RIP: 0010:[<00000000ffffff9c>] [<00000000ffffff9c>] 0xffffff9c Mar 4 11:32:22 gaz kernel: [215584.262573] RSP: 0018:ffff880209257e00 EFLAGS: 00010212 Mar 4 11:32:22 gaz kernel: [215584.262580] RAX: ffff8801514eb780 RBX: ffffffff810efb2d RCX: 0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262590] RDX: 0000000000000020 RSI: 0000000000000001 RDI: ffff8801514eb780 Mar 4 11:32:22 gaz kernel: [215584.262600] RBP: 00000000ffffffe9 R08: 0000000000000000 R09: 0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262611] R10: ffff880209257e78 R11: ffffffff81152c7c R12: 0000000000000001 Mar 4 11:32:22 gaz kernel: [215584.262622] R13: 0000000000008001 R14: 0000000000000024 R15: 00000000ffffff9c Mar 4 11:32:22 gaz kernel: [215584.262633] FS: 00007fca4de35700(0000) GS:ffff880008d00000(0000) knlGS:0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262644] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 Mar 4 11:32:22 gaz kernel: [215584.262650] CR2: 00000000ffffff9c CR3: 00000001c9cbb000 CR4: 00000000000006e0 Mar 4 11:32:22 gaz kernel: [215584.262661] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Mar 4 11:32:22 gaz kernel: [215584.262671] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Mar 4 11:32:22 gaz kernel: [215584.262682] Process proxymap (pid: 1960, threadinfo ffff880209256000, task ffff88021c4b1c40) Mar 4 11:32:22 gaz kernel: [215584.262693] Stack: Mar 4 11:32:22 gaz kernel: [215584.262698] ffffffff810f8566 ffff880209257e78 ffff88021c7bf000 ffff88021c7bf0c8 Mar 4 11:32:22 gaz kernel: [215584.262709] <0> 0000800000000000 ffff88021fc0f000 ffff880209257e78 00000000fffffffe Mar 4 11:32:22 gaz kernel: [215584.262724] <0> ffffffff810e5881 ffff880209257f48 0000000000000286 ffff88021fc0f000 Mar 4 11:32:22 gaz kernel: [215584.262743] Call Trace: Mar 4 11:32:22 gaz kernel: [215584.262753] [<ffffffff810f8566>] ? do_filp_open+0xa7/0x94b Mar 4 11:32:22 gaz kernel: [215584.262763] [<ffffffff810e5881>] ? virt_to_head_page+0x9/0x2a Mar 4 11:32:22 gaz kernel: [215584.262771] [<ffffffff810f9904>] ? user_path_at+0x52/0x79 Mar 4 11:32:22 gaz kernel: [215584.262779] [<ffffffff810cfec1>] ? get_unmapped_area+0xd7/0x139 Mar 4 11:32:22 gaz kernel: [215584.262787] [<ffffffff811019d5>] ? alloc_fd+0x67/0x10c Mar 4 11:32:22 gaz kernel: [215584.262795] [<ffffffff810eceaf>] ? do_sys_open+0x55/0xfc Mar 4 11:32:22 gaz kernel: [215584.262804] [<ffffffff81010b42>] ? system_call_fastpath+0x16/0x1b Mar 4 11:32:22 gaz kernel: [215584.262811] Code: Bad RIP value. Mar 4 11:32:22 gaz kernel: [215584.262819] RIP [<00000000ffffff9c>] 0xffffff9c Mar 4 11:32:22 gaz kernel: [215584.262828] RSP <ffff880209257e00> Mar 4 11:32:22 gaz kernel: [215584.262833] CR2: 00000000ffffff9c Mar 4 11:32:22 gaz kernel: [215584.263077] ---[ end trace 1cc1473b539c7f6f ]--- As you can see there are segfaults, a general protection fault and a Kernel Oops. My first guess was that there's a Hardware problem of some sort and I asked my Hoster (it's a rented root server) to do a full hardwarecheck - they did, but couldn't find any problem. I don't know what and how they checked but their support team is usually quite good. I ran memtester and cpuburn myself and couldn't find any error either. 
Unfortunately I have no reliable way to reproduce these segfaults, they seem to be more or less random. On a hunch I disabled the firewall of the system and ran one of the programs that segfaulted regularily (imapsync) and it seemed to take longer to segfault than before, so the problem might be related to the network stack. Or could just be a random thing. Here are the kernel specs: # uname -a Linux gaz 2.6.32-5-amd64 #1 SMP Wed Jan 12 03:40:32 UTC 2011 x86_64 GNU/Linux # cat /etc/debian_version 6.0 # lsmod Module Size Used by cpufreq_userspace 1992 0 cpufreq_stats 2659 0 cpufreq_powersave 902 0 cpufreq_conservative 5162 0 xt_recent 5977 0 xt_tcpudp 2319 0 iptable_nat 4299 0 nf_nat 13388 1 iptable_nat nf_conntrack_ipv4 9833 3 iptable_nat,nf_nat nf_defrag_ipv4 1139 1 nf_conntrack_ipv4 ip6table_filter 2384 0 ip6_tables 15075 1 ip6table_filter xt_DSCP 1995 0 xt_TCPMSS 2919 0 ipt_LOG 4518 0 ipt_REJECT 1953 0 iptable_mangle 2817 0 iptable_filter 2258 0 xt_multiport 2267 0 xt_state 1303 0 xt_limit 1782 0 xt_conntrack 2407 0 nf_conntrack_ftp 5537 0 nf_conntrack 46535 6 iptable_nat,nf_nat,nf_conntrack_ipv4,xt_state,xt_conntrack,nf_conntrack_ftp ip_tables 13899 3 iptable_nat,iptable_mangle,iptable_filter x_tables 12845 13 xt_recent,xt_tcpudp,iptable_nat,ip6_tables,xt_DSCP,xt_TCPMSS,ipt_LOG,ipt_REJECT,xt_multiport,xt_state,xt_limit,xt_conntrack,ip_tables loop 11799 0 radeon 573996 0 ttm 39986 1 radeon drm_kms_helper 20065 1 radeon snd_hda_codec_atihdmi 2251 1 drm 142359 3 radeon,ttm,drm_kms_helper snd_hda_intel 20019 0 i2c_algo_bit 4225 1 radeon pcspkr 1699 0 i2c_piix4 8328 0 snd_hda_codec 54244 2 snd_hda_codec_atihdmi,snd_hda_intel i2c_core 15712 5 radeon,drm_kms_helper,drm,i2c_algo_bit,i2c_piix4 snd_hwdep 5380 1 snd_hda_codec snd_pcm 60503 2 snd_hda_intel,snd_hda_codec snd_timer 15582 1 snd_pcm snd 46446 5 snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_timer soundcore 4598 1 snd evdev 7352 3 snd_page_alloc 6249 2 snd_hda_intel,snd_pcm k8temp 3283 0 edac_core 29261 0 edac_mce_amd 6433 0 shpchp 26264 0 pci_hotplug 21203 1 shpchp button 4650 0 ext3 106518 2 jbd 37085 1 ext3 mbcache 5050 1 ext3 dm_mod 53754 0 powernow_k8 10978 1 aacraid 59779 0 3w_9xxx 28684 0 3w_xxxx 20569 0 raid10 17809 0 raid456 44500 0 async_raid6_recov 5170 1 raid456 async_pq 3479 2 raid456,async_raid6_recov raid6_pq 77179 2 async_raid6_recov,async_pq async_xor 2478 3 raid456,async_raid6_recov,async_pq xor 4380 1 async_xor async_memcpy 1198 2 raid456,async_raid6_recov async_tx 1734 5 raid456,async_raid6_recov,async_pq,async_xor,async_memcpy raid1 18431 3 raid0 5517 0 md_mod 73824 7 raid10,raid456,raid1,raid0 sata_nv 19166 0 sata_sil 7412 0 sata_via 7928 0 sd_mod 29889 8 crc_t10dif 1276 1 sd_mod ata_generic 3047 0 ahci 32374 6 r8169 29229 0 mii 3210 1 r8169 thermal 11674 0 pata_atiixp 3489 0 libata 133632 6 sata_nv,sata_sil,sata_via,ata_generic,ahci,pata_atiixp ohci_hcd 19212 0 ehci_hcd 31151 0 processor 29935 1 powernow_k8 thermal_sys 11942 2 thermal,processor scsi_mod 122149 5 aacraid,3w_9xxx,3w_xxxx,sd_mod,libata usbcore 122034 3 ohci_hcd,ehci_hcd nls_base 6377 1 usbcore # free total used free shared buffers cached Mem: 8166128 1228036 6938092 0 140412 782060 -/+ buffers/cache: 305564 7860564 Swap: 2102456 0 2102456 So, basically my questions are: How can I diagnose this further? Is there any data in the log above that could help me to isolate the troublemaker? Are there any known problems with the above hardware/software I overlooked when googling for it? 
    Is there a way to prevent the kernel from autoloading modules? (I probably don't need all of these modules, and one of them might be the culprit.)
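
    On the last question: Debian reads blacklist entries from /etc/modprobe.d, so suspect modules can be kept from autoloading one at a time. A minimal sketch; the module name here is an illustrative pick from the lsmod listing above, not a known culprit, and dropping the NIC driver on a remote box would obviously cut off access:

      # Keep a module from being autoloaded at boot
      echo "blacklist r8169" >> /etc/modprobe.d/local-blacklist.conf
      update-initramfs -u   # so the blacklist also holds in the initramfs
      reboot

    Note that a blacklist line only blocks alias-driven autoloading; anything listed in /etc/modules or loaded explicitly by name still comes up. Given that the faults span Perl, PHP, and rsync and the oopses show "Bad RIP value", flaky RAM or another shared hardware layer remains the usual suspect despite the vendor's check, so a longer memtester run may still be worthwhile.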

  • Finding out why Dell Controller is Degraded

    - by Kyle Brandt
    I installed open manage on a couple of my PE 2950s for snmp monitoring of the RAID. All the checks seem to come back okay except for controllerState: [root@aMachine ~]# snmpwalk -v 2c -c bestNotToPostPasswords myMachine -m +StorageManagement-MIB controllerstate StorageManagement-MIB::controllerState.1 = INTEGER: degraded(6) Other checks seems to indicate the battery, LD, and physicals disks are all good unless I missing something. Can anyone tell if I am missing something or neglecting something import in my RAID monitoring/understanding? I get degraded for both these servers I have set up. A walk of the entire storage management tree for on of them: StorageManagement-MIB::softwareVersion.0 = STRING: "3.2.0" StorageManagement-MIB::globalStatus.0 = INTEGER: warning(2) StorageManagement-MIB::softwareManufacturer.0 = STRING: "Dell Inc." StorageManagement-MIB::softwareProduct.0 = STRING: "Server Administrator (Storage Management)" StorageManagement-MIB::softwareDescription.0 = STRING: "Configuration and monitoring of disk storage devices." StorageManagement-MIB::displayName.0 = STRING: "Server Administrator (Storage Management)" StorageManagement-MIB::description.0 = STRING: "Configuration and monitoring of disk storage devices." StorageManagement-MIB::agentVendor.0 = STRING: "Dell Inc." StorageManagement-MIB::agentTimeStamp.0 = INTEGER: 1273842310 StorageManagement-MIB::agentGetTimeout.0 = INTEGER: 5 StorageManagement-MIB::agentModifiers.0 = INTEGER: 0 StorageManagement-MIB::agentRefreshRate.0 = INTEGER: 300 StorageManagement-MIB::agentMibVersion.0 = STRING: "3.2" StorageManagement-MIB::agentManagementSoftwareURLName.0 = "" StorageManagement-MIB::agentGlobalSystemStatus.0 = INTEGER: nonCritical(4) StorageManagement-MIB::agentLastGlobalSystemStatus.0 = INTEGER: ok(3) StorageManagement-MIB::agentSmartThermalShutdown.0 = INTEGER: notApplicable(3) StorageManagement-MIB::controllerNumber.1 = INTEGER: 1 StorageManagement-MIB::controllerName.1 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::controllerVendor.1 = STRING: "DELL" StorageManagement-MIB::controllerType.1 = INTEGER: sas(6) StorageManagement-MIB::controllerState.1 = INTEGER: degraded(6) StorageManagement-MIB::controllerRebuildRateInPercent.1 = INTEGER: 30 StorageManagement-MIB::controllerFWVersion.1 = STRING: "5.0.2-0003" StorageManagement-MIB::controllerCacheSizeInMB.1 = INTEGER: 256 StorageManagement-MIB::controllerCacheSizeInBytes.1 = INTEGER: 0 StorageManagement-MIB::controllerPhysicalDeviceCount.1 = INTEGER: 5 StorageManagement-MIB::controllerLogicalDeviceCount.1 = INTEGER: 1 StorageManagement-MIB::controllerRollUpStatus.1 = INTEGER: nonCritical(4) StorageManagement-MIB::controllerComponentStatus.1 = INTEGER: nonCritical(4) StorageManagement-MIB::controllerNexusID.1 = STRING: "\\0" StorageManagement-MIB::controllerAlarmState.1 = INTEGER: disabled(2) StorageManagement-MIB::controllerDriverVersion.1 = STRING: "00.00.03.05 " StorageManagement-MIB::controllerPCISlot.1 = STRING: "embedded" StorageManagement-MIB::controllerClusterMode.1 = INTEGER: notApplicable(99) StorageManagement-MIB::controllerMinFWVersion.1 = STRING: "5.2.1-0067" StorageManagement-MIB::controllerMinDriverVersion.1 = STRING: "00.00.03.21" StorageManagement-MIB::controllerChannelCount.1 = INTEGER: 2 StorageManagement-MIB::controllerReconstructRate.1 = INTEGER: 30 StorageManagement-MIB::controllerPatrolReadRate.1 = INTEGER: 30 StorageManagement-MIB::controllerBGIRate.1 = INTEGER: 30 StorageManagement-MIB::controllerCheckConsistencyRate.1 = INTEGER: 30 
StorageManagement-MIB::controllerPatrolReadMode.1 = INTEGER: automatic(1) StorageManagement-MIB::controllerPatrolReadState.1 = INTEGER: stopped(1) StorageManagement-MIB::controllerPatrolReadIterations.1 = INTEGER: 162 StorageManagement-MIB::controllerEntry.57.1 = INTEGER: 99 StorageManagement-MIB::controllerEntry.58.1 = INTEGER: 99 StorageManagement-MIB::channelNumber.1 = INTEGER: 1 StorageManagement-MIB::channelNumber.2 = INTEGER: 2 StorageManagement-MIB::channelName.1 = STRING: "Connector 0" StorageManagement-MIB::channelName.2 = STRING: "Connector 1" StorageManagement-MIB::channelState.1 = INTEGER: ready(1) StorageManagement-MIB::channelState.2 = INTEGER: ready(1) StorageManagement-MIB::channelRollUpStatus.1 = INTEGER: ok(3) StorageManagement-MIB::channelRollUpStatus.2 = INTEGER: ok(3) StorageManagement-MIB::channelComponentStatus.1 = INTEGER: ok(3) StorageManagement-MIB::channelComponentStatus.2 = INTEGER: ok(3) StorageManagement-MIB::channelNexusID.1 = STRING: "\\0\\0" StorageManagement-MIB::channelNexusID.2 = STRING: "\\0\\1" StorageManagement-MIB::channelBusType.1 = INTEGER: sas(8) StorageManagement-MIB::channelBusType.2 = INTEGER: sas(8) StorageManagement-MIB::enclosureNumber.1 = INTEGER: 1 StorageManagement-MIB::enclosureName.1 = STRING: "Backplane" StorageManagement-MIB::enclosureVendor.1 = STRING: "DELL" StorageManagement-MIB::enclosureState.1 = INTEGER: ready(1) StorageManagement-MIB::enclosureProductID.1 = STRING: "BACKPLANE " StorageManagement-MIB::enclosureType.1 = INTEGER: internal(1) StorageManagement-MIB::enclosureChannelNumber.1 = INTEGER: 0 StorageManagement-MIB::enclosureRollUpStatus.1 = INTEGER: ok(3) StorageManagement-MIB::enclosureComponentStatus.1 = INTEGER: ok(3) StorageManagement-MIB::enclosureNexusID.1 = STRING: "\\0\\0\\0" StorageManagement-MIB::enclosureFirmwareVersion.1 = STRING: "1.00" StorageManagement-MIB::enclosureSASAddress.1 = STRING: "50019090B4C67200" StorageManagement-MIB::arrayDiskNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskNumber.2 = INTEGER: 2 StorageManagement-MIB::arrayDiskNumber.3 = INTEGER: 3 StorageManagement-MIB::arrayDiskNumber.4 = INTEGER: 4 StorageManagement-MIB::arrayDiskName.1 = STRING: "Physical Disk 0:0:0" StorageManagement-MIB::arrayDiskName.2 = STRING: "Physical Disk 0:0:1" StorageManagement-MIB::arrayDiskName.3 = STRING: "Physical Disk 0:0:2" StorageManagement-MIB::arrayDiskName.4 = STRING: "Physical Disk 0:0:3" StorageManagement-MIB::arrayDiskVendor.1 = STRING: "DELL " StorageManagement-MIB::arrayDiskVendor.2 = STRING: "DELL " StorageManagement-MIB::arrayDiskVendor.3 = STRING: "DELL " StorageManagement-MIB::arrayDiskVendor.4 = STRING: "DELL " StorageManagement-MIB::arrayDiskState.1 = INTEGER: online(3) StorageManagement-MIB::arrayDiskState.2 = INTEGER: online(3) StorageManagement-MIB::arrayDiskState.3 = INTEGER: online(3) StorageManagement-MIB::arrayDiskState.4 = INTEGER: online(3) StorageManagement-MIB::arrayDiskProductID.1 = STRING: "ST3146755SS " StorageManagement-MIB::arrayDiskProductID.2 = STRING: "ST3146755SS " StorageManagement-MIB::arrayDiskProductID.3 = STRING: "ST3146755SS " StorageManagement-MIB::arrayDiskProductID.4 = STRING: "ST3146755SS " StorageManagement-MIB::arrayDiskSerialNo.1 = STRING: "3LN0LRL0 " StorageManagement-MIB::arrayDiskSerialNo.2 = STRING: "3LN0JYJS " StorageManagement-MIB::arrayDiskSerialNo.3 = STRING: "3LN0LR0V " StorageManagement-MIB::arrayDiskSerialNo.4 = STRING: "3LN0JH97 " StorageManagement-MIB::arrayDiskRevision.1 = STRING: "T106" StorageManagement-MIB::arrayDiskRevision.2 = STRING: 
"T106" StorageManagement-MIB::arrayDiskRevision.3 = STRING: "T106" StorageManagement-MIB::arrayDiskRevision.4 = STRING: "T106" StorageManagement-MIB::arrayDiskEnclosureID.1 = STRING: "0" StorageManagement-MIB::arrayDiskEnclosureID.2 = STRING: "0" StorageManagement-MIB::arrayDiskEnclosureID.3 = STRING: "0" StorageManagement-MIB::arrayDiskEnclosureID.4 = STRING: "0" StorageManagement-MIB::arrayDiskChannel.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskChannel.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskChannel.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskChannel.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskLengthInMB.1 = INTEGER: 139392 StorageManagement-MIB::arrayDiskLengthInMB.2 = INTEGER: 139392 StorageManagement-MIB::arrayDiskLengthInMB.3 = INTEGER: 139392 StorageManagement-MIB::arrayDiskLengthInMB.4 = INTEGER: 139392 StorageManagement-MIB::arrayDiskLengthInBytes.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskLengthInBytes.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskLengthInBytes.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskLengthInBytes.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInMB.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInMB.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInMB.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInMB.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInBytes.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInBytes.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInBytes.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskLargestContiguousFreeSpaceInBytes.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskTargetID.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskTargetID.2 = INTEGER: 1 StorageManagement-MIB::arrayDiskTargetID.3 = INTEGER: 2 StorageManagement-MIB::arrayDiskTargetID.4 = INTEGER: 3 StorageManagement-MIB::arrayDiskLunID.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskLunID.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskLunID.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskLunID.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskUsedSpaceInMB.1 = INTEGER: 139392 StorageManagement-MIB::arrayDiskUsedSpaceInMB.2 = INTEGER: 139392 StorageManagement-MIB::arrayDiskUsedSpaceInMB.3 = INTEGER: 139392 StorageManagement-MIB::arrayDiskUsedSpaceInMB.4 = INTEGER: 139392 StorageManagement-MIB::arrayDiskUsedSpaceInBytes.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskUsedSpaceInBytes.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskUsedSpaceInBytes.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskUsedSpaceInBytes.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInMB.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInMB.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInMB.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInMB.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInBytes.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInBytes.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInBytes.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskFreeSpaceInBytes.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskBusType.1 = INTEGER: sas(8) StorageManagement-MIB::arrayDiskBusType.2 = INTEGER: sas(8) StorageManagement-MIB::arrayDiskBusType.3 = INTEGER: sas(8) StorageManagement-MIB::arrayDiskBusType.4 = INTEGER: sas(8) StorageManagement-MIB::arrayDiskSpareState.1 = INTEGER: notASpare(5) StorageManagement-MIB::arrayDiskSpareState.2 = INTEGER: notASpare(5) 
StorageManagement-MIB::arrayDiskSpareState.3 = INTEGER: notASpare(5) StorageManagement-MIB::arrayDiskSpareState.4 = INTEGER: notASpare(5) StorageManagement-MIB::arrayDiskRollUpStatus.1 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskRollUpStatus.2 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskRollUpStatus.3 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskRollUpStatus.4 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskComponentStatus.1 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskComponentStatus.2 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskComponentStatus.3 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskComponentStatus.4 = INTEGER: ok(3) StorageManagement-MIB::arrayDiskNexusID.1 = STRING: "\\0\\0\\0\\0" StorageManagement-MIB::arrayDiskNexusID.2 = STRING: "\\0\\0\\0\\1" StorageManagement-MIB::arrayDiskNexusID.3 = STRING: "\\0\\0\\0\\2" StorageManagement-MIB::arrayDiskNexusID.4 = STRING: "\\0\\0\\0\\3" StorageManagement-MIB::arrayDiskPartNumber.1 = STRING: "SG0DR2381253172FLRL0A00 " StorageManagement-MIB::arrayDiskPartNumber.2 = STRING: "SG0DR2381253172FJYJSA00 " StorageManagement-MIB::arrayDiskPartNumber.3 = STRING: "SG0DR2381253172FLR0VA00 " StorageManagement-MIB::arrayDiskPartNumber.4 = STRING: "SG0DR2381253172FJH97A00 " StorageManagement-MIB::arrayDiskSASAddress.1 = STRING: "5000C50002380201" StorageManagement-MIB::arrayDiskSASAddress.2 = STRING: "5000C50002385B89" StorageManagement-MIB::arrayDiskSASAddress.3 = STRING: "5000C50002385AA9" StorageManagement-MIB::arrayDiskSASAddress.4 = STRING: "5000C500023841E1" StorageManagement-MIB::arrayDiskSmartAlertIndication.1 = INTEGER: no(1) StorageManagement-MIB::arrayDiskSmartAlertIndication.2 = INTEGER: no(1) StorageManagement-MIB::arrayDiskSmartAlertIndication.3 = INTEGER: no(1) StorageManagement-MIB::arrayDiskSmartAlertIndication.4 = INTEGER: no(1) StorageManagement-MIB::arrayDiskManufactureDay.1 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureDay.2 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureDay.3 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureDay.4 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureWeek.1 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureWeek.2 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureWeek.3 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureWeek.4 = STRING: "07" StorageManagement-MIB::arrayDiskManufactureYear.1 = STRING: "2005" StorageManagement-MIB::arrayDiskManufactureYear.2 = STRING: "2005" StorageManagement-MIB::arrayDiskManufactureYear.3 = STRING: "2005" StorageManagement-MIB::arrayDiskManufactureYear.4 = STRING: "2005" StorageManagement-MIB::arrayDiskMediaType.1 = INTEGER: hdd(2) StorageManagement-MIB::arrayDiskMediaType.2 = INTEGER: hdd(2) StorageManagement-MIB::arrayDiskMediaType.3 = INTEGER: hdd(2) StorageManagement-MIB::arrayDiskMediaType.4 = INTEGER: hdd(2) StorageManagement-MIB::arrayDiskEntry.36.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskEntry.36.2 = INTEGER: 1 StorageManagement-MIB::arrayDiskEntry.36.3 = INTEGER: 1 StorageManagement-MIB::arrayDiskEntry.36.4 = INTEGER: 1 StorageManagement-MIB::arrayDiskEntry.40.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.40.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.40.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.40.4 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.41.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.41.2 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.41.3 = INTEGER: 0 StorageManagement-MIB::arrayDiskEntry.41.4 = INTEGER: 0 
StorageManagement-MIB::arrayDiskEnclosureConnectionNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionNumber.2 = INTEGER: 2 StorageManagement-MIB::arrayDiskEnclosureConnectionNumber.3 = INTEGER: 3 StorageManagement-MIB::arrayDiskEnclosureConnectionNumber.4 = INTEGER: 4 StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskName.1 = STRING: "Physical Disk 0:0:0" StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskName.2 = STRING: "Physical Disk 0:0:1" StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskName.3 = STRING: "Physical Disk 0:0:2" StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskName.4 = STRING: "Physical Disk 0:0:3" StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskNumber.2 = INTEGER: 2 StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskNumber.3 = INTEGER: 3 StorageManagement-MIB::arrayDiskEnclosureConnectionArrayDiskNumber.4 = INTEGER: 4 StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureName.1 = STRING: "Backplane" StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureName.2 = STRING: "Backplane" StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureName.3 = STRING: "Backplane" StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureName.4 = STRING: "Backplane" StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureNumber.2 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureNumber.3 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionEnclosureNumber.4 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionControllerName.1 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::arrayDiskEnclosureConnectionControllerName.2 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::arrayDiskEnclosureConnectionControllerName.3 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::arrayDiskEnclosureConnectionControllerName.4 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::arrayDiskEnclosureConnectionControllerNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionControllerNumber.2 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionControllerNumber.3 = INTEGER: 1 StorageManagement-MIB::arrayDiskEnclosureConnectionControllerNumber.4 = INTEGER: 1 StorageManagement-MIB::batteryNumber.1 = INTEGER: 1 StorageManagement-MIB::batteryName.1 = STRING: "Battery 0" StorageManagement-MIB::batteryVendor.1 = STRING: "DELL" StorageManagement-MIB::batteryState.1 = INTEGER: ready(1) StorageManagement-MIB::batteryRollUpStatus.1 = INTEGER: ok(3) StorageManagement-MIB::batteryComponentStatus.1 = INTEGER: ok(3) StorageManagement-MIB::batteryNexusID.1 = STRING: "\\0\\0" StorageManagement-MIB::batteryPredictedCapacity.1 = INTEGER: ready(2) StorageManagement-MIB::batteryNextLearnTime.1 = INTEGER: 21 StorageManagement-MIB::batteryLearnState.1 = INTEGER: idle(16) StorageManagement-MIB::batteryEntry.13.1 = INTEGER: 0 StorageManagement-MIB::batteryMaxLearnDelay.1 = INTEGER: 168 StorageManagement-MIB::batteryConnectionNumber.1 = INTEGER: 1 StorageManagement-MIB::batteryConnectionBatteryName.1 = STRING: "Battery 0" StorageManagement-MIB::batteryConnectionBatteryNumber.1 = INTEGER: 1 StorageManagement-MIB::batteryConnectionControllerName.1 = STRING: "PERC 5/i Integrated" StorageManagement-MIB::batteryConnectionControllerNumber.1 = INTEGER: 1 
StorageManagement-MIB::virtualDiskNumber.1 = INTEGER: 1 StorageManagement-MIB::virtualDiskName.1 = STRING: "Virtual Disk 0" StorageManagement-MIB::virtualDiskDeviceName.1 = STRING: "/dev/sda" StorageManagement-MIB::virtualDiskState.1 = INTEGER: ready(1) StorageManagement-MIB::virtualDiskLengthInMB.1 = INTEGER: 278784 StorageManagement-MIB::virtualDiskLengthInBytes.1 = INTEGER: 0 StorageManagement-MIB::virtualDiskWritePolicy.1 = INTEGER: writeBack(3) StorageManagement-MIB::virtualDiskReadPolicy.1 = INTEGER: noReadAhead(5) StorageManagement-MIB::virtualDiskCachePolicy.1 = INTEGER: not-applicable(99) StorageManagement-MIB::virtualDiskLayout.1 = INTEGER: raid-10(10) StorageManagement-MIB::virtualDiskCurStripeSizeInMB.1 = INTEGER: 0 StorageManagement-MIB::virtualDiskCurStripeSizeInBytes.1 = INTEGER: 65536 StorageManagement-MIB::virtualDiskTargetID.1 = INTEGER: 0 StorageManagement-MIB::virtualDiskRollUpStatus.1 = INTEGER: ok(3) StorageManagement-MIB::virtualDiskComponentStatus.1 = INTEGER: ok(3) StorageManagement-MIB::virtualDiskNexusID.1 = STRING: "\\0\\0" StorageManagement-MIB::virtualDiskArrayDiskType.1 = INTEGER: sas(1) StorageManagement-MIB::virtualDiskEntry.23.1 = INTEGER: 2 StorageManagement-MIB::virtualDiskEntry.24.1 = INTEGER: 0 StorageManagement-MIB::arrayDiskLogicalConnectionNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskLogicalConnectionNumber.2 = INTEGER: 2 StorageManagement-MIB::arrayDiskLogicalConnectionNumber.3 = INTEGER: 3 StorageManagement-MIB::arrayDiskLogicalConnectionNumber.4 = INTEGER: 4 StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskName.1 = STRING: "Physical Disk 0:0:0" StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskName.2 = STRING: "Physical Disk 0:0:1" StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskName.3 = STRING: "Physical Disk 0:0:2" StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskName.4 = STRING: "Physical Disk 0:0:3" StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskNumber.2 = INTEGER: 2 StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskNumber.3 = INTEGER: 3 StorageManagement-MIB::arrayDiskLogicalConnectionArrayDiskNumber.4 = INTEGER: 4 StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskName.1 = STRING: "Virtual Disk 0" StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskName.2 = STRING: "Virtual Disk 0" StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskName.3 = STRING: "Virtual Disk 0" StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskName.4 = STRING: "Virtual Disk 0" StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskNumber.1 = INTEGER: 1 StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskNumber.2 = INTEGER: 1 StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskNumber.3 = INTEGER: 1 StorageManagement-MIB::arrayDiskLogicalConnectionVirtualDiskNumber.4 = INTEGER: 1
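
    One detail stands out in the walk itself: controllerFWVersion reads "5.0.2-0003" while controllerMinFWVersion reads "5.2.1-0067", and controllerDriverVersion "00.00.03.05" is likewise older than controllerMinDriverVersion "00.00.03.21". OpenManage is known to report a controller as degraded(6), with the global status rolling up to warning/nonCritical as seen at the top of the walk, when firmware or driver are below the minimums it expects, even though the battery, physical disks, and virtual disk all report ok. That would also explain both servers showing the same state if they were built the same way. A sketch for pulling just those four values for comparison (host and community string are the placeholders from the question):

      snmpget -v 2c -c bestNotToPostPasswords myMachine -m +StorageManagement-MIB \
          StorageManagement-MIB::controllerFWVersion.1 \
          StorageManagement-MIB::controllerMinFWVersion.1 \
          StorageManagement-MIB::controllerDriverVersion.1 \
          StorageManagement-MIB::controllerMinDriverVersion.1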
