Search Results

Search found 977 results on 40 pages for 'grant archibald'.


  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); The two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done.
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
    Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join. Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
    We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collocated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi
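    As a follow-up to the dynamic SQL remark above, here is a minimal sketch of how the hard-coded VALUES list could be generated from sys.partitions (assuming the same dbo.T1, dbo.T2, PFT, and trace flag setup used throughout the post):

      DECLARE @list nvarchar(max), @sql nvarchar(max);

      -- Build '(1),(2),...,(41)' from the partition metadata of dbo.T1
      SELECT @list = STUFF((
          SELECT ',(' + CONVERT(varchar(11), p.partition_number) + ')'
          FROM sys.partitions AS p
          WHERE p.[object_id] = OBJECT_ID(N'dbo.T1', N'U')
          AND p.index_id = 1
          ORDER BY p.partition_number
          FOR XML PATH('')), 1, 1, '');

      SET @sql = N'
      SELECT row_count = SUM(Subtotals.cnt)
      FROM (VALUES ' + @list + N') AS P (partition_number)
      CROSS APPLY
      (
          SELECT cnt = COUNT_BIG(*)
          FROM dbo.T1 AS T1
          JOIN dbo.T2 AS T2 ON T2.TID = T1.TID
          WHERE $PARTITION.PFT(T1.TID) = P.partition_number
          AND $PARTITION.PFT(T2.TID) = P.partition_number
      ) AS Subtotals
      OPTION (QUERYTRACEON 8649);';

      EXECUTE sys.sp_executesql @sql;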

    Read the article

  • Problem with icacls on Windows 2003: "Acl length is incorrect"

    - by Andrew J. Brehm
    I am confused by the output of icacls on Windows 2003. Everything appears to work on Windows 2008. I am trying to change permissions on a directory: icacls . /grant mydomain\someuser:(OI)(CI)(F) This results in the following error: .: Acl length is incorrect. .: An internal error occurred. Successfully processed 0 files; Failed processing 1 files The same command used on a file named "file" works: icacls file /grant mydomain\someuser:(OI)(CI)(F) Result is: processed file: file Successfully processed 1 files; Failed processing 0 files What's going on?

    Read the article

  • How do I create statistics to make ‘small’ objects appear ‘large’ to the Optmizer?

    - by Maria Colgan
    I recently spoke with a customer who has a development environment that is a tiny fraction of the size of their production environment. His team has been tasked with identifying problem SQL statements in this development environment before new code is released into production. The problem is the objects in the development environment are so small that the execution plans selected in the development environment rarely reflect what actually happens in production. To ensure the development environment accurately reflects production, in the eyes of the Optimizer, the statistics used in the development environment must be the same as the statistics used in production. This can be achieved by exporting the statistics from production and importing them into the development environment. Even though the underlying objects are a fraction of the size of production, the Optimizer will see them as the same size and treat them the same way as it would in production. Below are the necessary steps to achieve this in their environment. I am using the SH sample schema as the application schema whose statistics we want to move from production to development. Step 1. Create a staging table, in the production environment, where the statistics can be stored Step 2. Export the statistics for the application schema, from the data dictionary in production, into the staging table Step 3. Create an Oracle directory on the production system where the export of the staging table will reside and grant the SH user the necessary privileges on it. Step 4. Export the staging table from production using data pump export Step 5. Copy the dump file containing the staging table from production to development Step 6. Create an Oracle directory on the development system where the export of the staging table resides and grant the SH user the necessary privileges on it.  Step 7. Import the staging table into the development environment using data pump import Step 8. Import the statistics from the staging table into the dictionary in the development environment. You can get a copy of the script I used to generate this post here. +Maria Colgan
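    As a hedged illustration, steps 1, 2, and 8 might look like this with the DBMS_STATS package (the MY_STATS staging table name is an assumption; steps 4 and 7 would use data pump expdp/impdp on that table):

      -- Step 1 (production): create a staging table to hold the statistics
      BEGIN
        DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SH', stattab => 'MY_STATS');
      END;
      /

      -- Step 2 (production): export the SH schema statistics into the staging table
      BEGIN
        DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SH', stattab => 'MY_STATS');
      END;
      /

      -- Step 8 (development, after the staging table has been imported):
      BEGIN
        DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SH', stattab => 'MY_STATS');
      END;
      /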

    Read the article

  • Setting WMI permissions remotely

    - by christianlinnell
    I've developed a tool that does a simple retrieval of registered services and installed applications from remote Windows Server 2003 servers via WMI. My problem is, the tool needs to be run on an ad hoc basis by a user who is not an administrator of those servers. I've created a domain user (which the tool will use to run the query) that I'd like to grant remote WMI permission on each server, but given there are about 200 servers, I can't do it manually. Is there a way to grant access to that domain user via WMI, or by distributing a registry change via SMS or Group Policy?

    Read the article

  • DENY select on sys.dm_db_index_physical_stats

    - by steveh99999
    I recently saw an interesting blog article by Paul Randal about the performance overhead of querying the sys.dm_db_index_physical_stats DMV. So I was thinking, would it be possible to let non-sysadmin users query DMVs on a SQL server but stop them querying this I/O intensive DMV? Yes it is, here’s how… 1. Create a new login for test purposes, with permissions to access the AdventureWorks database only: CREATE LOGIN [test] WITH PASSWORD='xxxx', DEFAULT_DATABASE=[AdventureWorks] GO USE [AdventureWorks] GO CREATE USER [test] FOR LOGIN [test] WITH DEFAULT_SCHEMA=[dbo] GO 2. Log in as user test and issue the command SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED') This gets the error: Msg 297, Level 16, State 12, Line 1 The user does not have permission to perform this action. 3. As a sysadmin, issue the command: USE AdventureWorks GRANT VIEW DATABASE STATE TO [test] or GRANT VIEW SERVER STATE TO [test] if all databases can be queried via DMV. 4. Try again as user test to issue the command SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED') which now produces valid results from the DMV. 5. Now create the test user in the master database, public role only: USE master CREATE USER [test] FOR LOGIN [test] 6. Issue the command: USE master DENY SELECT ON sys.dm_db_index_physical_stats TO [test] 7. Now go back to AdventureWorks using the test login and try SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'),NULL,NULL,NULL,'DETAILED') This now gets the error: Msg 229, Level 14, State 5, Line 1 The SELECT permission was denied on the object 'dm_db_index_physical_stats', database 'mssqlsystemresource', schema 'sys'. But the user is still able to query all other non-IO-intensive DMVs. If the user attempts to view the index physical stats via a built-in Management Studio report – see a recent blog post by Pinal Dave – they get an error too.
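    As a hedged aside, the test login can confirm the effective permission on the DMV with HAS_PERMS_BY_NAME; the expected results below are an assumption based on the steps above:

      -- Run as the [test] login; returns 1 if SELECT is allowed, 0 if denied
      SELECT HAS_PERMS_BY_NAME(
          N'sys.dm_db_index_physical_stats', N'OBJECT', N'SELECT') AS can_select;
      -- Expected: 1 after step 3, 0 after the DENY in step 6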

    Read the article

  • Setting user calendar permissions on Exchange 2007

    - by blizz
    We have Exchange 2007 with about 100 users. I would like to change everyone's free/busy permissions to grant Reviewer status to a specific AD group. I have tried the PFDAVAdmin tool, but when I commit any changes, they do not affect the users. If I grant myself Reviewer permissions to another user's calendar using the tool, I still cannot view that user's free/busy details, and I also don't show up on the list of people with permissions on that user's Outlook calendar options. PFDAVAdmin appears to do something, but doesn't actually change anything. Is there any other way for me to accomplish what I need to do? Or is there something I may not be doing right with PFDAVAdmin? FYI I have followed directions from this link: http://exchangeshare.wordpress.com/2008/05/27/faq-give-calendar-read-permission-on-all-mailboxes-pfdavadmin/

    Read the article

  • how to enable remote access to a MySQL server on an AZURE virtual machine

    - by Rees
    I have an AZURE virtual machine with a MySQL server installed on it running ubuntu 13.04. I am trying to remote connect to the MySQL server however get the simple error "Can't connect to MySQL server on {IP}" I have already done the follow: * commented out the bind-address within the /etc/mysql/my.cnf * commented out skip-external-locking within the same my.cnf * "ufw allow mysql" * "iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT" * setup an AZURE endpoint for mysql * "sudo netstat -lpn | grep 3306" does indeed show mysql LISTENING * "GRANT ALL ON *.* TO remote@'%' IDENTIFIED BY 'password'; * "GRANT ALL ON *.* TO remote@'localhost' IDENTIFIED BY 'password'; * "/etc/init.d/mysql restart" * I can connect via SSH tunneling, but not without it * I have spun up an identical ubuntu 13.04 server on rackspace and SUCCESSFULLY connected using the same procedures outlined here. NONE of the above works on my azure server however. I thought the creation of an endpoint would work, but no luck. Any help please? Is there something I'm missing entirely?
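    One more sanity check worth running (a sketch; the account names follow the question): confirm which user/host rows exist and which account the server actually matched for your session:

      -- Which user@host entries exist for 'remote'?
      SELECT user, host FROM mysql.user WHERE user = 'remote';

      -- After connecting, which account did the server match?
      SELECT CURRENT_USER(), USER();

      -- And what privileges that account holds
      SHOW GRANTS FOR 'remote'@'%';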

    Read the article

  • Enterprise user management

    - by Eduardo
    I am looking for an enterprise user management system that meets these requirements: Delegated user administration: The group manager should be able to grant access to his supervised employees (without having to contact any administrator either to grant access or maybe create users). A group manager should be able to create other groups and restrict any permission he already has where he can add supervised employees. If a manager removes access to a supervised group, then all the subgroups will also lose access. Web based User Interface. LDAP interface to query users and groups (or may not exist at all if it is integrated in a single application). Do you know of any systems that meet these requirements?

    Read the article

  • ORACLE Fusion Middleware Summer Camps in Lisbon: Includes Advanced ADF Training by Oracle Product Management

    - by Frank Nimphius
    From July 9th - July 13th 2012, Frank Nimphius and Grant Ronald from the JDeveloper and ADF Product Management team present a 4 1/2 day advanced ADF training in Lisbon for the EMEA Oracle SOA Community. The training runs in parallel to an advanced WebCenter training, giving you a chance to network with your peers during breaks and lunch. See the announcement below by the ORACLE EMEA SOA Community for details: For specialized partners who are working on the following projects & opportunities, we (Oracle) offer these advanced summer camps:    ADF 11g     WebCenter Portal     SOA Suite 11g     ADF for BPM Suite 11     WebCenter Sites 11g All training sessions will be from HQ product management and our PTS team. The sessions will take place in July in Lisbon, Portugal and Munich, Germany. Participation is limited to two people per company and bootcamp. Registration is handled first come, first served; please pay attention to the skill requirements, the pre-requisites and the follow up! We will not accept people onto the training who do not match the criteria! Lisbon: Monday, July 9th 11:00 AM - Friday July 13th 16:00 (Lisbon time) ADF 11g advanced training by Grant Ronald and Frank Nimphius WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata Cost: Free of charge, cancellation or no-show fee 2.000€ Bootcamps are limited to 20 persons, first come first served. For details and registration please visit the Lisbon registration page

    Read the article

  • Commands don't have permission when using absolute path

    - by Markos
    I have folders set up this way: /srv/samba/video getfacl /srv/samba/video # file: srv/samba/video # owner: root # group: nogroup user::rwx group::--- group:sambaclients:rwx group:deluge:rwx mask::rwx other::--- default:user::rwx default:group::--- default:group:sambaclients:rwx default:group:deluge:rwx default:mask::rwx default:other::--- That means user deluge has rwx to the folder /srv/samba/video. However, when running commands as user deluge, I am getting weird permission errors. When in folder /srv/samba/video: sudo -u deluge mkdir foo works flawlessly. But when using an absolute path: sudo -u deluge mkdir /srv/samba/video/foo I am getting permission denied. When running sudo -u deluge id, I get the output uid=113(deluge) gid=124(deluge) groups=124(deluge) which shows that user deluge is indeed in group deluge. Also, the behavior was the same when I gave the permissions to user deluge directly, not just group deluge. When executing as a non-system user, it does work. The reason that I want to use absolute paths is that I am using an automatically triggered post-download script which extracts some files into the folder. I have spent way too many hours trying to solve this problem myself. mkdir isn't the only command that fails, touch is doing the same thing, so I suspect that it's not mkdir's fault. If you need more info, I will try to put it in here, just ask. Thanks in advance. Edit: It seems that the root of the problem is the acl set on the parent folder /srv/samba, which indeed does not grant permissions to deluge (but neither denies them). getfacl /srv/samba # file: srv/samba # owner: root # group: nogroup user::rwx group::--- group:sambaclients:rwx mask::rwx other::--- default:user::rwx default:group::--- default:group:sambaclients:rwx default:mask::rwx default:other::--- If I grant the permission also to this folder, it suddenly starts to work, so I believe that the acl on /srv/samba is somehow denying the permissions to deluge. So the question is: how do I set acl on both /srv/samba and /srv/samba/video so that sambaclients have access to the whole of /srv/samba and subdirectories, and deluge has access only to /srv/samba/video and subdirectories?

    Read the article

  • Changing startup parameters for MySQL

    - by RN
    I need to remove skip-networking from the MySQL startup parameters. I am running MySQL on Linux on CentOS on a VPS. Can someone please tell a newbie how to do this? I suppose to start and stop the MySQL server, I have to do something like this /etc/init.d/mysqld stop /etc/init.d/mysqld start ps -ef|grep 'mysql' root 11331 20220 0 10:53 pts/0 00:00:00 grep mysql root 32452 1 0 Apr02 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe --skip-grant-tables --skip-networking mysql 32504 32452 0 Apr02 ? 00:00:18 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock --skip-grant-tables --skip-networking
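    Once skip-networking is removed and mysqld restarted, a quick check from the mysql client confirms the change (a sketch; exact output varies by version):

      -- Should report OFF once networking is enabled again
      SHOW VARIABLES LIKE 'skip_networking';

      -- The TCP port the server is configured to listen on
      SHOW VARIABLES LIKE 'port';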

    Read the article

  • Bluetooth mouse Logitech M555b not recognized on a Macbook Pro 8.2

    - by Pierre
    I just bought a Logitech M555b Bluetooth mouse today. My computer (a Macbook Pro 8,2) has a bluetooth connection, and it works perfectly fine on Mac OSX (I set up a new device, click the connect button on the mouse, and voilà, done in 5 seconds). I cannot get it to work on Ubuntu 12.04. When I follow the same steps as on Mac OSX under Ubuntu 12.04, sometimes I will see the mouse name appearing for 1 second on the screen, and if I'm fast enough to click on "Continue", I will get a message telling me the pairing with my mouse failed. Here are the steps I follow: Turn off Bluetooth on Ubuntu. Turn Bluetooth on. Bluetooth icon Set up new device... Turn on the mouse, and press the Connect button. Press "Continue" on the Bluetooth New Device Setup screen. I see the mouse name for one second, but then it disappears. Click on PIN options... to select '0000'. ... nothing, Ubuntu won't see anything. In /var/log/syslog, I have the following information: May 27 23:26:16 Trane bluetoothd[896]: HCI dev 0 down May 27 23:26:16 Trane bluetoothd[896]: Adapter /org/bluez/896/hci0 has been disabled May 27 23:26:58 Trane bluetoothd[896]: HCI dev 0 up May 27 23:26:58 Trane bluetoothd[896]: Adapter /org/bluez/896/hci0 has been enabled May 27 23:26:58 Trane bluetoothd[896]: Inquiry Cancel Failed with status 0x12 May 27 23:28:26 Trane bluetoothd[896]: Refusing input device connect: No such file or directory (2) May 27 23:28:26 Trane bluetoothd[896]: Refusing connection from 00:1F:20:24:15:3D: setup in progress May 27 23:28:26 Trane bluetoothd[896]: Discovery session 0x7f3409ab5b70 with :1.84 activated I tried for an hour, I restarted my computer, re-paired the mouse on Mac OSX, etc. Sometimes when I restart the computer, I have a dialog box showing up saying that the mouse M555b wants to access some resources or something; if I click "Grant", or even if I tick "always grant access", nothing happens... How can I get the Bluetooth to work properly? Help!

    Read the article

  • How can I change my mysql user that has all privileges on a database to only have select privileges on one specific table?

    - by Glenn
    I gave my mysql user the "GRANT ALL PRIVILEGES ON database_name.* to my_user@localhost" treatment. Now I would like to be more granular, starting with lowering privileges on a specific table. I am hoping mysql has or can be set to follow a "least amount of privileges" policy, so I can keep the current setup and lower it for the one table. But I have not seen anything like this in the docs or online. Other than removing the DB level grant and re-granting on a table level, is there a way to get the same result by adding another rule?
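    The short answer is generally no: MySQL privileges are additive, with no per-table deny, so the usual route is the one already mentioned, revoking the database-level grant and re-granting at the table level. A sketch using the names from the question (specific_table is a hypothetical placeholder):

      -- Remove the broad database-level privileges
      REVOKE ALL PRIVILEGES ON database_name.* FROM 'my_user'@'localhost';

      -- Grant only SELECT on the one table (repeat per table for anything
      -- else the user still needs; MySQL has no exclusion syntax)
      GRANT SELECT ON database_name.specific_table TO 'my_user'@'localhost';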

    Read the article

  • icacls, Network Service, and setting ACLs on Windows Server 2008

    - by Ted
    Setting ACLs on Windows Server 2008 via the command line is giving me some problems. As per http://web2.minasi.com/forum/topic.asp?TOPIC%5FID=26907 I've tried all sorts of variations: C:\Windows\system32>icacls "D:\Websites\site.com\Web\bin*" /grant 'NT Authority\NETWORK SERVICE: (OI) (CI)M' C:\Windows\system32>icacls "D:\Websites\site.com\Web\bin*" /grant "NETWORK SERVICE": (OI) (CI)M And all variations in between. However, each try leads to i.e. "Invalid parameter "'NETWORK'"" depending on the variation above. As per http://technet.microsoft.com/en-us/library/cc753525%28WS.10%29.aspx (see in comments), it appears that others have experienced the same issue where the same command works on Windows 7/Vista/etc., but not on Windows Server 2008. What's the best way to apply permissions to the Network Service account on a directory and/or files via the command line in Windows Server 2008? Especially as there's no way to set multiple file permissions at once via the GUI (see http://serverfault.com/questions/30991/windows-server-2008-change-security-settings-for-multiple-files-at-once).

    Read the article

  • More Denali Execution Plan Warning Goodies

    - by Dave Ballantyne
    In my last blog, I showed how the execution plan in Denali has been enhanced by 2 new warnings, conversion affecting cardinality and conversion affecting seek, which are shown when a data type conversion has happened either implicitly or explicitly. That is not all though, there is more. Also added are two warnings shown when performance has been affected due to memory issues. Memory spills to tempdb are a costly operation and happen when SQL Server is under memory pressure and needs to free some up. For a long time you have been able to see these as warnings in a profiler trace, as a sort or hash warning event, but now they are included right in the execution plan. Not only that, but you can also see which operator caused the spill, not just which statement. Pretty damn handy. Another cause of performance problems relating to memory is memory grant waits. Here is an informative write-up on them, but simply speaking, SQL Server has to allocate a certain amount of memory for each statement. If it is unable to, you get a "memory grant wait". Once again there are other methods of analyzing these, but the plan now shows these too. Don't worry, that's not real production code. There is one other new warning that is of interest to me, "Unmatched Indexes". Once I find out the conditions under which that fires, I'll blog about it.
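    Not Denali-specific, but while a query runs you can also watch grants and grant waits from the server side with sys.dm_exec_query_memory_grants (a sketch; column list trimmed):

      -- Sessions currently holding or waiting for a workspace memory grant;
      -- a NULL grant_time means the session is still waiting
      SELECT session_id,
             requested_memory_kb,
             granted_memory_kb,
             grant_time,
             wait_time_ms
      FROM sys.dm_exec_query_memory_grants;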

    Read the article

  • AIX 7.1 su root password bug?

    - by exxoid
    In our AIX 7.1 machine there is a weird bug we've run into. If you are logged into the AIX box via SSH as a regular user and you try to su - you get prompted for the password. Let's say our password is "P@$$w0rd23": you can type "P@$$w0rd2ANYTHING" and it will still grant you root. As long as you have "P@$$w0rd2" it will grant you root regardless of what else you specify in the authentication, even though the actual password is "P@$$w0rd23". This seems to be a bug? Anyone seen anything like this before? Thanks.

    Read the article

  • Create a super user in MySQL 5.5 not working: Permission denied for root@localhost

    - by GHarping
    Using CentOS 6, logged in to MySQL as root, entering the command: create user 'user123' identified by 'pass123'; works fine. But when I try and give that user super user privileges with: grant all on *.* to 'user123' identified by 'pass123'; I get the error: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES) Then select * from mysql.user; shows that root has Y in all columns, so should have all privileges. I'd be very grateful if anyone could help me find why root is unable to grant privileges as I can't see why it wouldn't be working. Thanks
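    A hedged first check: error 1045 on a GRANT often means the granting account itself lacks GRANT OPTION (or the privileges it is handing out), so it is worth confirming what root@localhost actually holds:

      -- Does the current account hold WITH GRANT OPTION?
      SHOW GRANTS FOR 'root'@'localhost';

      -- Grant_priv must be 'Y' for root@localhost
      SELECT user, host, Grant_priv FROM mysql.user WHERE user = 'root';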

    Read the article

  • nagios ldap-group based front end login permission issues

    - by Eleven-Two
    I want to grant users access to the nagios 3 core frontend by using an active directory group ("NagiosWebfrontend" in the code below). The login works fine like this: AuthType Basic AuthName "Nagios Access" AuthBasicProvider ldap AuthzLDAPAuthoritative on AuthLDAPURL "ldap://ip-address:389/OU=user-ou,DC=domain,DC=tld?sAMAccountName?sub?(objectClass=*)" AuthLDAPBindDN CN=LDAP-USER,OU=some-ou,DC=domain,DC=tld AuthLDAPBindPassword the_pass Require ldap-group CN=NagiosWebfrontend,OU=some-ou,DC=domain,DC=tld Unfortunately, every nagios page just shows "It appears as though you do not have permission to view information for any of the services you requested...". I got the hint that I am missing a contact in the nagios configuration equal to my login, but creating one with the same name as the domain user had no effect on this issue. However, it would be great to find a solution without manually editing nagios.conf for every new user, so the admins could grant access to nagios by just adding the user to the "NagiosWebfrontend" group. What would be the best way to solve it?

    Read the article

  • Batch script to create home home directories from list of names

    - by Steven
    I'm trying to create home directories with permissions from a text file. I can only get the batch file to run the first line. Can anyone tell me why? I initiate the scripts by running go.bat as administrator. go.bat @echo for /f %%a in (users1.txt) do call test.bat %%a test.bat @echo off m: cd \ mkdir %1 icacls %1 /grant %1:(OI)(CI)M cd %1 mkdir public icacls public /inheritance:d icacls public / All:(OI)(CI)(RD) icacls public /grant All:(OI)(CI)R mkdir private icacls private /inheritance:d icacls private /remove All cd \ users1.txt user1 user2 user3

    Read the article

  • elastix cdr stop working

    - by dreddko
    CDR was working before 19 March. Unfortunately I don't remember what changes I made to the configuration, but definitely no changes to the CDR config. elastix 2.4.0 asterisk 11.7.0 mysql 5.0.95 elastix*CLI> cdr show status Call Detail Record (CDR) settings ---------------------------------- Logging: Disabled Mode: Simple /etc/asterisk/cdr.conf [general] enable=yes unanswered = yes /etc/asterisk/cdr_mysql.conf [global] hostname = localhost dbname=asteriskcdrdb password = *MYPASSWORD* user = asteriskcdruser userfield=1 ;port=3306 ;sock=/tmp/mysql.sock loguniqueid=yes mysql> SHOW GRANTS FOR 'asteriskcdruser'@'localhost'; +-----------------------------------------------------------------------------------------------+ | Grants for asteriskcdruser@localhost | +-----------------------------------------------------------------------------------------------+ | GRANT USAGE ON *.* TO 'asteriskcdruser'@'localhost' IDENTIFIED BY PASSWORD 'HASHHERE' | | GRANT ALL PRIVILEGES ON `asteriskcdrdb`.* TO 'asteriskcdruser'@'localhost' | +-----------------------------------------------------------------------------------------------+ 2 rows in set (0.00 sec)
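    A hedged way to confirm when records stopped reaching the database side (assuming the stock Asterisk cdr table with a calldate column, and 2014 for the year):

      -- Most recent CDR row; anything after 19 March means inserts still arrive
      SELECT MAX(calldate) FROM asteriskcdrdb.cdr;

      -- Row counts per day around the cutoff
      SELECT DATE(calldate) AS day, COUNT(*) AS calls
      FROM asteriskcdrdb.cdr
      WHERE calldate >= '2014-03-15'
      GROUP BY DATE(calldate)
      ORDER BY day;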

    Read the article

  • After creating a mysql user with all privileges, the user cannot create databases in phpMyAdmin and only sees information_schema table

    - by GHarping
    This is a recurring problem for some reason... Using mysql 5.5, I am simply trying to create a user that can connect to the database remotely, have access to all databases, and create databases. I have created a user using: create user 'dev'@'%' identified by 'abcdefg'; then granted all permissions using: GRANT ALL ON *.* to 'dev'@'192.168.%' IDENTIFIED BY 'abcdefg' WITH GRANT OPTION; and the result is that the user cannot create databases, and can only see the information_schema database. The phpMyAdmin Databases page shows "Create database: No Privileges" and lists only information_schema (Total: 1). Does anyone know why this might be happening?
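    Worth noting: 'dev'@'%' and 'dev'@'192.168.%' are two distinct accounts in MySQL, so the GRANT above may have created a second account rather than extending the first. A quick check (a sketch; SHOW GRANTS errors if the account does not exist):

      -- Are there two separate 'dev' accounts?
      SELECT user, host FROM mysql.user WHERE user = 'dev';

      -- After logging in, which one did the connection actually match?
      SELECT CURRENT_USER();

      -- What each account can do
      SHOW GRANTS FOR 'dev'@'%';
      SHOW GRANTS FOR 'dev'@'192.168.%';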

    Read the article

  • BinaryWrite exception "OutputStream is not available when a custom TextWriter is used" in MVC 2 ASP.

    - by Grant
    Hi, I have a view rendering a stream using the response BinaryWrite method. This all worked fine under ASP.NET 4 using the Beta 2 but throws this exception in the RC release: "HttpException", "OutputStream is not available when a custom TextWriter is used." <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage" %> <%@ Import Namespace="System.IO" %> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { if (ViewData["Error"] == null) { Response.Buffer = true; Response.Clear(); Response.ContentType = ViewData["DocType"] as string; Response.AddHeader("content-disposition", ViewData["Disposition"] as string); Response.CacheControl = "No-cache"; MemoryStream stream = ViewData["DocAsStream"] as MemoryStream; Response.BinaryWrite(stream.ToArray()); Response.Flush(); Response.Close(); } } </script> The view is generated from a client side redirect (jquery replace location call in the previous page using the Url.Action helper to render the link, of course). This is all in an iframe. Anyone have an idea why this occurs?

    Read the article

  • Why is this RMagick call generating a segmentation fault?

    - by Grant Heaslip
I've been banging my head against the wall for the better part of an hour trying to figure out what's going wrong here, and I'm sure (or rather hoping) it's something fairly obvious that I'm overlooking. I'm using Ruby 1.9.1, Sinatra 1.0, and RMagick 2.13.1. ImageMagick and RMagick are correctly installed and functional; I've successfully manipulated and saved images from irb.

The relevant part of the params hash (reformatted for readability):

{"admin_user_new_image_file" =>
  { :filename => "freddie-on-shetland-pony.png",
    :type     => "image/png",
    :name     => "admin_user_new_image_file",
    :tempfile => #<File:/var/folders/a7/a7pO5jMcGLCww9XBGRvWfE+++TI/-Tmp-/RackMultipart20100514-20700-o2tkqu-0>,
    :head     => "Content-Disposition: form-data; name=\"admin_user_new_image_file\"; filename=\"freddie-on-shetland-pony.png\"\r\nContent-Type: image/png\r\n" } }

The relevant code:

post "/admin/user/:account_name/image/new/" do
  if params[:admin_user_new_image_file][:tempfile]
    thumbnail = Magick::Image.read("png:" + params[:admin_user_new_image_file][:tempfile].path).first
  end
end

The error (line 229 is the line starting with "thumbnail ="):

config.ru:229: [BUG] Segmentation fault
ruby 1.9.1p376 (2009-12-07 revision 26041) [i386-darwin10.3.0]

-- control frame ----------
c:0042 p:---- s:0196 b:0196 l:000195 d:000195 CFUNC  :read
c:0041 p:0121 s:0192 b:0192 l:001ab8 d:000191 LAMBDA config.ru:229
c:0040 p:---- s:0189 b:0189 l:000188 d:000188 FINISH
c:0039 p:---- s:0187 b:0187 l:000186 d:000186 CFUNC  :call
c:0038 p:0018 s:0184 b:0184 l:001d78 d:000183 BLOCK  /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865
c:0037 p:---- s:0182 b:0182 l:000181 d:000181 FINISH
c:0036 p:---- s:0180 b:0180 l:000179 d:000179 CFUNC  :instance_eval
c:0035 p:0016 s:0177 b:0175 l:000174 d:000174 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521
c:0034 p:0024 s:0171 b:0171 l:000148 d:000170 BLOCK  /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:500
c:0033 p:---- s:0169 b:0169 l:000168 d:000168 FINISH
c:0032 p:---- s:0167 b:0167 l:000166 d:000166 CFUNC  :catch
c:0031 p:0140 s:0163 b:0163 l:000148 d:000162 BLOCK  /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497
c:0030 p:---- s:0154 b:0154 l:000153 d:000153 FINISH
c:0029 p:---- s:0152 b:0152 l:000151 d:000151 CFUNC  :each
c:0028 p:0073 s:0149 b:0149 l:000148 d:000148 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476
c:0027 p:0076 s:0141 b:0141 l:000140 d:000140 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:601
c:0026 p:0009 s:0137 b:0137 l:000138 d:000136 BLOCK  /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411
c:0025 p:---- s:0135 b:0135 l:000134 d:000134 FINISH
c:0024 p:---- s:0133 b:0133 l:000132 d:000132 CFUNC  :instance_eval
c:0023 p:0012 s:0130 b:0130 l:000121 d:000129 BLOCK  /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566
c:0022 p:---- s:0128 b:0128 l:000127 d:000127 FINISH
c:0021 p:---- s:0126 b:0126 l:000125 d:000125 CFUNC  :catch
c:0020 p:0013 s:0122 b:0122 l:000121 d:000121 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566
c:0019 p:0098 s:0115 b:0115 l:000138 d:000138 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411
c:0018 p:0019 s:0108 b:0108 l:000107 d:000107 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:399
c:0017 p:0014 s:0104 b:0104 l:000103 d:000103 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/showexceptions.rb:24
c:0016 p:0150 s:0098 b:0098 l:000097 d:000097 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/methodoverride.rb:24
c:0015 p:0031 s:0092 b:0092 l:000091 d:000091 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/commonlogger.rb:18
c:0014 p:0018 s:0084 b:0084 l:002080 d:000083 BLOCK  /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979
c:0013 p:0032 s:0082 b:0082 l:000081 d:000081 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:1005
c:0012 p:0011 s:0078 b:0078 l:002080 d:002080 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979
c:0011 p:0100 s:0074 b:0074 l:000ff0 d:000ff0 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/lint.rb:47
c:0010 p:0022 s:0068 b:0068 l:000067 d:000067 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/lint.rb:35
c:0009 p:0014 s:0064 b:0064 l:000063 d:000063 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/showexceptions.rb:24
c:0008 p:0031 s:0058 b:0058 l:000057 d:000057 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/commonlogger.rb:18
c:0007 p:0014 s:0050 b:0050 l:000049 d:000049 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/content_length.rb:13
c:0006 p:0320 s:0042 b:0042 l:000041 d:000041 METHOD /usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/handler/webrick.rb:48
c:0005 p:0256 s:0030 b:0030 l:000029 d:000029 METHOD /usr/local/lib/ruby/1.9.1/webrick/httpserver.rb:111
c:0004 p:0382 s:0020 b:0020 l:000019 d:000019 METHOD /usr/local/lib/ruby/1.9.1/webrick/httpserver.rb:70
c:0003 p:0123 s:0009 b:0009 l:000bc8 d:000008 BLOCK  /usr/local/lib/ruby/1.9.1/webrick/server.rb:183
c:0002 p:---- s:0004 b:0004 l:000003 d:000003 FINISH
c:0001 p:---- s:0002 b:0002 l:000001 d:000001 TOP
---------------------------

-- Ruby level backtrace information -----------------------------------------
config.ru:229:in `read'
config.ru:229:in `block (2 levels) in <main>'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:in `call'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:865:in `block in route'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:in `instance_eval'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:521:in `route_eval'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:500:in `block (2 levels) in route!'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:in `catch'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:497:in `block in route!'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:in `each'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:476:in `route!'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:601:in `dispatch!'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:in `block in call!'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `instance_eval'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `block in invoke'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `catch'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `invoke'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:411:in `call!'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:399:in `call'
/usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/showexceptions.rb:24:in `call'
/usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/methodoverride.rb:24:in `call'
/usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/commonlogger.rb:18:in `call'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:in `block in call'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:1005:in `synchronize'
/usr/local/lib/ruby/gems/1.9.1/gems/sinatra-1.0/lib/sinatra/base.rb:979:in `call'
/usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/lint.rb:47:in `_call'
/usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/lint.rb:35:in `call'
/usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/showexceptions.rb:24:in `call'
/usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/commonlogger.rb:18:in `call'
/usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/content_length.rb:13:in `call'
/usr/local/lib/ruby/gems/1.9.1/gems/rack-1.1.0/lib/rack/handler/webrick.rb:48:in `service'
/usr/local/lib/ruby/1.9.1/webrick/httpserver.rb:111:in `service'
/usr/local/lib/ruby/1.9.1/webrick/httpserver.rb:70:in `run'
/usr/local/lib/ruby/1.9.1/webrick/server.rb:183:in `block in start_thread'

-- C level backtrace information -------------------------------------------
0x10010cd8d 0 libruby.dylib 0x000000010010cd8d rb_vm_bugreport + 77
0x10002b184 1 libruby.dylib 0x000000010002b184 report_bug + 260
0x10002b318 2 libruby.dylib 0x000000010002b318 rb_bug + 200
0x1000b7124 3 libruby.dylib 0x00000001000b7124 sigsegv + 132
0x7fff8301c80a 4 libSystem.B.dylib 0x00007fff8301c80a _sigtramp + 26
0x1032313ac 5 libMagickCore.3.dylib 0x00000001032313ac Splay + 300
0x103119245 6 libMagickCore.3.dylib 0x0000000103119245 AcquirePixelCache + 325
0x1031cb317 7 libMagickCore.3.dylib 0x00000001031cb317 AcquireImage + 375
0x10333035b 8 libMagickCore.3.dylib 0x000000010333035b ReadPNGImage + 155
0x1031418fd 9 libMagickCore.3.dylib 0x00000001031418fd ReadImage + 2221
0x101f1b72b 10 RMagick2.bundle 0x0000000101f1b72b rd_image + 339
0x101f1b59b 11 RMagick2.bundle 0x0000000101f1b59b Image_read + 36
0x1000fd0e4 12 libruby.dylib 0x00000001000fd0e4 vm_call_cfunc + 340
0x1000fe9b0 13 libruby.dylib 0x00000001000fe9b0 vm_call_method + 896
0x1000ff8fc 14 libruby.dylib 0x00000001000ff8fc vm_exec_core + 3180
0x100104b93 15 libruby.dylib 0x0000000100104b93 vm_exec + 1203
0x100106643 16 libruby.dylib 0x0000000100106643 rb_vm_invoke_proc + 691
0x100106ccd 17 libruby.dylib 0x0000000100106ccd vm_call0 + 1085
0x1000317c6 18 libruby.dylib 0x00000001000317c6 rb_method_call + 406
0x1000fd0e4 19 libruby.dylib 0x00000001000fd0e4 vm_call_cfunc + 340
0x1000fe9b0 20 libruby.dylib 0x00000001000fe9b0 vm_call_method + 896
0x1000ff8fc 21 libruby.dylib 0x00000001000ff8fc vm_exec_core + 3180
0x100104b93 22 libruby.dylib 0x0000000100104b93 vm_exec + 1203
0x100105ce6 23 libruby.dylib 0x0000000100105ce6 yield_under + 710
0x100106188 24 libruby.dylib 0x0000000100106188 specific_eval + 72
0x1000fd0e4 25 libruby.dylib 0x00000001000fd0e4 vm_call_cfunc + 340
0x1000fe9b0 26 libruby.dylib 0x00000001000fe9b0 vm_call_method + 896
0x1000ff8fc 27 libruby.dylib 0x00000001000ff8fc vm_exec_core + 3180
0x100104b93 28 libruby.dylib 0x0000000100104b93 vm_exec + 1203
0x10010b6bf 29 libruby.dylib 0x000000010010b6bf rb_f_catch + 639
0x1000fd0e4 30 libruby.dylib 0x00000001000fd0e4 vm_call_cfunc + 340
0x1000fe9b0 31 libruby.dylib 0x00000001000fe9b0 vm_call_method + 896
0x1000ff8fc 32 libruby.dylib 0x00000001000ff8fc vm_exec_core + 3180
0x100104b93 33 libruby.dylib 0x0000000100104b93 vm_exec + 1203
0x10010aac9 34 libruby.dylib 0x000000010010aac9 rb_yield + 505
0x100007902 35 libruby.dylib 0x0000000100007902 rb_ary_each + 82
0x1000fd0e4 36 libruby.dylib 0x00000001000fd0e4 vm_call_cfunc + 340
0x1000fe9b0 37 libruby.dylib 0x00000001000fe9b0 vm_call_method + 896
0x1000ff8fc 38 libruby.dylib 0x00000001000ff8fc vm_exec_core + 3180
0x100104b93 39 libruby.dylib 0x0000000100104b93 vm_exec + 1203
0x100105ce6 40 libruby.dylib 0x0000000100105ce6 yield_under + 710
0x100106188 41 libruby.dylib 0x0000000100106188 specific_eval + 72
0x1000fd0e4 42 libruby.dylib 0x00000001000fd0e4 vm_call_cfunc + 340
0x1000fe9b0 43 libruby.dylib 0x00000001000fe9b0 vm_call_method + 896
0x1000ff8fc 44 libruby.dylib 0x00000001000ff8fc vm_exec_core + 3180
0x100104b93 45 libruby.dylib 0x0000000100104b93 vm_exec + 1203
0x10010b6bf 46 libruby.dylib 0x000000010010b6bf rb_f_catch + 639
0x1000fd0e4 47 libruby.dylib 0x00000001000fd0e4 vm_call_cfunc + 340
0x1000fe9b0 48 libruby.dylib 0x00000001000fe9b0 vm_call_method + 896
0x1000ff8fc 49 libruby.dylib 0x00000001000ff8fc vm_exec_core + 3180
0x100104b93 50 libruby.dylib 0x0000000100104b93 vm_exec + 1203
0x100106643 51 libruby.dylib 0x0000000100106643 rb_vm_invoke_proc + 691
0x100111803 52 libruby.dylib 0x0000000100111803 thread_start_func_2 + 835
0x100111921 53 libruby.dylib 0x0000000100111921 thread_start_func_1 + 17
0x7fff82ff58b6 54 libSystem.B.dylib 0x00007fff82ff58b6 _pthread_start + 331
0x7fff82ff5769 55 libSystem.B.dylib 0x00007fff82ff5769 thread_start + 13

[NOTE]
You may encounter a bug of Ruby interpreter. Bug reports are welcome.
For details: http://www.ruby-lang.org/bugreport.html

Abort trap

Anyone have any idea what's going on? Thanks!
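Editor's note: the C-level trace places the fault inside libMagickCore's native PNG path (ReadImage → ReadPNGImage → AcquirePixelCache), not in the Ruby handler itself; crashes at that depth are commonly caused by RMagick having been compiled against a different ImageMagick than the one it loads at runtime. A quick check is to compare Magick::Magick_version in irb with the output of convert -version. It can also help to take the path-based read out of the picture and decode from an in-memory blob instead. The following is a minimal sketch, assuming the same route and parameter names as the question (from_blob is a standard RMagick call):

require 'RMagick'

post "/admin/user/:account_name/image/new/" do
  upload = params[:admin_user_new_image_file]
  if upload && upload[:tempfile]
    # Read the raw upload bytes ourselves; if this succeeds, the Rack
    # tempfile is intact and any remaining crash is in the native decoder.
    blob = File.open(upload[:tempfile].path, 'rb') { |f| f.read }
    # from_blob returns an array of images (one per frame or layer).
    thumbnail = Magick::Image.from_blob(blob).first
  end
end

This is not a fix for the segfault itself, but it narrows the problem down to either the tempfile handling or the PNG decoder, and it drops the "png:" prefix so ImageMagick detects the format from the file's magic bytes instead of being forced down the PNG path.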


  • How to allow admins to edit my app's config files without UAC elevation?

    - by Justin Grant
My company produces a cross-platform server application which loads its configuration from user-editable configuration files. On Windows, the config file ACLs are locked down by our Setup program to allow reading by all users but restrict editing to Administrators and Local System only. Unfortunately, on Windows Server 2008, even local administrators no longer have admin privileges (because of UAC) unless they're running an elevated app. This has caused complaints from users who cannot use their favorite text editor to save config file changes: they can open the files (since anyone can read them) but can't save. Does anyone have recommendations for what we can do (if anything) in our app's Setup to make editing easier for admins on Windows Server 2008?

Related questions: if a Windows Server 2008 admin wants to edit an admins-only config file, how does he normally do it? Is he forced to use a text editor smart enough to auto-elevate when elevation is needed, the way Windows Explorer does in response to access-denied errors? Does he launch the editor from an elevated command prompt? Something else?
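Editor's note: one Setup-time approach worth considering, sketched here under stated assumptions rather than as a tested recipe. UAC's token filtering strips the Administrators group (and a few other privileged built-in groups) from a non-elevated process token, but leaves ordinary custom groups intact. Setup could therefore create a dedicated local group, grant it Modify rights on the config directory, and add the admin accounts to it; those accounts could then edit the files in any editor without elevation. The group name, account name, and path below are hypothetical placeholders:

rem Create a dedicated local group (the name is a placeholder).
net localgroup MyAppConfigEditors /add

rem Add each account that should be able to edit the config files (placeholder name).
net localgroup MyAppConfigEditors SomeAdminAccount /add

rem Grant the group Modify rights, inherited by subfolders and files
rem ((OI) = object inherit, (CI) = container inherit, M = modify).
icacls "C:\ProgramData\MyApp\conf" /grant MyAppConfigEditors:(OI)(CI)M

One caveat: group membership is captured in the access token at logon, so an account added to the group during Setup has to log off and back on before the new rights take effect.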

