Search Results

Search found 30117 results on 1205 pages for 'thread specific storage'.

Page 90/1205 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • Linux-specific API reference

    - by Goofy
    Hello! Where can I find centralized and complete documentation about the Linux-specific API? I'm preparing a Linux port of my application and I want to use as many Linux-specific features as possible. So far I have found that Linux provides the epoll and inotify APIs, which is great news for me, because my program works as a network server and monitors local file systems.
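
    Purely as an illustration of the two interfaces the question mentions (not part of the original post; the watched path and flags are arbitrary), a minimal inotify watch might look like the sketch below. In a real server the returned descriptor would normally be registered with epoll rather than read directly, and the epoll(7) and inotify(7) man pages are the usual reference documentation.

        #include <sys/inotify.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            int fd = inotify_init();                                  // create an inotify instance
            if (fd < 0) { std::perror("inotify_init"); return 1; }

            // Watch /tmp for created and deleted entries (example path only).
            if (inotify_add_watch(fd, "/tmp", IN_CREATE | IN_DELETE) < 0) {
                std::perror("inotify_add_watch");
                return 1;
            }

            char buf[4096];
            ssize_t len = read(fd, buf, sizeof buf);                  // blocks until something happens
            for (char* p = buf; len > 0 && p < buf + len; ) {
                const inotify_event* ev = reinterpret_cast<const inotify_event*>(p);
                std::printf("%s %s\n",
                            (ev->mask & IN_CREATE) ? "created" : "deleted",
                            ev->len ? ev->name : "(watched directory)");
                p += sizeof(inotify_event) + ev->len;                 // events are variable length
            }
            close(fd);
            return 0;
        }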

    Read the article

  • A good open source web crawler for indexing specific websites for specific content?

    - by Peeyush
    Hello! Please suggest a good open source web crawler written in C++, Java or PHP. I just need to crawl/index some specific websites for specific content (images, text, videos). I know that there are already a lot of questions and answers about this topic on this website, but I am a little confused after reading all of them, so I am sorry if I am repeating the same question again. Thanks in advance.

    Read the article

  • Committing and getting a specific revision

    - by Vepr
    I work with SharpSvn and I need two things: 1) I need to get a revision with a specific log message. 2) When I commit my revision, I want to show every file that is being committed. I found a Committing event, but I don't know how it works (committing and getting a specific revision).

    Read the article

  • How to write a specific application for Facebook?

    - by alex
    Hi there! Please help me with documentation for writing a specific application for Facebook. I need to know: what language should I choose? Is there a specific Facebook API? Is there documentation for the API? Is there a site with a catalogue of samples? I need to know all the related info. A few words about the app: it would be an app which compares users' interests. Thanks in advance.

    Read the article

  • Extern variable at specific address

    - by AndiNo
    Using C++ and GCC, can I declare an extern variable that uses a specific address in memory? Something like int key __attribute__((__at(0x9000))); AFAIK this specific option only works on embedded systems. If there is such an option for use on the x86 platform, how can I use it?
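
    Stock GCC on x86 has no at-style attribute, so purely as a hedged sketch (none of this is from the original question; the section name and linker-script line are made up for illustration), the two usual substitutes are a pointer to the fixed address, or a named section placed by the linker:

        #include <cstdint>
        #include <iostream>

        // Option 1: access a known, already-mapped address through a pointer.
        // Dereferencing it is only valid if something (hardware mapping, mmap,
        // the linker) actually put that page into the address space.
        volatile std::uint32_t* const key =
            reinterpret_cast<volatile std::uint32_t*>(0x9000);

        // Option 2: give the variable its own section, then place the section
        // at 0x9000 from a linker script passed with -T, e.g.
        //   .fixed_data 0x9000 : { *(.fixed_data) }
        __attribute__((section(".fixed_data"))) volatile std::uint32_t key2 = 0;

        int main() {
            std::cout << std::hex
                      << "key  points at 0x" << reinterpret_cast<std::uintptr_t>(key)   << '\n'
                      << "key2 lives at  0x" << reinterpret_cast<std::uintptr_t>(&key2) << '\n';
            // key is deliberately not dereferenced here: 0x9000 is not mapped
            // in an ordinary user-space process.
            return 0;
        }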

    Read the article

  • Taking screenshot of a specific window - C++ / Qt

    - by Switch
    In Qt, how do I take a screenshot of a specific window (e.g. suppose I had Notepad up and I wanted to take a screenshot of the window titled "Untitled - Notepad")? In their screenshot example code, they show how to take a screenshot of the entire desktop: originalPixmap = QPixmap::grabWindow(QApplication::desktop()->winId()); How would I get the winId() for a specific window (assuming I knew the window's title) in Qt? Thanks
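
    Qt itself has no title-to-window lookup for foreign windows, so one hedged sketch (not from the original question) is to get the native handle from the platform API and hand it to grabWindow. On Windows that is FindWindowW; on X11 it would be a similar search via Xlib/EWMH. Qt 4-era API is shown; in Qt 5 the equivalent call is QScreen::grabWindow.

        #include <QApplication>
        #include <QPixmap>
        #include <windows.h>    // FindWindowW; this sketch is Windows-only

        int main(int argc, char** argv) {
            QApplication app(argc, argv);

            // Look the window up by its exact title.
            HWND hwnd = FindWindowW(nullptr, L"Untitled - Notepad");
            if (!hwnd)
                return 1;       // no such window

            // On Windows, Qt's WId is the native HWND, so the handle can be passed straight in.
            QPixmap shot = QPixmap::grabWindow(reinterpret_cast<WId>(hwnd));
            return shot.save("notepad.png", "PNG") ? 0 : 2;
        }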

    Read the article

  • Getting the callers of a specific function

    - by robUK
    Hello, GNU Emacs 23.1.1. I am just wondering: is there any feature in Emacs where I can find out which functions call a specific function? In my code, I normally have to do a search on the function name to see which functions call it. It would be nice if I could display the names of all the functions from which this specific function is being called. Many thanks for any suggestions.

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
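
    (As a language-neutral aside, not part of Paul's article: the guarantee the two repartitioning exchanges provide is simply consistent hashing of the join key, as in this small C++ sketch where rows from both inputs with equal keys always land on the same worker.)

        #include <cstddef>
        #include <cstdint>
        #include <functional>
        #include <vector>
        #include <iostream>

        // Route a row to a worker ("thread") by hashing its join key, so that
        // rows from both inputs with the same key meet on the same worker.
        std::size_t route(std::int64_t join_key, std::size_t workers) {
            return std::hash<std::int64_t>{}(join_key) % workers;
        }

        int main() {
            const std::size_t workers = 8;                 // comparable to DOP 8 in the post
            std::vector<std::int64_t> t1_keys = {1234, 42, 99};
            std::vector<std::int64_t> t2_keys = {1234, 1234, 99};

            for (auto k : t1_keys)
                std::cout << "T1 key " << k << " -> worker " << route(k, workers) << '\n';
            for (auto k : t2_keys)
                std::cout << "T2 key " << k << " -> worker " << route(k, workers) << '\n';
            // Key 1234 from T1 and both 1234 rows from T2 print the same worker id,
            // which is exactly the guarantee the repartitioning exchanges enforce.
            return 0;
        }
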
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Class Loading Deadlocks

    - by tomas.nilsson
    Mattis follows up on his previous post with one more exposé on Class Loading Deadlocks. As I wrote in a previous post, the class loading mechanism in Java is very powerful. There are many advanced techniques you can use, and when used wrongly you can get into all sorts of trouble. But one of the sneakiest deadlocks you can run into when it comes to class loading doesn't require any home-made class loaders or anything. All you need is classes depending on each other, and some bad luck. First of all, here are some basic facts about class loading: 1) If a thread needs to use a class that is not yet loaded, it will try to load that class 2) If another thread is already loading the class, the first thread will wait for the other thread to finish the loading 3) During the loading of a class, one thing that happens is that the <clinit> method of the class is being run 4) The <clinit> method initializes all static fields, and runs any static blocks in the class. Take the following class for example: class Foo { static Bar bar = new Bar(); static { System.out.println("Loading Foo"); } } The first time a thread needs to use the Foo class, the class will be initialized. The <clinit> method will run, creating a new Bar object and printing "Loading Foo". But what happens if the Bar object has never been used before either? Well, then we will need to load that class as well, calling the Bar <clinit> method as we go. Can you start to see the potential problem here? A hint is in fact #2 above. What if another thread is currently loading class Bar? The thread loading class Foo will have to wait for that thread to finish loading. But what happens if the <clinit> method of class Bar tries to initialize a Foo object? That thread will have to wait for the first thread, and there we have the deadlock. Thread one is waiting for thread two to initialize class Bar, thread two is waiting for thread one to initialize class Foo. All that is needed for a class loading deadlock is static cross-dependencies between two classes (and a multi-threaded environment): class Foo { static Bar b = new Bar(); } class Bar { static Foo f = new Foo(); } If two threads cause these classes to be loaded at exactly the same time, we will have a deadlock. So, how do you avoid this? Well, one way is of course to not have these circular (static) dependencies. On the other hand, it can be very hard to detect these, and sometimes your design may depend on it. What you can do in that case is to make sure that the classes are first loaded single-threadedly, for example during an initialization phase of your application. The following program shows this kind of deadlock. To help bad luck on the way, I added a one-second sleep in the static block of the classes to trigger the unlucky timing. Notice that if you uncomment the "//Foo f = new Foo();" line in the main method, the class will be loaded single-threadedly, and the program will terminate as it should. public class ClassLoadingDeadlock { // Start two threads. The first will instantiate a Foo object, // the second one will instantiate a Bar object.
public static void main(String[] arg) { // Uncomment next line to stop the deadlock // Foo f = new Foo(); new Thread(new FooUser()).start(); new Thread(new BarUser()).start(); } } class FooUser implements Runnable { public void run() { System.out.println("FooUser causing class Foo to be loaded"); Foo f = new Foo(); System.out.println("FooUser done"); } } class BarUser implements Runnable { public void run() { System.out.println("BarUser causing class Bar to be loaded"); Bar b = new Bar(); System.out.println("BarUser done"); } } class Foo { static { // We are deadlock prone even without this sleep... // The sleep just makes us more deterministic try { Thread.sleep(1000); } catch(InterruptedException e) {} } static Bar b = new Bar(); } class Bar { static { try { Thread.sleep(1000); } catch(InterruptedException e) {} } static Foo f = new Foo(); }

    Read the article

  • How many threads should an Android game use?

    - by kvance
    At minimum, an OpenGL Android game has a UI thread and a Renderer thread created by GLSurfaceView. Renderer.onDrawFrame() should be doing a minimum of work to get the highest FPS. The physics, AI, etc. don't need to run every frame, so we can put those in another thread. Now we have: Renderer thread - Update animations and draw polys Game thread - Logic & periodic physics, AI, etc. updates UI thread - Android UI interaction only Since you don't ever want to block the UI thread, I run one more thread for the game logic. Maybe that's not necessary though? Is there ever a reason to run game logic in the renderer thread?

    Read the article

  • Wicket testing - AnnotApplicationContextMock - There is no application attached to current thread ma

    - by John
    I've written a couple of tests for a small web app, but I get an error when I try to run the page specific tests that makes use of WicketTester. Google sends me to a mailing list for Apache Wicket, where a user experienced the same exception. He/she said the problem was that AnnotApplicationContextMock was initialized before the Wicket Application. I've pasted my WicketApplication class as well. Has any of you dealt with this error before? I've pasted the exception and the class below. Exception: ------------------------------------------------------------------------------- Test set: com.upbeat.shoutbox.web.TestViewShoutsPage ------------------------------------------------------------------------------- Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.545 sec (AnnotApplicationContextMock.java:61) at com.upbeat.shoutbox.web.TestViewShoutsPage.setUp(TestViewShoutsPage.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.internal.runners.MethodRoadie.runBefores(MethodRoadie.java:129) at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:93) at org.unitils.UnitilsJUnit4TestClassRunner$CustomMethodRoadie.runBeforesThenTestThenAfters(UnitilsJUnit4TestClassRunner.java:168) at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:84) at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:49) at org.unitils.UnitilsJUnit4TestClassRunner.invokeTestMethod(UnitilsJUnit4TestClassRunner.java:127) at org.junit.internal.runners.JUnit4ClassRunner.runMethods(JUnit4ClassRunner.java:59) at org.unitils.UnitilsJUnit4TestClassRunner.access$000(UnitilsJUnit4TestClassRunner.java:42) at org.unitils.UnitilsJUnit4TestClassRunner$1.run(UnitilsJUnit4TestClassRunner.java:87) at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34) at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44) at org.unitils.UnitilsJUnit4TestClassRunner.run(UnitilsJUnit4TestClassRunner.java:94) at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:62) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:140) at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:127) at org.apache.maven.surefire.Surefire.run(Surefire.java:177) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:345) at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1009) My page specific test class: package com.upbeat.shoutbox.web; import org.apache.wicket.application.IComponentInstantiationListener; import org.apache.wicket.protocol.http.WebApplication; import org.apache.wicket.spring.injection.annot.SpringComponentInjector; import org.apache.wicket.spring.injection.annot.test.AnnotApplicationContextMock; import org.apache.wicket.util.tester.FormTester; import org.apache.wicket.util.tester.WicketTester; import org.junit.Before; import 
org.junit.Test; import org.unitils.spring.annotation.SpringBeanByType; import com.upbeat.shoutbox.WicketApplication; import com.upbeat.shoutbox.integrations.AbstractIntegrationTest; import com.upbeat.shoutbox.persistence.ShoutItemDao; import com.upbeat.shoutbox.services.ShoutService; import com.upbeat.shoutbox.web.pages.ViewShoutsPage; public class TestViewShoutsPage extends AbstractIntegrationTest { @SpringBeanByType private ShoutService svc; @SpringBeanByType private ShoutItemDao dao; protected WicketTester tester; @Before public void setUp() { final AnnotApplicationContextMock appctx = new AnnotApplicationContextMock(); appctx.putBean("ShoutItemDao", dao); appctx.putBean("ShoutService", svc); tester = new WicketTester(new WicketApplication() { @Override protected IComponentInstantiationListener getSpringComponentInjector(WebApplication app) { return new SpringComponentInjector(app, appctx, false); } }); } @Test public void testRenderPage() { tester.startPage(ViewShoutsPage.class); tester.assertRenderedPage(ViewShoutsPage.class); FormTester ft = tester.newFormTester("addShoutForm"); ft.setValue("nickname", "test-nickname"); ft.setValue("content", "a whole lot of content"); ft.submit(); tester.assertRenderedPage(ViewShoutsPage.class); tester.assertContains("test-nickname"); tester.assertContains("a whole lot of content"); } } AbstractIntegrationTest: package com.upbeat.shoutbox.integrations; import org.springframework.context.ApplicationContext; import org.unitils.UnitilsJUnit4; import org.unitils.spring.annotation.SpringApplicationContext; @SpringApplicationContext({"/com/upbeat/shoutbox/spring/applicationContext.xml", "applicationContext-test.xml"}) public abstract class AbstractIntegrationTest extends UnitilsJUnit4 { private ApplicationContext applicationContext; } WicketApplication: package com.upbeat.shoutbox; import org.apache.wicket.application.IComponentInstantiationListener; import org.apache.wicket.protocol.http.WebApplication; import org.apache.wicket.request.target.coding.IndexedParamUrlCodingStrategy; import org.apache.wicket.spring.injection.annot.SpringComponentInjector; import com.upbeat.shoutbox.web.pages.ParamPage; import com.upbeat.shoutbox.web.pages.VeryNiceExceptionPage; /** * Application object for your web application. If you want to run this application without deploying, run the Start class. * * @see com.upbeat.shoutbox.Start#main(String[]) */ public class WicketApplication extends WebApplication { /** * Constructor */ public WicketApplication() { } /** * @see org.apache.wicket.Application#getHomePage() */ public Class getHomePage() { return HomePage.class; } @Override protected void init() { super.init(); // Enable wicket ajax debug getDebugSettings().setAjaxDebugModeEnabled(true); addComponentInstantiationListener(getSpringComponentInjector(this)); // Mount pages mountBookmarkablePage("/home", HomePage.class); mountBookmarkablePage("/exceptionPage", VeryNiceExceptionPage.class); mount(new IndexedParamUrlCodingStrategy("/view_params", ParamPage.class)); } protected IComponentInstantiationListener getSpringComponentInjector(WebApplication app) { return new SpringComponentInjector(app); } }

    Read the article

  • Decoding a jpg in the background in WP7

    - by Shahar Prish
    I have a bunch of apps in the marketplace, and so far I have been able, by changing my functionality or going the extra mile, to work around the issue of being unable to decode a jpg in the background into a WriteableBitmap. I am finding a situation where I can't think of good ways to "work around" the issue. I need to decode the image I get from MediaLibrary, reduce its resolution to something manageable (800x800), potentially rotate it, and save it to local storage. By far, the thing that takes the most time (80%) is decoding the bitmap to 800x800 - it takes between 700 ms and 1000 ms. A user may add 7-10 images when starting, which translates to ~10 seconds of waiting while the images are being added. I tried doing this lazily, but at some point you need to pay the piper, and the app essentially stutters for ~1000 ms at that point and the experience is not great. Is there an alternative I am missing for loading the image in the background somehow? (Note on why CreateOptions.BackgroundCreation is no good for me: it loads the image into a BitmapImage, which is great if you want to just use it, but not so great for what I need to do, which is create a copy in Isolated Storage.)

    Read the article

  • Determining whether compiling on Windows or other system

    - by NumberFour
    Hi, I'm currently developing a cross-platform C application. Is there any compiler macro which is defined only during compilation on Windows, so I can #ifdef some Windows-specific #includes? Typical example is selecting between WinSock and Berkeley sockets headers: #ifdef _WINDOWS #include <winsock.h> #else #include <sys/socket.h> #include <netinet/in.h> #include <sys/un.h> #include <arpa/inet.h> #include <netdb.h> #endif So the thing I'm looking for is something like that _WINDOWS macro. Thanks for any tips.
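
    A sketch of the usual approach (not from the original thread): the macro the compiler itself defines on Windows is _WIN32, set by both MSVC and MinGW for 32- and 64-bit targets alike, whereas _WINDOWS is typically only a project-level define added by some Visual Studio templates. The include block can therefore be written like this:

        #if defined(_WIN32)            /* MSVC and MinGW define this on Windows */
          #include <winsock2.h>        /* prefer winsock2.h over the older winsock.h */
          #include <ws2tcpip.h>
        #else                          /* POSIX platforms */
          #include <sys/socket.h>
          #include <netinet/in.h>
          #include <arpa/inet.h>
          #include <netdb.h>
          #include <sys/un.h>
        #endif

        int main(void) { return 0; }   /* compiles on both platform families */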

    Read the article

  • syntax to express mathematical formula concisely in your language of choice

    - by aaa
    Hello. I am developing a functional domain-specific embedded language within C++ to translate formulas into working code as concisely and accurately as possible. Right now my language looks something like this: // implies two nested loops j=0:N, i=0,j (range(i) < j < N)[T(i,j) = (T(i,j) - T(j,i))/e(i+j)]; // implies summation over above expression sum(range(i) < j < N)[(T(i,j) - T(j,i))/e(i+j)]; I am looking for possible syntax improvements/extensions or just different ideas about expressing mathematical formulas as clearly and precisely as possible. Can you give me some syntax examples relating to my question which can be accomplished in your language of choice and which you consider useful? In particular, if you have some ideas about how to translate the above code segments, I would be happy to hear them. Thank you
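
    Not from the original question, just one possible point of comparison: in plain C++ a small higher-order helper already gets fairly close to the mathematical statement of the summation, without proxy objects or operator overloading (the names here are made up for illustration):

        #include <cstddef>
        #include <vector>
        #include <iostream>

        // sum over 0 <= i < j < N of f(i, j)
        template <typename F>
        double sum_triangular(std::size_t N, F f) {
            double total = 0.0;
            for (std::size_t j = 0; j < N; ++j)
                for (std::size_t i = 0; i < j; ++i)
                    total += f(i, j);
            return total;
        }

        int main() {
            const std::size_t N = 4;
            std::vector<std::vector<double>> T(N, std::vector<double>(N, 1.0));
            std::vector<double> e(2 * N, 2.0);

            // sum_{i<j<N} (T(i,j) - T(j,i)) / e(i+j)
            double s = sum_triangular(N, [&](std::size_t i, std::size_t j) {
                return (T[i][j] - T[j][i]) / e[i + j];
            });
            std::cout << s << "\n";   // 0 for a symmetric T
            return 0;
        }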

    Read the article

  • HELP! How can I display one style in Safari and a different style in Chrome?

    - by aWarner
    My client is starting to get antsy, so any help would be greatly appreciated. I am having issues with my secondary page header images shifting. They display correctly in Firefox; I haven't been able to check in IE yet without access to a PC. They were displaying correctly in Chrome, but shifting in Safari. I added the "webkit hack" to write a specific CSS style for Safari, but once I did that it started shifting in Chrome. What can I do to fix this issue? http://airwavetelecom.net/beta/?page_id=2

    Read the article

  • Objects with inheritance memory storage

    - by nikitas350
    Say that I have some classes like this example. class A { int k, m; public: A(int a, int b) { k = a; m = b; } }; class B { int k, m; public: B() { k = 2; m = 3; } }; class C : private A, private B { int k, m; public: C(int a, int b) : A(a, b) { k = b; m = a; } }; Now, in a class C object, are the variables stored in a specific way? I know what happens in a POD object, but this is not a POD object...
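
    (Illustrative sketch only, not from the original question.) The language itself only promises that each base-class subobject and each member lives somewhere inside the complete C object; the exact ordering is up to the ABI. An easy way to see what a particular compiler does is to print the subobject addresses from inside the class, where the private bases are still accessible:

        #include <iostream>

        class A { int k, m; public: A(int a, int b) { k = a; m = b; } };
        class B { int k, m; public: B()             { k = 2; m = 3; } };

        class C : private A, private B {
            int k, m;
        public:
            C(int a, int b) : A(a, b), B() { k = b; m = a; }

            void dump() const {
                // A member of C may still convert 'this' to its private bases.
                std::cout << "C object at      " << static_cast<const void*>(this) << '\n'
                          << "  A subobject at " << static_cast<const void*>(static_cast<const A*>(this)) << '\n'
                          << "  B subobject at " << static_cast<const void*>(static_cast<const B*>(this)) << '\n'
                          << "  C's own k at   " << static_cast<const void*>(&k) << '\n'
                          << "  C's own m at   " << static_cast<const void*>(&m) << '\n'
                          << "sizeof(C) = " << sizeof(C) << '\n';
            }
        };

        int main() {
            C c(1, 2);
            c.dump();
            return 0;
        }

    On the common Itanium C++ ABI the output shows A's pair first, then B's, then C's own members, with no padding for this particular example, but none of that is guaranteed by the standard.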

    Read the article

  • How to disable hiddev96 in linux (or tell it to ignore a specific device)

    - by Miky D
    I'm having problems with a CentOS 5.0 system when using a certain USB device. The problem is that the device advertises itself as a HID device and linux is happy to try to provide support for it: In /ver/log/messages I see a line that reads: hiddev96: USB HID 1.11 Device [KXX USB PRO] on usb-0000:00:1d.0-1 My question comes down to: Is there a way to tell linux to not use hiddev96 for that device in particular? If yes, how? If not, what are my options - can I turn hiddev96 off completely? UPDATE I should probably have been a bit more specific about what is going on. The machine is running Centos 5.0, and on top of it I'm running VMWare workstation with Windows XP - which is where the USB device is actually supposed to operate. All works fine for other USB devices (i.e. VMWare successfully connects the USB device to the guest OS and the OS can use it, but for this particular device VMWare connects it to the guest OS, but the OS can't read/write to it) Every attempt locks up the application that is trying to communicate with the device. I've reason to believe that it is because the device is a HID device and there's some contention between the Linux host and the Windows guest OS in accessing the device. Below is the output from modprobe -l|grep -i hid as requested by @Karolis: # modprobe -l | grep -i hid /lib/modules/2.6.18-53.1.14.el5/kernel/net/bluetooth/hidp/hidp.ko /lib/modules/2.6.18-53.1.14.el5/kernel/drivers/usb/misc/phidgetservo.ko /lib/modules/2.6.18-53.1.14.el5/kernel/drivers/usb/misc/phidgetkit.ko And here is the output of lsmod # lsmod Module Size Used by udf 76997 1 vboxdrv 65696 0 autofs4 24517 2 hidp 23105 2 rfcomm 42457 0 l2cap 29633 10 hidp,rfcomm tun 14657 0 vmnet 49980 16 vmblock 20512 3 vmmon 945236 0 sunrpc 144253 1 cpufreq_ondemand 10573 1 video 19269 0 sbs 18533 0 backlight 10049 0 i2c_ec 9025 1 sbs button 10705 0 battery 13637 0 asus_acpi 19289 0 ac 9157 0 ipv6 251393 27 lp 15849 0 snd_hda_intel 24025 2 snd_hda_codec 202689 1 snd_hda_intel snd_seq_dummy 7877 0 snd_seq_oss 32577 0 nvidia 7824032 31 snd_seq_midi_event 11073 1 snd_seq_oss snd_seq 49713 5 snd_seq_dummy,snd_seq_oss,snd_seq_midi_event snd_seq_device 11725 3 snd_seq_dummy,snd_seq_oss,snd_seq snd_pcm_oss 42945 0 snd_mixer_oss 19009 1 snd_pcm_oss snd_pcm 72133 3 snd_hda_intel,snd_hda_codec,snd_pcm_oss joydev 13313 0 sg 36061 0 parport_pc 29157 1 snd_timer 24645 2 snd_seq,snd_pcm snd 52421 13 snd_hda_intel,snd_hda_codec,snd_seq_oss,snd_seq,snd_seq_device,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer ndiswrapper 170384 0 parport 37513 2 lp,parport_pc hci_usb 20317 2 ide_cd 40033 1 tg3 104389 0 i2c_i801 11469 0 bluetooth 53925 8 hidp,rfcomm,l2cap,hci_usb soundcore 11553 1 snd cdrom 36705 1 ide_cd serio_raw 10693 0 snd_page_alloc 14281 2 snd_hda_intel,snd_pcm i2c_core 23745 3 i2c_ec,nvidia,i2c_i801 pcspkr 7105 0 dm_snapshot 20709 0 dm_zero 6209 0 dm_mirror 28741 0 dm_mod 58201 8 dm_snapshot,dm_zero,dm_mirror ahci 23621 4 libata 115833 1 ahci sd_mod 24897 5 scsi_mod 132685 3 sg,libata,sd_mod ext3 123337 3 jbd 56553 1 ext3 ehci_hcd 32973 0 ohci_hcd 23261 0 uhci_hcd 25421 0

    Read the article

  • Using smartctl to get vendor specific Attributes from ssd drive behind a SmartArray P410 controller

    - by Lairsdragon
    Hi! Recently I have deployed some HP server with SSD's behind a SmartArray P410 controller. While not official supported from HP the server work well sofar. Now I like to get wear level info's, error statistics etc from the drive. While the SA P410 supports a passthru of the SMART Command to a single drive in the array the output I was not able to the the interesting things from the drive. In this case especially the value the Wear level indicator is from interest for me (Attr.ID 233), but this is ony present if the drive is directly attanched to a SATA Controller. smartctl on directly connected ssd: # smartctl -A /dev/sda smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 5 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 3 Spin_Up_Time 0x0000 100 000 000 Old_age Offline In_the_past 0 4 Start_Stop_Count 0x0000 100 000 000 Old_age Offline In_the_past 0 5 Reallocated_Sector_Ct 0x0002 100 100 000 Old_age Always - 0 9 Power_On_Hours 0x0002 100 100 000 Old_age Always - 8561 12 Power_Cycle_Count 0x0002 100 100 000 Old_age Always - 55 192 Power-Off_Retract_Count 0x0002 100 100 000 Old_age Always - 29 232 Unknown_Attribute 0x0003 100 100 010 Pre-fail Always - 0 233 Unknown_Attribute 0x0002 088 088 000 Old_age Always - 0 225 Load_Cycle_Count 0x0000 198 198 000 Old_age Offline - 508509 226 Load-in_Time 0x0002 255 000 000 Old_age Always In_the_past 0 227 Torq-amp_Count 0x0002 000 000 000 Old_age Always FAILING_NOW 0 228 Power-off_Retract_Count 0x0002 000 000 000 Old_age Always FAILING_NOW 0 smartctl on P410 connected ssd: # ./smartctl -A -d cciss,0 /dev/cciss/c1d0 smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net (Right, it is complety empty) smartctl on P410 connected hdd: # ./smartctl -A -d cciss,0 /dev/cciss/c0d0 smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Current Drive Temperature: 27 C Drive Trip Temperature: 68 C Vendor (Seagate) cache information Blocks sent to initiator = 1871654030 Blocks received from initiator = 1360012929 Blocks read from cache and sent to initiator = 2178203797 Number of read and write commands whose size <= segment size = 46052239 Number of read and write commands whose size > segment size = 0 Vendor (Seagate/Hitachi) factory information number of hours powered up = 3363.25 number of minutes until next internal SMART test = 12 Do I hunt here a bug, or is this a limitation of the p410 SMART cmd Passthru?

    Read the article

  • Windows 8 ignores more specific route

    - by Lander
    OS: Windows 8 I have a cabled NIC (connected to router with ip 192.168.1.0) and a WIFI NIC (connected to a router with ip 192.168.1.1) . I want all traffic to go through the cabled NIC, except the 192.168.1.0/8 range should use the wifi-nic. This was working fine in Windows 7, without any manual configuration. In Windows 8 however, it's not. My routing table: =========================================================================== Interface List 14...f2 7b cb 13 e7 f0 ......Microsoft Wi-Fi Direct Virtual Adapter 13...b8 ac 6f 54 d2 5c ......Realtek PCIe FE Family Controller 12...f0 7b cb 13 e7 f0 ......Dell Wireless 1397 WLAN Mini-Card 1...........................Software Loopback Interface 1 15...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 16...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.198 30 0.0.0.0 0.0.0.0 192.168.0.1 192.168.0.233 20 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.0.0 255.255.255.0 On-link 192.168.0.233 276 192.168.0.233 255.255.255.255 On-link 192.168.0.233 276 192.168.0.255 255.255.255.255 On-link 192.168.0.233 276 192.168.1.0 255.255.255.0 192.168.1.1 192.168.1.198 31 192.168.1.198 255.255.255.255 On-link 192.168.1.198 286 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.0.233 276 224.0.0.0 240.0.0.0 On-link 192.168.1.198 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.0.233 276 255.255.255.255 255.255.255.255 On-link 192.168.1.198 286 =========================================================================== Persistent Routes: None I added the rule for 192.168.1.0. I would think Windows should use this rule for the IP 192.168.1.1 because it's more specific than the default-route. However it's not: C:\Windows\system32>tracert 192.168.1.1 Tracing route to 192.168.1.1 over a maximum of 30 hops 1 58 ms 4 ms 4 ms 192.168.0.1 2 68 ms 12 ms 11 ms ^C So... What do I do wrong? And how can I make Windows use the wireless NIC for 192.168.1.0/8

    Read the article

  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2 based web servers, so my experience is many hours, but is limited to CentOS; specifically Amazon's distribution. I'm installing Apache using yum, so therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache. TEST CASE Launch a EC2 Instance, e.g. Amazon Linux AMI 2013.03.1 SSH to the Server Run the commands: $ sudo yum install httpd $ sudo apachectl start $ sudo vi /etc/httpd/conf/httpd.conf $ sudo apachectl restart $ sudo vi /var/www/html/.htaccess In httpd.conf I changed the following, in the DOCROOT section / scope: AllowOverride All In .htaccess, added: (EDIT, I added RewriteEngine On later) RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] Permissions on .htaccess are correct, AFAI can tell: $ ls -al /var/www/html/.htaccess -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess Other info: $ httpd -v Server version: Apache/2.2.24 (Unix) Server built: May 20 2013 21:12:45 $ httpd -M Loaded Modules: core_module (static) ... rewrite_module (shared) ... version_module (shared) Syntax OK EXPECTED BEHAVIOR $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 ACTUAL BEHAVIOR $ curl -I domain.com HTTP/1.1 200 OK Date: Wed, 19 Jun 2013 12:34:10 GMT Server: Apache/2.2.24 (Amazon) Connection: close Content-Type: text/html; charset=UTF-8 TROUBLESHOOTING STEPS In .htaccess, added: BLAH BLAH BLAH ERROR RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an Error log entry: [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration Since I have root access on the server, I then tried moving my rewrite rule directly to the httpd.conf file. THIS WORKED. This tells us several important things are working. $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 HOWEVER, it is bothering me that it didn't work in the .htaccess file. And I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.

    Read the article
