Search Results

Search found 4324 results on 173 pages for 'stream'.


  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

    Test Data

    The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

    CREATE PARTITION FUNCTION PFT (integer)
    AS RANGE RIGHT FOR VALUES
    (
        125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000,
        1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000,
        2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000,
        3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000,
        4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000
    );
    GO
    CREATE PARTITION SCHEME PST
    AS PARTITION PFT
    ALL TO ([PRIMARY]);

    The two tables are:

    CREATE TABLE dbo.T1
    (
        TID integer NOT NULL IDENTITY(0,1),
        Column1 integer NOT NULL,
        Padding binary(100) NOT NULL DEFAULT 0x,
        CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID)
    );

    CREATE TABLE dbo.T2
    (
        TID integer NOT NULL,
        Column1 integer NOT NULL,
        Padding binary(100) NOT NULL DEFAULT 0x,
        CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID)
    );

    The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

    INSERT dbo.T1 WITH (TABLOCKX) (Column1)
    SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1
    FROM dbo.Numbers AS N
    WHERE n BETWEEN 1 AND 5000000;

    In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows:

    CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

    WITH
    L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
    L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
    L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
    L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
    L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
    L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
    Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
    INSERT dbo.Numbers WITH (TABLOCKX)
    SELECT TOP (10000000) n
    FROM Nums
    ORDER BY n
    OPTION (MAXDOP 1);

    Table T1 contains data like this:

    Next we load data into table T2. The relationship between the two tables is that table T2 contains ‘n’ rows for each row in table T1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

    INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1)
    SELECT T.TID, N.n
    FROM dbo.T1 AS T
    JOIN dbo.Numbers AS N
        ON N.n >= 1
        AND N.n <= T.Column1;

    Table T2 ends up containing about 15 million rows. The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

    Partition Distribution

    The following query shows the number of rows in each partition of table T1:

    SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*)
    FROM dbo.T1 AS T
    CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
    GROUP BY CA1.P
    ORDER BY CA1.P;

    There are 40 partitions containing 125,000 rows each (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table T2:

    SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*)
    FROM dbo.T2 AS T
    CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
    GROUP BY CA1.P
    ORDER BY CA1.P;

    There are roughly 375,000 rows in each partition (the rightmost partition is also empty). OK, that’s the test data done.
    Test Query and Execution Plan

    The task is to count the rows resulting from joining tables T1 and T2 on the TID column:

    SET STATISTICS IO ON;
    DECLARE @s datetime2 = SYSUTCDATETIME();

    SELECT COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID;

    SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
    SET STATISTICS IO OFF;

    The optimizer chooses a plan using parallel hash join, and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads. With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Running the query without the actual execution plan or STATISTICS IO information (for maximum performance), it returns in around 2600ms.

    Execution Plan Analysis

    The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

    For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. Both exchanges use the same hash function, so rows from T1 and T2 with the same join key must end up on the same hash join thread.

    Expensive Exchanges

    This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

    SELECT COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE $PARTITION.PFT(T1.TID) = 1
    AND $PARTITION.PFT(T2.TID) = 1
    OPTION (MAXDOP 1);

    The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
    Forcing a Merge Join

    Let’s force the optimizer to use a merge join on the test query using a hint:

    SELECT COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    OPTION (MERGE JOIN);

    This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

    Parallel Merge Join

    We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

    SELECT COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    OPTION (MERGE JOIN, QUERYTRACEON 8649);

    This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good.

    A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro). CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs). Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail.

    The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

    Collocated Joins

    In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

    Costing and Plan Selection

    The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

    -- Pretend IOs are 50x cost temporarily
    DBCC SETIOWEIGHT(50);

    -- Co-located hash join
    SELECT COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    OPTION (RECOMPILE);

    -- Reset IO costing
    DBCC SETIOWEIGHT(1);

    Collocated Join Plan

    In the estimated execution plan for the collocated join, the Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange… and so on until all partitions have been processed.

    It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition. The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

    CPU and Memory Efficiency Improvements

    The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

    Collocated Hash Join Performance

    The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 show the estimation was for 121,951 rows.
    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb. A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

    Collocated Merge Join

    We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know:

    Merge join does not require a memory grant; and
    Merge join was the optimizer’s preferred join option for a single partition join.

    Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

    CROSS APPLY sys.partitions

    We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

    SELECT row_count = SUM(Subtotals.cnt)
    FROM
    (
        -- Partition numbers
        SELECT p.partition_number
        FROM sys.partitions AS p
        WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
        AND p.index_id = 1
    ) AS P
    CROSS APPLY
    (
        -- Count per collocated join
        SELECT cnt = COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
    ) AS SubTotals;

    The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts.

    Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory.

    Using a Temporary Table

    Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
    We can work around that by writing the partition numbers to a temporary table (or table variable):

    SET STATISTICS IO ON;
    DECLARE @s datetime2 = SYSUTCDATETIME();

    CREATE TABLE #P (partition_number integer PRIMARY KEY);

    INSERT #P (partition_number)
    SELECT p.partition_number
    FROM sys.partitions AS p
    WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
    AND p.index_id = 1;

    SELECT row_count = SUM(Subtotals.cnt)
    FROM #P AS p
    CROSS APPLY
    (
        SELECT cnt = COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
    ) AS SubTotals;

    DROP TABLE #P;

    SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
    SET STATISTICS IO OFF;

    Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to. In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time). Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

    Parallel Collocated Merge Join

    We can produce the desired parallel plan using trace flag 8649 again:

    SELECT row_count = SUM(Subtotals.cnt)
    FROM #P AS p
    CROSS APPLY
    (
        SELECT cnt = COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
    ) AS SubTotals
    OPTION (QUERYTRACEON 8649);

    In the actual execution plan, one difference from the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

    Performance

    The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

    Collocated parallel merge join: 1350ms
    Parallel hash join: 2600ms
    Collocated serial merge join: 3500ms
    Serial merge join: 5000ms
    Parallel merge join: 8400ms
    Collocated parallel hash join: 25,300ms (hash spill per partition)

    The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
    This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

    Parallel Collocated Merge Join with Demand Partitioning

    This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

    SELECT row_count = SUM(Subtotals.cnt)
    FROM
    (
        VALUES
            (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
            (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
            (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
            (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
    ) AS P (partition_number)
    CROSS APPLY
    (
        SELECT cnt = COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
    ) AS SubTotals
    OPTION (QUERYTRACEON 8649);

    The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

    Final Words

    It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

    From a thread’s point of view…

    If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
    The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

    Related Reading
    Understanding and Using Parallelism in SQL Server
    Parallel Execution Plans Suck

    © 2013 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi


  • why is port 500 in use and how can I free it? VPNC error

    - by kirill_igum
    I tried to use Network Manager to connect to my university's VPN; it didn't work. Then I used the command-line vpnc:

    > sudo vpnc
    [sudo] password for kirill:
    Enter IPSec gateway address: vpn.net.**.edu
    Enter IPSec ID for vpn.net.**.edu: **
    Enter IPSec secret for **@vpn.net.**.edu:
    Enter username for vpn.net.**.edu: **
    Enter password for **@vpn.net.**.edu:
    vpnc: Error binding to source port. Try '--local-port 0'
    Failed to bind to 0.0.0.0:500: Address already in use

    Then I ran sudo vpnc --local-port 0 with the same config and it all worked. I'd like to be able to use the Network Manager GUI to connect to the VPN. I wanted to find out which program uses port 500:

    > sudo netstat -a |grep 500
    tcp        0      0 *:17500        *:*        LISTEN
    udp        0      0 *:4500         *:*
    udp        0      0 *:17500        *:*
    unix  3    [ ]    STREAM    CONNECTED    63500
    unix  3    [ ]    STREAM    CONNECTED    12500    @/tmp/.X11-unix/X0

    There is nothing that uses 500. I'm using Ubuntu 10.10 on a ThinkPad X201t.
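    For reference, a more targeted way to ask the kernel what owns UDP 500 (untested suggestions, not output from this machine) would be something like:

    # show the owning process, if any, for UDP port 500 exactly
    sudo lsof -nP -i UDP:500
    sudo fuser -v 500/udp
    # plain netstat, but with a trailing space so 4500/17500/63500 don't match
    sudo netstat -anup | grep ':500 '

    A likely suspect would be an IPsec daemon (racoon, pluto or similar) left running by an earlier connection attempt, but that is only a guess.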


  • Problem with installing Pear (XAMPP)

    - by sanders
    Hello, I installed the latest version of XAMPP (1.7.4) on my Windows XP system. Now when I want to install PEAR:

    k:\xampp\php>go-pear.bat

    I am confronted with the following error:

    manifest cannot be larger than 100 MB in phar "K:\xampp\php\PEAR\go-pear.phar"
    PHP Warning: require_once(phar://go-pear.phar/index.php): failed to open stream: phar error: invalid url or non-existent phar "phar://go-pear.phar/index.php" in K:\xampp\php\PEAR\go-pear.phar on line 1236
    Warning: require_once(phar://go-pear.phar/index.php): failed to open stream: phar error: invalid url or non-existent phar "phar://go-pear.phar/index.php" in K:\xampp\php\PEAR\go-pear.phar on line 1236
    Press any key to continue

    Line 1236 in go-pear.phar is this:

    require_once 'phar://go-pear.phar/index.php'; __HALT_COMPILER();<

    And after the last < there is a weird character. Taking that character away doesn't help either. See the image below for the character. Any help is very much appreciated.


  • Running a small IPTV station

    - by nixterrimus
    I'm looking to run an IPTV station for my dorm. I know I can serve multicast, so that's not a problem. The station will serve out podcasts and other CC-licensed content. The target endpoint is XBMC, a media center. So far I know that I need to serve an RTP stream over UDP carrying MPEG-4 AVC main or high profile video with AAC (or AC3?) audio. I've had some luck using VLC with VLM to stream, but it seems limited. What are my other options? Everything has to run on Linux, hopefully open source. How can I use playlists and not live streams? What are my software options?
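    For what it's worth, the kind of VLM setup I have been experimenting with looks roughly like this (a sketch only; the channel name, file paths, multicast address and bitrates are made up, and the exact output-chain options may need adjusting for a given VLC build):

    # vlm.conf - one broadcast channel that loops through a fixed playlist
    new dormtv broadcast enabled loop
    setup dormtv input /srv/podcasts/episode1.mp4
    setup dormtv input /srv/podcasts/episode2.mp4
    setup dormtv output #transcode{vcodec=h264,vb=2500,acodec=mp4a,ab=128}:rtp{mux=ts,dst=239.255.12.42,port=5004,sap,name="DormTV"}
    control dormtv play

    # run it headless
    cvlc -I dummy --vlm-conf vlm.conf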


  • Encoding in Scene Builder

    - by Agafonova Victoria
    I generate an FXML file with Scene Builder. I need it to contain some Cyrillic text. When I edit this file with Scene Builder I can see normal Cyrillic letters (screen 1). After compiling and running my program with this FXML file, I see not Cyrillic letters but some artefacts (screen 2). But, as you can see on screen 3, its XML file encoding is UTF-8. Also, you can see there that it is saved in ANSI. I've tried to open it with other editors (default Eclipse and Sublime Text 2) and they showed the wrong encoding too (screen 4 and screen 5). At first I tried to convert it from ANSI to UTF-8 (with Notepad++). After that, Eclipse and Sublime Text 2 started displaying the Cyrillic letters as they should be. But Scene Builder gave an error when I tried to open this file with it:

    Error loading file C:\eclipse\workspace\equification\src\main\java\ru\igs\ava\equification\test.fxml.
    C:\eclipse\workspace\equification\src\main\java\ru\igs\ava\equification\test.fxml:1: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog.

    And the Java compiler gave me an error:

    ??? 08, 2012 8:11:03 PM javafx.fxml.FXMLLoader logException
    SEVERE: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog.
    /C:/eclipse/workspace/equification/target/classes/ru/igs/ava/equification/test.fxml:1
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at ru.igs.ava.equification.EquificationFX.start(EquificationFX.java:22)
    at com.sun.javafx.application.LauncherImpl$5.run(Unknown Source)
    at com.sun.javafx.application.PlatformImpl$4.run(Unknown Source)
    at com.sun.javafx.application.PlatformImpl$3.run(Unknown Source)
    at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
    at com.sun.glass.ui.win.WinApplication.access$100(Unknown Source)
    at com.sun.glass.ui.win.WinApplication$2$1.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    Exception in Application start method
    Exception in thread "main" java.lang.RuntimeException: Exception in Application start method
    at com.sun.javafx.application.LauncherImpl.launchApplication1(Unknown Source)
    at com.sun.javafx.application.LauncherImpl.access$000(Unknown Source)
    at com.sun.javafx.application.LauncherImpl$1.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    Caused by: javafx.fxml.LoadException: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog.
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at javafx.fxml.FXMLLoader.load(Unknown Source)
    at ru.igs.ava.equification.EquificationFX.start(EquificationFX.java:22)
    at com.sun.javafx.application.LauncherImpl$5.run(Unknown Source)
    at com.sun.javafx.application.PlatformImpl$4.run(Unknown Source)
    at com.sun.javafx.application.PlatformImpl$3.run(Unknown Source)
    at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
    at com.sun.glass.ui.win.WinApplication.access$100(Unknown Source)
    at com.sun.glass.ui.win.WinApplication$2$1.run(Unknown Source)
    ... 1 more
    Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1] Message: Content is not allowed in prolog.
    at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(Unknown Source)
    at javax.xml.stream.util.StreamReaderDelegate.next(Unknown Source)
    ... 14 more

    So, I converted it back to ANSI. And, having the file in ANSI, I changed its "artefacted" text to Cyrillic letters manually. Now I can see normal text when I run my program, but when I open this fixed file via Scene Builder, Scene Builder shows me some "artefacted" text (screen 7). So, how can I fix this situation?


  • xamlparser error after clickonce deployment.Application crashing after installation

    - by black sensei
    Hello Good People, I've built an WPF application with visual studio 2008 and created an installer for it.Works fine so far.I realized it lacks the automatic updates feature, and after trying several solutions, i decided to give a try to clickonce deployment.After a successful deployment on a network server, i 've noticed that the application crashes after installation of the downloaded app.It complains about this: Cannot create instance of 'Login' defined in assembly 'MyApplication, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. Exception has been thrown by the target of an invocation. Error in markup file 'MyApplication;component/login.xaml' Line 1 Position 9. here is the stacktrace at System.Windows.Markup.XamlParseException.ThrowException(String message, Exception innerException, Int32 lineNumber, Int32 linePosition, Uri baseUri, XamlObjectIds currentXamlObjectIds, XamlObjectIds contextXamlObjectIds, Type objectType) at System.Windows.Markup.XamlParseException.ThrowException(ParserContext parserContext, Int32 lineNumber, Int32 linePosition, String message, Exception innerException) at System.Windows.Markup.BamlRecordReader.ThrowExceptionWithLine(String message, Exception innerException) at System.Windows.Markup.BamlRecordReader.CreateInstanceFromType(Type type, Int16 typeId, Boolean throwOnFail) at System.Windows.Markup.BamlRecordReader.GetElementAndFlags(BamlElementStartRecord bamlElementStartRecord, Object& element, ReaderFlags& flags, Type& delayCreatedType, Int16& delayCreatedTypeId) at System.Windows.Markup.BamlRecordReader.BaseReadElementStartRecord(BamlElementStartRecord bamlElementRecord) at System.Windows.Markup.BamlRecordReader.ReadElementStartRecord(BamlElementStartRecord bamlElementRecord) at System.Windows.Markup.BamlRecordReader.ReadRecord(BamlRecord bamlRecord) at System.Windows.Markup.BamlRecordReader.Read(Boolean singleRecord) at System.Windows.Markup.TreeBuilderBamlTranslator.ParseFragment() at System.Windows.Markup.TreeBuilder.Parse() at System.Windows.Markup.XamlReader.LoadBaml(Stream stream, ParserContext parserContext, Object parent, Boolean closeStream) at System.Windows.Application.LoadBamlStreamWithSyncInfo(Stream stream, ParserContext pc) at System.Windows.Application.LoadComponent(Uri resourceLocator, Boolean bSkipJournaledProperties) at System.Windows.Application.DoStartup() at System.Windows.Application.<.ctorb__0(Object unused) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.Dispatcher.WrappedInvoke(Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.DispatcherOperation.InvokeImpl() at System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Windows.Threading.DispatcherOperation.Invoke() at System.Windows.Threading.Dispatcher.ProcessQueue() at 
System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.Dispatcher.WrappedInvoke(Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter) at System.Windows.Threading.Dispatcher.Invoke(DispatcherPriority priority, Delegate method, Object arg) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg) at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.PushFrame(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.Run() at System.Windows.Application.RunDispatcher(Object ignore) at System.Windows.Application.RunInternal(Window window) at System.Windows.Application.Run(Window window) at System.Windows.Application.Run() at myApplication.App.Main() here is just the region the debugger is pointing to <Window x:Class="MyApplication.Login" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:src="clr-namespace:MyApplication" xmlns:UI="clr-namespace:UI;assembly=UI" Title="My Application" Height="400" Width="550" ResizeMode="NoResize" WindowStyle="ThreeDBorderWindow" WindowStartupLocation="CenterScreen" Name="Logine" Loaded="Logine_Loaded" Closed="Logine_Closed" Icon="orLogo.ico"> But the installer version as in the msi from setup project works fine.so i cannot see where the error is comming from since i can have design view. Question 1 : Does any one have a similar issue, or is that a known issue? Question 2 : If it's a known issue then what are alternative.I might give up on the clickonce but then i my automatic update feature will be lost (as in there is none which is not ovekill or seriously outdated that i can find right now). thanks for reading this and for pointing me to the right direction.


  • TIME_WAIT connections not being cleaned up after timeout period expires

    - by Mark Dawson
    I am stress testing one of my servers by hitting it with a constant stream of new network connections, the tcp_fin_timeout is set to 60, so if I send a constant stream of something like 100 requests per second, I would expect to see a rolling average of 6000 (60 * 100) connections in a TIME_WAIT state, this is happening, but looking in netstat (using -o) to see the timers, I see connections like: TIME_WAIT timewait (0.00/0/0) where their timeout has expired but the connection is still hanging around, I then eventually run out of connections. Anyone know why these connections don't get cleaned up? If I stop creating new connections they do eventually disappear but while I am constantly creating new connections they don't, seems like the kernel isn't getting chance to clean them up? Is there some other config options I need to set to remove the connections as soon as they have expired? The server is running Ubuntu and my web server is nginx. Also it has iptables with connection tracking, not sure if that would cause these TIME_WAIT connections to live on. Thanks Mark.
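    In case it helps anyone reproduce this, the commands and knobs I have been looking at are roughly these (illustrative only; the sysctl names assume a reasonably recent kernel with connection tracking loaded):

    # how many sockets are actually sitting in TIME_WAIT right now
    ss -tan state time-wait | wc -l
    # note: this governs FIN-WAIT-2, not the fixed 60s TIME_WAIT interval
    sysctl net.ipv4.tcp_fin_timeout
    # conntrack table pressure and its own TIME_WAIT timer
    sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
    sysctl net.netfilter.nf_conntrack_tcp_timeout_time_wait
    # allows reuse of TIME_WAIT sockets for new outgoing connections
    sysctl net.ipv4.tcp_tw_reuse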


  • Using nginx's proxy_redirect when the response location's domain varies

    - by Chalky
    I am making an web app using SoundCloud's API. Requesting an MP3 to stream involves two requests. I'll give an example. Firstly: http://api.soundcloud.com/tracks/59815100/stream This returns a 302 with a temporary link to the actual MP3 (which varies each time), for example: http://ec-media.soundcloud.com/xYZk0lr2TeQf.128.mp3?ff61182e3c2ecefa438cd02102d0e385713f0c1faf3b0339595667fd0907ea1074840971e6330e82d1d6e15dd660317b237a59b15dd687c7c4215ca64124f80381e8bb3cb5&AWSAccessKeyId=AKIAJ4IAZE5EOI7PA7VQ&Expires=1347621419&Signature=Usd%2BqsuO9wGyn5%2BrFjIQDSrZVRY%3D The issue I had was that I am attempting to load the MP3 via JavaScript's XMLHTTPRequest, and for security reasons the browser can't follow the 302, as ec-media.soundcloud.com does not set a header saying it is safe for the browser to access via XMLHTTPRequest. So instead of using the SoundCloud URL, I set up two locations in nginx, so the browser only interacts with the server my app is hosted on and no security errors come up: location /soundcloud/tracks/ { # rewrite URL to match api.soundcloud.com's URL structure rewrite \/soundcloud\/tracks\/(\d*) /tracks/$1/stream break; proxy_set_header Host api.soundcloud.com; proxy_pass http://api.soundcloud.com; # the 302 will redirect to /soundcloud/media instead of the original domain proxy_redirect http://ec-media.soundcloud.com /soundcloud/media; } location /soundcloud/media/ { rewrite \/soundcloud\/media\/(.*) /$1 break; proxy_set_header Host ec-media.soundcloud.com; proxy_pass http://ec-media.soundcloud.com; } So myserver/soundcloud/tracks/59815100 returns a 302 to /myserver/soundcloud/media/xYZk0lr2TeQf.128.mp3...etc, which then forwards the MP3 on. This works! However, I have hit a snag. Sometimes the 302 location is not ec-media.soundcloud.com, it's ak-media.soundcloud.com. There are possibly even more servers out there and presumably more could appear at any time. Is there any way I can handle an arbitrary 302 location without having to manually enter each possible variation? Or is it possible for nginx to handle the redirect and return the response of the second step? So myserver/soundcloud/tracks/59815100 follows the 302 behind the scenes and returns the MP3? The browser automatically follows the redirect, so I can't do anything with the initial response on the client side. I am new to nginx and in a bit over my head so apologies if I've missed something obvious, or it's beyond the scope of nginx. Thanks a lot for reading.


  • How to get better video quality in Lync?

    - by sinned
    I want to use a ConferenceCam from Logitech to stream talks live via Lync. When I view the RAW webcam image via VLC, the quality is very good (but the latency is high because of buffering). However, when I stream it using Lync, the video gets blurry. Is there a way to ensure QoS in Lync or otherwise improve the video quality to (near-)native? I would rather have some dropped frames than a lower resolution where I can't read the slides. In my setup, I use Lync with an Office365-E3 contract, so I have no Lync-Server in my network. I thought about replacing Lync completely with VLC, but I first want to try Lync because VLC will probably cause firewall issues. Also, I haven't looked up the VLC parameters for less buffering, faster encoding, a bit lower resolution (natively it's more than HD) and streaming.


  • MySQL query (over SSL) fails in IIS 7 using default AppPool identity

    - by Jon Tackabury
    I am trying to run a website locally in Windows 7 under IIS 7. I have the AppPool configured to use "Classic" mode, but connecting to a MySQL DB that requires SSL fails. If I change the identity to my user account it works perfectly. It fails when using the default "ApplicationPoolIdentity" account. Is there something I'm missing somewhere? Why would running a MySQL query over SSL fail for certain user accounts? Update: This is the exception that the MySQL Connector is throwing: "Reading from the stream has failed. Attempted to read past the end of the stream."


  • How to solve a deallocated connection in iPhone SDK 3.1.3? - Streams - CFSockets

    - by Christian
    Hi everyone, Debugging my implementation I found a memory leak issue. I know where is the issue, I tried to solve it but sadly without success. I will try to explain you, maybe someone of you can help with this. First I have two classes involved in the issue, the publish class (where publishing the service and socket configuration is done) and the connection (where the socket binding and the streams configuration is done). The main issue is in the connection via native socket. In the 'publish' class the "server" accepts a connection with a callback. The callback has the native-socket information. Then, a connection with native-socket information is created. Next, the socket binding and the streams configuration is done. When those actions are successful the instance of the connection is saved in a mutable array. Thus, the connection is established. static void AcceptCallback(CFSocketRef socket, CFSocketCallBackType type, CFDataRef address, const void *data, void *info) { Publish *rePoint = (Publish *)info; if ( type != kCFSocketAcceptCallBack) { return; } CFSocketNativeHandle nativeSocketHandle = *((CFSocketNativeHandle *)data); NSLog(@"The AcceptCallback was called, a connection request arrived to the server"); [rePoint handleNewNativeSocket:nativeSocketHandle]; } - (void)handleNewNativeSocket:(CFSocketNativeHandle)nativeSocketHandle{ Connection *connection = [[[Connection alloc] initWithNativeSocketHandle:nativeSocketHandle] autorelease]; // Create the connection if (connection == nil) { close(nativeSocketHandle); return; } NSLog(@"The connection from the server was created now try to connect"); if ( ! [connection connect]) { [connection close]; return; } [clients addObject:connection]; //save the connection trying to avoid the deallocation } The next step is receive the information from the client, thus a read-stream callback is triggered with the information of the established connection. But when the callback-handler tries to use this connection the error occurs, it says that such connection is deallocated. The issue here is that I don't know where/when the connection is deallocated and how to know it. I am using the debugger, but after some trials, I don't see more info. void myReadStreamCallBack (CFReadStreamRef stream, CFStreamEventType eventType, void *info) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; Connection *handlerEv = [[(Connection *)info retain] autorelease]; // The error -[Connection retain]: message sent to deallocated instance 0x1f5ef0 (Where 0x1f5ef0 is the reference to the established connection) [handlerEv readStreamHandleEvent:stream andEvent:eventType]; [pool drain]; } void myWriteStreamCallBack (CFWriteStreamRef stream, CFStreamEventType eventType, void *info){ NSAutoreleasePool *p = [[NSAutoreleasePool alloc] init]; Connection *handlerEv = [[(Connection *)info retain] autorelease]; //Sometimes the error also happens here, I tried without the pool, but it doesn't help neither. [handlerEv writeStreamHandleEvent:eventType]; [p drain]; } Something strange is that when I run the debugger(with breakpoints) everything goes well, the connection is not deallocated and the callbacks work fine and the server is able to receive the message. I will appreciate any hint!


  • One codec to rule them all

    - by AngryHacker
    I am streaming videos in my house via Windows Media Player Streaming, which is basically DLNA. So theoretically any DLNA compliant device can pick up the stream. However, I've quickly found that this is only one part of the solution. Over the years I've accumulated a ton of video-capable devices. While all these devices can see the Windows Media Player stream, they all speak in different codecs. And frankly, I am confused by codecs. In the beginning, I thought that the codecs were defined by the filename extension they carried (e.g. avi, mp4, wmv, etc...), but after further research, it looks like the extensions are simply containers. Inside an .avi file could reside several different codecs. So my question is this: is there a format/codec that plays equally well on any device.


  • Capture live streaming

    - by acidzombie24
    I want to capture an RTMP stream. The videos are live and different every day, and usually I can't tune in because I am busy at work doing something :(. I would like to capture the stream; however, they use anti-capturing techniques (it's live and free, so I don't understand why). I tried Orbit Downloader without any luck. The URL seems kind of weird (judging by Grab++). It has || in it and other URLs. What applications can I use to capture this? I am open to using Linux.
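    The sort of thing I was hoping for is a command-line grabber along these lines (a sketch only; the server, application and stream names are placeholders, and sites that fight capture often also check the player, hence the page/SWF options):

    # basic live capture
    rtmpdump --live -r "rtmp://example.com/app/streamname" -o capture.flv
    # same, but also passing the page and SWF URLs the site expects
    rtmpdump --live -r "rtmp://example.com/app/streamname" -p "http://example.com/player.html" -s "http://example.com/player.swf" -o capture.flv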


  • Intentionally hogging Wi-Fi bandwidth?

    - by endolith
    I've noticed that Wi-Fi signals are interfering with a product I'm developing, and I'd like to generate as much Wi-Fi noise as possible for testing purposes. Is there any better solution than, say, dragging large files from one computer to another? Ideally I'd like one computer to just generate a stream of data ex nihilo and stream it to the other computer where it will just be obliterated, so it hogs bandwidth without reading or writing the hard drives. I'm in Windows, though, so there's no /dev/random or /dev/null. And it would be cool if I could vary the bandwidth, too, but not necessary.
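    One concrete way to do what I am describing (generate traffic from memory on one machine and discard it on the other, with an adjustable rate) would be a traffic generator such as iperf, which also runs on Windows; the address and numbers below are just examples:

    # on the receiving computer
    iperf -s -u
    # on the sending computer: 40 Mbit/s of UDP for five minutes
    iperf -c 192.168.1.10 -u -b 40M -t 300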


  • Clickonce installation fails after addition of WCF service project

    - by Ant
    So I have a winform solution, deployed via clickonce. Eveything worked fine until i added a WCF project. (see error in parsing the manifest file at end of post) Now I notice that MSBuild compiles the service into a _PublishedWebsites dir. I don't know what the need for this is, but I am suspecting this is the cause of the problem. This wcf project references some other projects within the solution. I am actually hosting the wcf service within the application so I don't really need MSBuild to do all this for me. Any ideas? ===================================================================================== PLATFORM VERSION INFO Windows : 5.1.2600.131072 (Win32NT) Common Language Runtime : 2.0.50727.3603 System.Deployment.dll : 2.0.50727.3053 (netfxsp.050727-3000) mscorwks.dll : 2.0.50727.3603 (GDR.050727-3600) dfdll.dll : 2.0.50727.3053 (netfxsp.050727-3000) dfshim.dll : 2.0.50727.3053 (netfxsp.050727-3000) SOURCES Deployment url : file:///C:/applications/abc/dev/abc.Application.application IDENTITIES Deployment Identity : Flow Management System.app, Version=1.4.0.0, Culture=neutral, PublicKeyToken=8453086392175e0f, processorArchitecture=msil APPLICATION SUMMARY * Installable application. * Trust url parameter is set. ERROR SUMMARY Below is a summary of the errors, details of these errors are listed later in the log. * Activation of C:\applications\abc\dev\abc.Application.application resulted in exception. Following failure messages were detected: + Exception reading manifest from file:///C:/applications/abc/dev/1.4.0.0/abc.Application.exe.manifest: the manifest may not be valid or the file could not be opened. + Parsing and DOM creation of the manifest resulted in error. Following parsing errors were noticed: -HRESULT: 0x80070c81 Start line: 0 Start column: 0 Host file: + Exception from HRESULT: 0x80070C81 COMPONENT STORE TRANSACTION FAILURE SUMMARY No transaction error was detected. WARNINGS There were no warnings during this operation. OPERATION PROGRESS STATUS * [12/03/2010 6:33:53 PM] : Activation of C:\applications\abc\dev\abc.Application.application has started. * [12/03/2010 6:33:53 PM] : Processing of deployment manifest has successfully completed. * [12/03/2010 6:33:53 PM] : Installation of the application has started. ERROR DETAILS Following errors were detected during this operation. * [12/03/2010 6:33:53 PM] System.Deployment.Application.InvalidDeploymentException (ManifestParse) - Exception reading manifest from file:///C:/applications/abc/dev/1.4.0.0/abc.Application.exe.manifest: the manifest may not be valid or the file could not be opened. 
- Source: System.Deployment - Stack trace: at System.Deployment.Application.ManifestReader.FromDocument(String localPath, ManifestType manifestType, Uri sourceUri) at System.Deployment.Application.DownloadManager.DownloadManifest(Uri& sourceUri, String targetPath, IDownloadNotification notification, DownloadOptions options, ManifestType manifestType, ServerInformation& serverInformation) at System.Deployment.Application.DownloadManager.DownloadApplicationManifest(AssemblyManifest deploymentManifest, String targetDir, Uri deploymentUri, IDownloadNotification notification, DownloadOptions options, Uri& appSourceUri, String& appManifestPath) at System.Deployment.Application.ApplicationActivator.DownloadApplication(SubscriptionState subState, ActivationDescription actDesc, Int64 transactionId, TempDirectory& downloadTemp) at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc) at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl) at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state) --- Inner Exception --- System.Deployment.Application.InvalidDeploymentException (ManifestParse) - Parsing and DOM creation of the manifest resulted in error. Following parsing errors were noticed: -HRESULT: 0x80070c81 Start line: 0 Start column: 0 Host file: - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.LoadCMSFromStream(Stream stream) at System.Deployment.Application.Manifest.AssemblyManifest..ctor(FileStream fileStream) at System.Deployment.Application.ManifestReader.FromDocument(String localPath, ManifestType manifestType, Uri sourceUri) --- Inner Exception --- System.Runtime.InteropServices.COMException - Exception from HRESULT: 0x80070C81 - Source: System.Deployment - Stack trace: at System.Deployment.Internal.Isolation.IsolationInterop.CreateCMSFromXml(Byte[] buffer, UInt32 bufferSize, IManifestParseErrorCallback Callback, Guid& riid) at System.Deployment.Application.Manifest.AssemblyManifest.LoadCMSFromStream(Stream stream) COMPONENT STORE TRANSACTION DETAILS No transaction information is available.


  • How to read the statistics in Media Player Classic?

    - by netvope
    I understand that the two numbers under bitrate are the average bitrate and the current bitrate of the stream. But what are the two numbers under buffers? I suppose the second one is the amount of data loaded in memory, but what is the first number? The amount of data decoded? Also, why are there a jitter and a sync offset? (For your reference, here stream 0-6 are video, audio track 1, audio track 2, subtitle track 1 and subtitle track 2.)


  • Mac/iPhone:Streaming video file to iPhone

    - by user34157
    Hi, I am redirecting my question to superuser from stackoverflow: I have a http streaming link which gives me .flv streaming feed. I want to convert that and access in my iPhone program. How can i do that? I want to have a desktop software like VLC and input this streaming feed URL and convert to iPhone supported and stream again to iPhone. I tried VLC with H.264 and Mpeg-1 audio, but seems to be it doesn't give the supported format, so as iPhone program doesn't play the video. Could someone please guide me how can i setup a desktop software which can stream iPhone supported file? Thanks in advance.


  • How to install PHP, Pear, PECL, and APC with Homebrew on Mac OS X?

    - by Andrew
    I'm trying to install APC for PHP 5.3 in the easiest way possible. I love Homebrew so I started down that route. I was able to install PHP 5.3.6 with this command: brew install https://github.com/adamv/homebrew-alt/raw/master/duplicates/php.rb --with-mysql I think this is supposed to install PHP, Pear, and PECL. It seems to install these just fine. Now when I try to install APC: $ pecl install apc downloading APC-3.1.9.tgz ... Starting to download APC-3.1.9.tgz (155,540 bytes) .................................done: 155,540 bytes Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in PackageFile.php on line 305 Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305 Fatal error: require_once(): Failed opening required 'Archive/Tar.php' (include_path='/usr/local/Cellar/php/5.3.6/lib/php') in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305 How can I fix this?
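    The missing Archive/Tar.php suggests PEAR's own Archive_Tar package is not where PHP's include_path is pointing. A few things I would check first (the paths below are guesses taken from the error message above):

    # is the file on disk at all?
    ls /usr/local/Cellar/php/5.3.6/lib/php/Archive/Tar.php
    # where does PEAR think its classes live, and what is PHP's include_path?
    pear config-get php_dir
    php -i | grep include_path
    # if Archive_Tar is simply missing, try installing it explicitly
    pear install Archive_Tar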


  • VIDEO Streaming - How to output the video timestamp?

    - by Emmanuel Brunet
    I would like to back up an ASF video stream with the timestamp (I mean the original recording date/time) on the output stream.

    Usage

    I convert / store my video using the MKV format (Matroska) with the libx264 (video) and AAC (audio) codecs. Assume the IP camera user account is account and the password is password:

    $ ffmpeg -i http://admin:alpha1237@webcam/videostream.asf -c:v libx264 -s 768X432 -crf 13 -b:v 2500K -pix_fmt yuv420p -c:a libfdk_aac output.mkv

    This works fine on a TENVIS JPT3815W camera.

    How to

    I need to get the video timestamp available for display as a subtitle or other metadata field managed by standard video players, and ideally to be able to hide it or not during playback. Does anybody know how to achieve that? Thanks in advance for your help.
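    Two directions I have been considering (untested sketches; the font path is an assumption and the drawtext filter needs an ffmpeg build with libfreetype):

    # burn the wall-clock time into the picture (always visible, cannot be hidden later)
    ffmpeg -i http://admin:password@webcam/videostream.asf \
        -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='%{localtime}':x=10:y=10:fontcolor=white" \
        -c:v libx264 -crf 13 -c:a libfdk_aac output.mkv

    # or just record the start time in the container metadata and let the player show it
    ffmpeg -i http://admin:password@webcam/videostream.asf \
        -metadata creation_time="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
        -c:v libx264 -crf 13 -c:a libfdk_aac output.mkv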

    Read the article

  • Share a USB sound card over a network/bluetooth (Mac & PC)

    - by AlexW
    I've been wondering how I can stream audio to an external Edirol USB sound card, wirelessly, on both Mac and PC. I'm not looking for high-quality transmission, just to play mp3s from my Mac laptop to a USB sound card that is attached to two very nice balanced studio reference monitors. Is there any way I can firstly power the sound card box and secondly provide it with an audio stream along its USB input? I've looked at the Belkin USB hub, and I have a Time Capsule with the AirPort interface inside. These things seem to do vaguely what I want, but when it comes to audio, the specifications are less clear. Any suggestions very welcome.

    Read the article

  • Compress file to bytes for uploading to SQL Server

    - by Chris
    I am trying to zip files to an SQL Server database table. I can't ensure that the user of the tool has write privileges on the source file folder, so I want to load the file into memory, compress it to an array of bytes and insert it into my database. The code below does not work.

    class ZipFileToSql
    {
        public event MessageHandler Message;

        protected virtual void OnMessage(string msg)
        {
            if (Message != null)
            {
                MessageHandlerEventArgs args = new MessageHandlerEventArgs();
                args.Message = msg;
                Message(this, args);
            }
        }

        private int sourceFileId;
        private SqlConnection Conn;
        private string PathToFile;
        private bool isExecuting;

        public bool IsExecuting { get { return isExecuting; } }
        public int SourceFileId { get { return sourceFileId; } }

        public ZipFileToSql(string pathToFile, SqlConnection conn)
        {
            isExecuting = false;
            PathToFile = pathToFile;
            Conn = conn;
        }

        public void Execute()
        {
            isExecuting = true;
            byte[] data;
            byte[] cmpData;
            //create temp zip file
            OnMessage("Reading file to memory");
            FileStream fs = File.OpenRead(PathToFile);
            data = new byte[fs.Length];
            ReadWholeArray(fs, data);
            OnMessage("Zipping file to memory");
            MemoryStream ms = new MemoryStream();
            GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true);
            zip.Write(data, 0, data.Length);
            cmpData = new byte[ms.Length];
            ReadWholeArray(ms, cmpData);
            OnMessage("Saving file to database");
            using (SqlCommand cmd = Conn.CreateCommand())
            {
                cmd.CommandText = @"MergeFileUploads";
                cmd.CommandType = CommandType.StoredProcedure;
                //cmd.Parameters.Add("@File", SqlDbType.VarBinary).Value = data;
                cmd.Parameters.Add("@File", SqlDbType.VarBinary).Value = cmpData;
                SqlParameter p = new SqlParameter();
                p.ParameterName = "@SourceFileId";
                p.Direction = ParameterDirection.Output;
                p.SqlDbType = SqlDbType.Int;
                cmd.Parameters.Add(p);
                cmd.ExecuteNonQuery();
                sourceFileId = (int)p.Value;
            }
            OnMessage("File Saved");
            isExecuting = false;
        }

        private void ReadWholeArray(Stream stream, byte[] data)
        {
            int offset = 0;
            int remaining = data.Length;
            float Step = data.Length / 100;
            float NextStep = data.Length - Step;
            while (remaining > 0)
            {
                int read = stream.Read(data, offset, remaining);
                if (read <= 0)
                    throw new EndOfStreamException(
                        String.Format("End of stream reached with {0} bytes left to read", remaining));
                remaining -= read;
                offset += read;
                if (remaining < NextStep)
                {
                    NextStep -= Step;
                }
            }
        }
    }
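
    For reference, the most common failure mode in code like this is that the GZipStream is never closed before the MemoryStream is read back (so the final compressed block is still buffered) and that the MemoryStream is then read from its current position, which is already at the end. A minimal sketch of a compression helper that avoids both problems (the method name is illustrative, not from the original code):

    // Sketch: read a file and gzip it into a byte array ready for a varbinary parameter.
    private static byte[] CompressFile(string path)
    {
        byte[] data = File.ReadAllBytes(path);                  // load the source file into memory
        using (MemoryStream ms = new MemoryStream())
        {
            using (GZipStream zip = new GZipStream(ms, CompressionMode.Compress, true))
            {
                zip.Write(data, 0, data.Length);
            }                                                    // disposing the GZipStream flushes the last block
            return ms.ToArray();                                 // copies the whole buffer, independent of Position
        }
    }

    The compressed array can then be passed to the @File parameter exactly as cmpData is above.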

    Read the article

  • How do I capture my second monitor using avconv?

    - by Codemonkey
    With this command:

    avconv -f x11grab -s 2560x1440 -i :0.0

    I can stream video from my main monitor. I also have a second, 1080p monitor on which I do my gaming, and I want to stream from that monitor. This doesn't work:

    avconv -f x11grab -s 1920x1080 -i :0.1

    I assume I have to use -i :0.0 and somehow specify that it should capture 1920x1080 pixels from X position 2560 and Y position 0 (my gaming monitor is placed to the right of my main monitor). Unfortunately the man page for avconv is miles long, so I haven't had any luck figuring this out on my own. I have tried using -vf with crop like this:

    -vcodec libx264 ... -vf "crop=$IN_WIDTH:$IN_HEIGHT:2560:360"

    But that only displayed 1080p video from the top-left corner of my main display.
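
    A sketch of what typically works with x11grab (assuming the virtual screen is laid out with the gaming monitor starting at x offset 2560, y offset 0): pass the grab offset as part of the input display string instead of cropping afterwards:

    avconv -f x11grab -s 1920x1080 -i :0.0+2560,0 -vcodec libx264 output.mkv

    The +X,Y suffix tells x11grab where on the virtual screen to start grabbing, so the full-size grab plus crop filter is not needed.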

    Read the article

  • configuring slime in emacs

    - by CodeKingPlusPlus
    I am in the process of configuring slime for emacs. So far I have read about basic functionality for Common Lisp, such as C-c C-q, which invokes the command slime-close-parens-at-point and places the proper number of parens at point. Another command that seemed cool was invoked by C-c C-c: it passes the code you are editing in a buffer to the REPL and "compiles" it. Why won't these commands work for me? I downloaded slime via M-x list-packages and do not seem to have this functionality (C-h w on any of these commands tells me that they do not exist). I then saw a bunch of other slime extensions such as slime-repl, slime-fuzzy and hippie-expand-slime, so I again used M-x list-packages and downloaded them. Still I did not have these commands. Here is the content of my emacs file relevant to slime:

    ;;;Common Lisp and Slime
    (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-20130626.1151")
    (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-repl-201000404")
    (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/hippie-expand-slime-20130226.1656")
    (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-fuzzy-20100404")
    (require 'slime)
    (setq slime-lisp-implementations
          `((sbcl ("/usr/bin/sbcl"))
            (ecl ("/usr/bin/ecl"))
            (clisp ("/usr/bin/clisp" "-q -I"))))
    (require 'slime-repl)
    (require 'slime-fuzzy)
    (require 'hippie-expand-slime)

    When I execute M-x slime I get the following message in the inferior-lisp buffer, where I can execute Common Lisp code (however, shouldn't this be the slime-repl, since I required it?):

    STYLE-WARNING: redefining EMACS-INSPECT (#<BUILT-IN-CLASS T>) in DEFMETHOD
    STYLE-WARNING: Implicitly creating new generic function STREAM-READ-CHAR-WILL-HANG-P.
    WARNING: These Swank interfaces are unimplemented:
     (DISASSEMBLE-FRAME SLDB-BREAK-AT-START SLDB-BREAK-ON-RETURN)
    ;; Swank started at port: 46533.

    Then a slime-error buffer is created with the contents:

    Invalid protocol message:
    Symbol "CREATE-REPL" not found in the SWANK package.
    Line: 1, Column: 28, File-Position: 28
    Stream: #<SB-IMPL::STRING-INPUT-STREAM {10056B9C33}>
    (:emacs-rex (swank:create-repl nil) "COMMON-LISP-USER" t 5)

    How should I modify my emacs file to give me the functionality of those commands? In my emacs file am I not loading the necessary files? Do I need to install an additional package? If you need more information let me know! All help is much appreciated!
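
    Worth noting (an assumption based on the directory names above rather than a verified diagnosis): the slime-repl and slime-fuzzy packages date from 2010 while slime itself is from 2013, and the "CREATE-REPL not found in the SWANK package" error is the classic symptom of the Emacs-side SLIME and the Lisp-side Swank being out of sync. A minimal sketch of a configuration that loads everything from a single SLIME checkout and enables its bundled contribs instead of mixing separately packaged extensions:

    ;; Sketch only: load one SLIME and its own contribs.
    ;; The load-path below is an assumption; point it at your actual SLIME directory.
    (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-20130626.1151")
    (require 'slime-autoloads)
    (setq slime-lisp-implementations '((sbcl ("/usr/bin/sbcl"))))
    (slime-setup '(slime-fancy))   ; slime-fancy includes the REPL, fuzzy completion, etc.

    With slime-setup in place, the old (require 'slime-repl) and (require 'slime-fuzzy) lines should not be needed.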

    Read the article

  • DVD playback with Windows Media Player 11 works fine, but when copied to HDD and then played back, t

    - by stakx
    I have several DVDs with short documentaries on them. Since the notebook I'm using (a Dell Latitude E6400) has only one DVD drive, and I might play back those short movies very often, I thought of copying them to the HDD and playing them back from there. However, I've run into a problem, namely stuttering audio.

    Problem description: When I play back these movies directly from DVD (with Windows Media Player 11 under Windows Vista), everything works fine: smooth video, no significant audio problems (only the occasional click). But as soon as I copy any of these DVDs to the HDD and try to play them back from there (e.g. using the wmpdvd://drive/title/chapter?contentdir=path protocol), I get stuttering audio: playback sounds like a machine gun for a third of a second or so, approximately every 8 seconds.

    I have tried converting the VOB files from the DVD to another format (i.e. ripping), but that resulted in a noticeable downgrade of picture quality. Therefore I thought it best to keep the files in their original format, if possible. Still, I suspect that the stuttering audio is due to some (de-)muxing problem, and that changing the file format might help. (After all, video playback is fine; therefore I don't think the hardware is too slow for playback.) The only thing is, I don't know how to convert the VOB files to another Windows Media Player-compatible format without quality loss. I hope someone can help me, or give me further pointers on things I could try to get HDD playback to work without the problem described.

    Some things I've tried so far, without any success:
    - VOB2MPG, in order to convert the .vob file to a .mpg file. But that changes only the A/V container, not the content; no re-encoding takes place at all.
    - Re-encoding with MPlayer/MEncoder. Lots of quality loss there, and I frankly haven't got the time to test all possible settings combinations available.
    - Disabling all plug-ins, equalizers, etc. in Windows Media Player.
    - Disabling all hardware acceleration on the audio playback device.

    Further info on the VOB files I'm trying to play back: the video format is MPEG ES, PAL 720x576 pixels @ 24/25 frames per second. The sound stream is uncompressed PCM, 16-bit stereo @ 48 kHz. (Might it help if I somehow re-encoded the sound stream at a lower resolution, or as an MP3? If so, how would I do this without changing the video stream?)

    P.S.: I am limited to using Windows Media Player (11). (I previously tried MPlayer btw., but the video playback quality was surprisingly bad.)
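
    On the last point (re-encoding only the audio while leaving the video stream untouched), a rough sketch with ffmpeg, assuming it is available and that VTS_01_1.VOB is one of the copied files, would be to copy the video stream and transcode the PCM track to MP2, which the MPEG-PS container supports:

    ffmpeg -i VTS_01_1.VOB -c:v copy -c:a mp2 -b:a 224k output.mpg

    Whether Windows Media Player 11 then plays the result smoothly is a separate question, but this keeps the video bit-identical and only shrinks the audio track.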

    Read the article

  • Artefacts during HDV capture

    - by Jakub Konecki
    When I try to capture an HDV video stream from a Canon HV20 camera to a Dell Studio 1558 (Intel i7 720qm) laptop running Win 7 Home Premium x64, the image shows very annoying noise: it contains block artefacts. Something is wrong with the FireWire controller (Ricoh 1394 OHCI host controller, built-in). I tried changing the driver to the legacy one, to no avail. This does not happen with a DV stream. I'm using a Belkin cable. The capture works OK on another machine.

    Read the article
