Search Results

Search found 8762 results on 351 pages for 'flash hash'.


  • flash player: usage

    - by Syom
    I use the Adobe Flash player on my site, and now I need to increment a field in the database when a user clicks on the player. Here is the embed script: <div id="conteiner" style="text-align: center;" ></div> <script type="text/javascript"> var s1 = new SWFObject("player.swf","ply","420","380","9","#FFFFFF"); s1.addParam("allowfullscreen","true"); s1.addParam("allowscriptaccess","always"); s1.addParam("flashvars","file=<?=$video ?>"); s1.write("conteiner"); </script> I decided to use AJAX for this, but how can I hook a function into the Flash object? Thanks in advance. UPDATE: I only have the swfobject.js file, which contains data like this: if(typeof deconcept=="undefined"){var deconcept=new Object();} if(typeof deconcept.util=="undefined"){deconcept.util=new Object();} if(typeof deconcept.SWFObjectUtil=="undefined"){deconcept.SWFObjectUtil=new Object();}deconcept.SWFObject=function(_1,id,w,h,_5,c,_7,_8,_9,_a){if(!document.getElementById){return;}this.DETECT_KEY=_a?_a:"detectflash";this.skipDetect=deconcept.util.getRequestParameter(this.DETECT_KEY);this.params=new Object();this.variables=new Object();this.attributes=new Array();if(_1){this.setAttribute("swf",_1);}if(id){this.setAttribute("id",id);}if(w){this.setAttribute("width",w);}if(h){this.setAttribute("height",h);}if(_5){this.setAttribute("version",new deconcept.PlayerVersion(_5.toString().split(".")));}this.installedVer=deconcept.SWFObjectUtil.getPlayerVersion();if(!window.opera&&document.all&&this.installedVer.major>7) ... plus player.swf and the HTML I've already shown. I don't know whether this is even the Flash player itself, so what can I do?
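
    A hedged sketch of one way to wire this up, assuming you can rebuild player.swf (or get its author to): the SWF listens for clicks and calls a page-level JavaScript function through ExternalInterface, and that JavaScript function fires the AJAX request that increments the database field. The function name incrementPlayCount is an assumption, not something in the embed above; note the embed already sets allowscriptaccess to "always", which ExternalInterface needs. If the SWF cannot be changed at all, a transparent HTML element positioned over the player with a JavaScript click handler is the usual fallback.

      // Sketch for the player SWF's document class (AS3).
      // incrementPlayCount is a hypothetical JS function defined on the page.
      package {
          import flash.display.Sprite;
          import flash.events.MouseEvent;
          import flash.external.ExternalInterface;

          public class Player extends Sprite {
              public function Player() {
                  stage.addEventListener(MouseEvent.CLICK, onClick);
              }
              private function onClick(e:MouseEvent):void {
                  if (ExternalInterface.available) {
                      // The page-side function sends the AJAX request
                      // that increments the play count in the database.
                      ExternalInterface.call("incrementPlayCount");
                  }
              }
          }
      }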

    Read the article

  • Flash video slooow in AIR 2 HTMLLoader component

    - by shane
    I am working on a full-screen kiosk application in Flex 4/AIR 2 using Flash Builder 4. We have a company training website which staff can access via the kiosk, and the main content is interactive Flash training videos. Our target machines are by no means 'beefy': they are Atom N270s @ 1.6GHz with 1GB RAM. As it stands the videos are all but unusable from within the AIR application; it becomes completely unresponsive (100% CPU usage, click events take approx 5-10 seconds to register). So far I have tried: increasing the default frame rate from 24fps to 60 (nativeWindow.stage.frameRate = 60;) -- no improvement; running the videos in a stripped-down version of my app, just a full-screen HTMLLoader component pointed at the training website -- no better than before; disabling hyper-threading -- the Atom CPU is split into two virtual cores, and the AIR app was only able to use one thread, so it maxed out at 50% CPU usage. Since the kiosk will only run the AIR app I am happy to lose hyper-threading to increase the performance of the AIR app -- marginal improvement. The same website with the same videos is responsive if viewed in IE7 on the same machine, although Internet Explorer takes advantage of the CPU's hyper-threading. The Flash videos are built with Adobe Captivate and, from what I understand, employ JavaScript to relay results back to the server. I will add more information about the video content ASAP, as the training guru is back in the office later this week.

    Read the article

  • html5 vs flash - full comparison chart anywhere?

    - by iddqd
    So since Steve Jobs said Flash sucks and implied that HTML5 can do everything Flash can without the need for a plugin, I keep hearing those exact words from a lot of people. I would really like to have a chart somewhere (similar to http://en.wikipedia.org/wiki/Comparison_of_layout_engines_%28HTML5%29#Form_elements_and_attributes ) that I can just show to those people, showing all the little things that Flash can do right now that HTML5/Ajax/CSS is not yet even thinking about -- but of course also the things that HTML5 does better. I would like to see details compared like audio playback, realtime audio processing, byte-level access, bitmap data manipulation, webcam access, binary sockets, stuff in the works such as P2P technology (Adobe Stratus), and all the stuff I don't know about myself. Ideally with examples of what can be accomplished with, let's say, binary sockets (such as a POP3 client), because otherwise it won't mean a lot to non-programmers; they will just say "well, we can do without binary sockets". And ideally with some current benchmarks and some examples of websites that use this technology. I've searched the web and am surprised not to find anything. So is there such a comparison somewhere? Or does anybody want to create this and post it to Wikipedia? ;-)

    Read the article

  • Trouble creating a Java policy server for a simple Flash app

    - by simonwulf
    I'm trying to create a simple Flash chat application for educational purposes, but I'm stuck trying to send a policy file from my Java server to the Flash app (after several hours of googling with little luck). The policy file request reaches the server, which sends a hardcoded policy XML back to the app, but the Flash app doesn't seem to react to it at all until it gives me a security sandbox error. I'm loading the policy file using the following code in the client: Security.loadPolicyFile("xmlsocket://myhostname:" + PORT); The server recognizes the request as "<policy-file-request/>" and responds by sending the following XML string to the client: public static final String POLICY_XML = "<?xml version=\"1.0\"?>" + "<cross-domain-policy>" + "<allow-access-from domain=\"*\" to-ports=\"*\" />" + "</cross-domain-policy>"; The code used to send it looks like this: try { _dataOut.write(PolicyServer.POLICY_XML + (char)0x00); _dataOut.flush(); System.out.println("Policy sent to client: " + PolicyServer.POLICY_XML); } catch (Exception e) { trace(e); } Did I mess something up with the XML, or is there something else I might have overlooked?

    Read the article

  • Flash AS3 load file xml

    - by Elias
    Hello, I'm just trying to load an XML file which can be anywhere on the hard drive. This is what I have done to browse for it, but when I later try to load the file, it will only look in the same path as the SWF file. Here is the code: package { import flash.display.Sprite; import flash.events.*; import flash.net.*; public class cargadorXML extends Sprite { public var cuadro:Sprite = new Sprite(); public var file:FileReference; public var req:URLRequest; public var xml:XML; public var xmlLoader:URLLoader = new URLLoader(); public function cargadorXML() { cuadro.graphics.beginFill(0xFF0000); cuadro.graphics.drawRoundRect(0,0,100,100,10); cuadro.graphics.endFill(); cuadro.addEventListener(MouseEvent.CLICK,browser); addChild(cuadro); } public function browser(e:Event) { file = new FileReference(); file.addEventListener(Event.SELECT,bien); file.browse(); } public function bien(e:Event) { xmlLoader.addEventListener(Event.COMPLETE, loadXML); req=new URLRequest(file.name); xmlLoader.load(req); } public function loadXML(e:Event) { xml=new XML(e.target.data); //xml.name=file.name; trace(xml); } } } When I open an XML file that isn't in the same directory as the SWF, it gives me a file-not-found error. Is there anything I can do? Because, for example, for MP3 there is a special class for loading the file, see http://www.flexiblefactory.co.uk/flexible/?p=46 Thanks
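
    A hedged sketch of the usual fix, assuming Flash Player 10 or later: instead of building a URLRequest from file.name (which resolves relative to the SWF, hence the file-not-found error), let the FileReference itself read the bytes with its load() method, then parse file.data once Event.COMPLETE fires.

      // Sketch: browse for an XML file anywhere on disk and parse it in place.
      package {
          import flash.display.Sprite;
          import flash.events.Event;
          import flash.events.MouseEvent;
          import flash.net.FileReference;

          public class CargadorXML extends Sprite {
              private var file:FileReference;

              public function CargadorXML() {
                  stage.addEventListener(MouseEvent.CLICK, browse);
              }
              private function browse(e:MouseEvent):void {
                  file = new FileReference();
                  file.addEventListener(Event.SELECT, onSelect);
                  file.browse();
              }
              private function onSelect(e:Event):void {
                  file.addEventListener(Event.COMPLETE, onLoaded);
                  file.load(); // reads the file's bytes directly; no URL involved
              }
              private function onLoaded(e:Event):void {
                  // file.data is a ByteArray holding the file contents
                  var xml:XML = new XML(file.data.readUTFBytes(file.data.length));
                  trace(xml);
              }
          }
      }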

    Read the article

  • TypeError: Error #1009 - (Null reference error) With Flash.

    - by Wind Chimez
    I am not an expert in Flash, but I do work with AS and tweak Flash projects, though without deep expertise in it. Currently I need to revamp a Flash website done by another guy, and the code base given to me throws the following error upon execution: "--- TypeError: Error #1009: Cannot access a property or method of a null object reference. at NewSite_fla::MainTimeline/__setProp_ContactOutP1_ContactOut_Contents_0() at NewSite_fla::MainTimeline/frame1() --" The structure of the project is like this: the different sections are split into different movie clips. There is no single main timeline, but click actions on different areas of separate movie clips move between them. All the AS event-handling logic is written inline in the FLA; no separate document class exists. The preloader movie clip is the first one that gets loaded. As I understand it, the error is thrown right at startup, and it is not caused by any inline ActionScript logic, because it is thrown even before the first inline AS code is reached. I am not able to figure out what exactly causes the problem, or where to resolve it. I am really stuck at this point. Any help will be great. I have not seen the particular solution I am looking for anywhere yet, though Error #1009 is common. I uploaded the FLA here ( http://tinyurl.com/249e95p ) for reference. It may not run, since the included/referred video files and so on are missing, but reviewing the ActionScript/movie clips will be possible. Please let me know if somebody is able to trace the issue exactly.
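
    The stack trace points at auto-generated timeline code (__setProp_...) that runs on frame 1 and touches an instance that does not exist yet. You cannot wrap that generated setter, but two approaches are commonly reported to help: converting the TLF text boxes involved to Classic Text (TLF combined with a custom preloader is a frequently reported trigger for #1009 in CS4/CS5), or recreating the offending property assignments in your own frame script and deferring them, as in the hedged sketch below. ContactOut_Contents is a guess taken from the stack trace, and initContacts is a hypothetical name.

      // Frame 1 (sketch): only touch ContactOut_Contents once it exists.
      import flash.events.Event;

      if (ContactOut_Contents != null) {
          initContacts();
      } else {
          addEventListener(Event.FRAME_CONSTRUCTED, onConstructed);
      }

      function onConstructed(e:Event):void {
          removeEventListener(Event.FRAME_CONSTRUCTED, onConstructed);
          initContacts();
      }

      function initContacts():void {
          // move the property assignments that currently throw #1009 here
      }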

    Read the article

  • Using Flash comboboxes with Jaws

    - by Zoe Gagnon
    I'm working on a project for a government agency which requires 508 compliance. Our product is written for Flash 10 in ActionScript 3 using Flash CS4. We are doing this 100% programmatically. We have almost all of the elements working properly, but we have a problem when accessing combobox components. The combobox can be tabbed to directly with no problem, and the drop-down can be navigated directly with the arrow keys. However, when navigating, it reads the last item in the dropdown, not the current one. For example, consider a combobox with the list of selections: first, second. JAWS reads the prompt fine, but when we press the down arrow to select the first item, it reads nothing. Pressing the down arrow again (so "second" is selected) causes it to read "first". Pressing down a final time causes it to read "second". I am completely baffled by this, and it is as likely that we don't know how to use JAWS as that Flash simply can't support this function properly. If you have any suggestions for how we can resolve this, I would really appreciate it.
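
    Not a known cure for this specific JAWS lag, but one avenue worth ruling out (a sketch, with fl.controls.ComboBox assumed): give the control an explicit accessible name, and flush changes to the screen reader with Accessibility.updateProperties() whenever the control is created or its contents change, since MSAA clients like JAWS only see what the player has pushed out.

      import fl.controls.ComboBox;
      import flash.accessibility.Accessibility;
      import flash.accessibility.AccessibilityProperties;
      import flash.system.Capabilities;

      var combo:ComboBox = new ComboBox();
      combo.addItem({label: "first"});
      combo.addItem({label: "second"});

      var props:AccessibilityProperties = new AccessibilityProperties();
      props.name = "Selection list"; // what the screen reader should announce
      combo.accessibilityProperties = props;
      addChild(combo);

      // Push the accessibility tree changes out to MSAA clients such as JAWS.
      if (Capabilities.hasAccessibility) {
          Accessibility.updateProperties();
      }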

    Read the article

  • Overlapping 2 Flash objects and controlling z-index

    - by Magnus Smith
    I have two Flash objects on a webpage (call them A and B), and they overlap so one partially obscures the other. I don't seem to have any control over the z-index to force B in front of A. Whatever I try, A always 'wins' and stays on top! I have read many people's posts about problems getting HTML to show over the top of Flash... but nothing about when your two overlapping items are both Flash objects. I have tried various combinations of wmode=opaque/transparent/window. I have tried CSS position:absolute/relative and z-index:0/999. I have tried placing the HTML sections in a different order. The problem is the same in IE6 and Firefox 2.0. I do not want to use jQuery in this case. In my particular situation B must have position:absolute and wmode=transparent, and sit above A. A needs relative positioning, and transparency is not required. However, I have been testing without these restrictions, and I still have no control over the overlap. Are some SWFs (ours are adverts sent by clients) created in such a way as to override any code control of z-index? Thanks for any advice you can give.

    Read the article

  • Loading Flash / Flex Applications in another AIR Application

    - by t.stamm
    Hi there, I'm loading a Flex application in my AIR app and I'm using the childSandboxBridge and parentSandboxBridge to communicate between the two. Works like a charm. But I've tried to load a Flash application (the main class extends Sprite, not Application), and there I get a SecurityError when trying to set the childSandboxBridge on the loaderInfo object. In the Flex app it's like this (I'm casting the loaderInfo since the childSandboxBridge property is only available in AIR): loaderInfo = FlexGlobals.topLevelApplication.systemManager.loaderInfo; try { Object(loaderInfo).childSandboxBridge = this; } catch(e:Error) { ... } In my Flash app it's like this: loaderInfo = myMainObject.loaderInfo; // myMainObject is the same class as 'root' try { Object(loaderInfo).childSandboxBridge = this; } catch(e:Error) { ... } In the latter example I get the following SecurityError: Error #3206: Caller app:/airapp.swf/[[DYNAMIC]]/1 cannot set LoaderInfo property childSandboxBridge. The SecuritySandbox for both examples is 'application'. Any ideas why it doesn't work with the Flash app? Thanks in advance.

    Read the article

  • FileReference.browse() stops playback on some Flash Players

    - by Christophe Herreman
    We have an issue where the server session associated with a Flex client times out when the browse-file dialog is open for longer than the configured session timeout. It seems that on some players, playback is stopped while browse or download on a FileReference is executing. This also causes remote calls to be blocked, and hence our manual keep-alive messages are not sent to the server, resulting in a session timeout. I searched for some info on this in the docs and found a notice of it, but it does not explicitly list the players in which it does (not) work. Would anyone know where I could find a complete list? PS: here are the links that mention this behavior: http://livedocs.adobe.com/flex/3/html/help.html?content=17_Networking_and_communications_7.html "While calls to the FileReference.browse(), FileReferenceList.browse(), or FileReference.download() method are executing, most players will continue SWF file playback." http://livedocs.adobe.com/flex/3/langref/flash/net/FileReference.html "While calls to the FileReference.browse(), FileReferenceList.browse(), or FileReference.download() methods are executing, SWF file playback pauses in stand-alone and external versions of Flash Player and in AIR for Linux and Mac OS X 10.1 and earlier." Does anyone know what is meant by an "external Flash Player"? PPS: we tested this on Linux (10.0.x and 10.1.x) in Firefox, where it seems to stop playback, and on Windows (10.0.x) in IE, where playback seems to continue.
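
    A hedged workaround sketch (the endpoint name is made up): since playback, and therefore any Timer-driven ping, can pause while the dialog is open, extend the server session immediately before calling browse(), so the timeout clock restarts with the dialog already open. This only buys one extra timeout interval, so it helps most when users merely need a little longer than the configured timeout; otherwise, lengthening the timeout or re-authenticating after browse() returns may be the only options.

      import flash.net.FileReference;
      import flash.net.URLLoader;
      import flash.net.URLRequest;

      var fileRef:FileReference = new FileReference();

      // Must be invoked from a user-event handler (a browse() requirement
      // since Flash Player 10).
      function pickFile():void {
          // Hypothetical endpoint that resets the server-side session timeout.
          var ping:URLLoader = new URLLoader();
          ping.load(new URLRequest("/session/keepalive"));
          fileRef.browse();
      }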

    Read the article

  • HTML5 or Flash?

    - by lewiguez
    I have to write a web application for a client soon. Looking at the specs, there is no reason why the project couldn't be an HTML5/CSS/JavaScript project, but the client is arguing that it has to be Flash. The project has a number of dynamic elements and is web-based. It'll only be used in-house by a small number of people, and all of those people use either Google Chrome or Safari 4. They are all pretty tech-savvy to boot. My question is this: what are some of the reasons (preferably technical, since this is Stack Overflow) I can present to my client as to why HTML5 is better than Flash (that's assuming I'm right and it is in this case)? Is it OK to use HTML5 even though it's still a draft spec (I'm assuming it is, after checking out all those Apple HTML5 demos a few days ago)? Also, would a hybrid approach be preferable for now? Something that uses Flash wherever the canvas object would've been used in the HTML5 approach and that conforms to a normal XHTML approach. Help!

    Read the article

  • How can Flash call a jQuery function in its events

    - by user2955639
    I want jQuery to do something during certain events while an audio file is playing, so I'm writing a function like this: <script> $.fn.playMedia = function(options){ var opts = $.extend({}, { swfSrc: '', timeUpdated: function(currentTime){}, startPlay: function(){}, endPlay: function(){} }, options); return $(this).each(function(){ // call flash to play the media whose src is opts.swfSrc // Is it possible for the flash movie to call the JS functions (opts.timeUpdated, opts.startPlay and opts.endPlay) each time the corresponding event is triggered? }); }; </script> // Usage <div id="player"></div> <script> $('#player').playMedia({ swfSrc: '/path/song.mp3', timeUpdated: function(currentTime){ console.log(currentTime); } }); </script> I'm a total layman when it comes to Flash; I'm just guessing this could work. I hope someone can tell me how to make a SWF file for this jQuery function -- or is there an existing jQuery plugin that does this but whose appearance can be redesigned flexibly? Thank you very much!
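
    This is exactly what ExternalInterface is for: a SWF can call page-level JavaScript functions by name. Below is a hedged sketch of the ActionScript 3 inside such a player SWF; the names playerStarted, playerTimeUpdated and playerEnded are assumptions, and the plugin would define them globally (e.g. window.playerTimeUpdated = opts.timeUpdated) so the SWF can reach them. The MP3 URL is assumed to arrive via flashvars as "file". Before building your own, it may be worth looking at jPlayer, an existing jQuery plugin built around this same Flash-to-JS bridge whose HTML skin can be restyled freely.

      // Sketch of the player SWF (AS3); the src is passed via flashvars.
      package {
          import flash.display.Sprite;
          import flash.events.Event;
          import flash.events.TimerEvent;
          import flash.external.ExternalInterface;
          import flash.media.Sound;
          import flash.media.SoundChannel;
          import flash.net.URLRequest;
          import flash.utils.Timer;

          public class Mp3Player extends Sprite {
              private var channel:SoundChannel;
              private var ticker:Timer = new Timer(250); // 4 updates per second

              public function Mp3Player() {
                  var src:String = loaderInfo.parameters.file;
                  var sound:Sound = new Sound(new URLRequest(src)); // streams as it loads
                  channel = sound.play();
                  ExternalInterface.call("playerStarted");
                  channel.addEventListener(Event.SOUND_COMPLETE, onComplete);
                  ticker.addEventListener(TimerEvent.TIMER, onTick);
                  ticker.start();
              }
              private function onTick(e:TimerEvent):void {
                  // channel.position is the playhead position in milliseconds.
                  ExternalInterface.call("playerTimeUpdated", channel.position / 1000);
              }
              private function onComplete(e:Event):void {
                  ticker.stop();
                  ExternalInterface.call("playerEnded");
              }
          }
      }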

    Read the article

  • Flash automatically names objects on stage "instance#"

    - by meowMIX3R
    Hi, I have 2 TLF text boxes already placed on my main stage. In the Property inspector I give these the instance names "txt1" and "txt2". I am trying to have a single event handler and figure out which text box the event occurred on. My document class has the following code: package { import flash.display.Sprite; import flash.events.KeyboardEvent; public class SingleEvent extends Sprite{ public function SingleEvent() { // constructor code root.addEventListener(KeyboardEvent.KEY_UP, textChanged, false, 0, true); } private function textChanged(e:KeyboardEvent){ trace(e.target.name); trace(" " + e.target); switch(e.target){ case txt1: trace("txt1 is active"); break; case txt2: trace("txt2 is active"); break; default: break; } } } } Example output is: instance15 [object Sprite] instance21 [object Sprite] Since the objects are already on the stage, I am not sure how to get Flash to recognize them as "txt1" and "txt2" instead of "instance#". I tried setting the .name property, but it had no effect. In the publish settings, I have "Automatically declare stage instances" checked. Also, is it possible to have a single change event for multiple slider components? The following never fires: root.addEventListener(SliderEvent.CHANGE, sliderChanged, false, 0, true); Thanks for any tips
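
    What is probably happening (a hedged explanation): a TLF text box compiles into an auto-named wrapper Sprite, so the target of a bubbling event is the wrapper ("instance15"), not the instance you named. A sketch of a workaround: attach the listener to each named instance directly and branch on e.currentTarget, which is always the object the listener was added to. The same directly-attached pattern is worth trying for the sliders, since SliderEvent.CHANGE may simply not bubble up to root.

      // Sketch: inside the document class, listen on the named instances.
      public function SingleEvent() {
          txt1.addEventListener(KeyboardEvent.KEY_UP, textChanged, false, 0, true);
          txt2.addEventListener(KeyboardEvent.KEY_UP, textChanged, false, 0, true);
      }

      private function textChanged(e:KeyboardEvent):void {
          switch (e.currentTarget) { // currentTarget, not target
              case txt1: trace("txt1 is active"); break;
              case txt2: trace("txt2 is active"); break;
          }
      }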

    Read the article

  • Varnish does not recognize req.hash

    - by Yogesh
    I have Varnish 3.0.2 on Red Hat, and service varnish start fails after I added a vcl_hash section. I started varnishd and then loaded the VCL using vcl.load: vcl.load default default.vcl Message from VCC-compiler: Unknown variable 'req.hash' At: ('input' Line 24 Pos 9) set req.hash += req.url; --------########------------ Running VCC-compiler failed, exit 1 cat default.vcl backend default { .host = "127.0.0.1"; .port = "8080"; } sub vcl_recv { if( req.url ~ "\.(css|js|jpg|jpeg|png|swf|ico|gif|jsp)$" ) { unset req.http.cookie; } } sub vcl_hash { set req.hash += req.url; set req.hash += req.http.host; if( req.httpCookie == "JSESSIONID" ) { set req.http.X-Varnish-Hashed-On = regsub( req.http.Cookie, "^.*?JSESSIONID=([a-zA-z0-9]{32}\.[a-zA-Z0-9]+)([\s$\n])*.*?$", "\1" ); set req.hash += req.http.X-Varnish-Hashed-On; } return(hash); } What could be wrong?

    Read the article

  • Huge AS 3.0 Export Frame

    - by goorj
    I am creating a very simple Flash animation with no code or complex effects, just text and simple tweens (Flash CS5), but I have problems reducing the size of my SWF. From the generated size report, it looks like it has to do with fonts and/or exported ActionScript classes. The frame with AS 3.0 classes is over 100K, and even though I am only using one font, the same characters are embedded/exported multiple times. My questions are: Does embedding mixed TLF/Classic text (or mixing other text properties: spacing, kerning, etc.) require the same characters to be embedded twice? Do text transformations on TLF text (rotation and other transformations not available in classic text) require embedding of ("internal") AS3 classes that increase the size of the SWF, even though none of these classes are explicitly used by me (there are no scripts in the FLA project)? I have tried removing all the text instances one by one, and at one point the SWF is reduced to only 5-6K, but I am not able to pinpoint exactly what causes the ballooning of the SWF.

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); The two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collocated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Find a Hash Collision, Win $100

    - by Mike C
    Margarity Kerns recently published a very nice article at SQL Server Central on using hash functions to detect changes in rows during the data warehouse load ETL process. On the discussion page for the article I noticed a lot of the same old arguments against using hash functions to detect change. After having this same discussion several times over the past several months in public and private forums, I've decided to see if we can't put this argument to rest for a while. To that end I'm going to...(read more)

    Read the article

  • Problem with WebUpd8 PPA: Hash Sum mismatch

    - by jiewmeng
    I keep getting W: Failed to fetch bzip2:/var/lib/apt/lists/partial/ppa.launchpad.net_webupd8team_gnome3_ubuntu_dists_oneiric_main_binary-i386_Packages Hash Sum mismatch W: Failed to fetch http://ppa.launchpad.net/webupd8team/gnome3/ubuntu/dists/oneiric/main/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/ppa.launchpad.net_webupd8team_gnome3_ubuntu_dists_oneiric_main_i18n_Index E: Some index files failed to download. They have been ignored, or old ones used instead. How might I fix this? I tried deleting the files in /var/lib/apt/lists/partial already... still doesn't work.

    Read the article

  • DIY $2 Flash Diffuser [Video]

    - by Jason Fitzpatrick
    This simple flash diffuser is cheap to make, easy to assemble, and offers a wide surface area to bounce your flash. The video comes to us courtesy of StepByStep Photography, building off a design by Chuck Gardner (follow the link below to read Chuck's full tutorial and learn a bit about using a bounce flash). DIY Diffuser for Hot-Shoe Flash [via DIY Photography]

    Read the article

  • Oracle Support customers take note: My Oracle Support Flash is set to retire

    - by user12244613
    Take Action – My Oracle Support Flash User Interface Set to Retire On July 13, 2012, Oracle plans to upgrade the HTML interface with additional functionality that will allow those users still remaining on the Flash-based interface to switch over to the HTML version. Although the Flash-based user interface will remain available for a brief period following the upgrade, we encourage you to begin using the new HTML version sooner. Find out when you should make the switch! Read complete communication to Flash users

    Read the article

  • Hash Sum mismatch on python-keyring

    - by Gearoid Murphy
    I came in to my workstation this morning to find an apt error notification relating to a hash sum mismatch on the python-keyring password storage mechanism. Given the sensitive nature of this package, this gives me some cause for concern. Has anyone else seen this error? How can I ensure that my system has not been compromised? Failed to fetch http://gb.archive.ubuntu.com/ubuntu/pool/main/p/python-keyring/python-keyring_0.9.2-0ubuntu0.12.04.2_all.deb Hash Sum mismatch Xubuntu 11.04 AMD64

    Read the article

  • Flash ActiveX: How to Load Movie from memory or resource or stream?

    - by yuku
    I'm embedding a Flash ActiveX control in my C++ app (Flash.ocx, Flash10a.ocx, etc., depending on your Flash version). I can load an SWF file by calling LoadMovie(0, filename), but the file needs to physically reside on the disk. How can I load the SWF from memory (or a resource, or a stream)? I'm sure there must be a way, because commercial solutions like f-in-box, whose features include loading Flash movies directly from memory, also use the Flash ActiveX control.

    Read the article

  • flashplugin installer works for firefox but not chromium-browser

    - by user829755
    I'm on Ubuntu precise (12.04.5 LTS) and I do have flashplugin-installer installed. When I click on Flash videos in chromium-browser I get "An error occurred, please try again later". In Firefox they display fine. I opened chrome://plugins to see if the Flash plugin is installed, and indeed it's not listed. Also, http://www.codegeek.net/flash-version.php displays "You do not have Flash player installed", while with Firefox it says "You have Flash player 11.2.202 installed". Following instructions on an Ubuntu page, I copied the plugin using sudo cp /usr/lib/flashplugin-installer/libflashplayer.so /usr/lib/chromium-browser/plugins/ and I started Chromium using chromium-browser --enable-plugins, but this didn't help. chrome://chrome/ says: Version 36.0.1985.125 Ubuntu 12.04 (283153). What else can I try?

    Read the article

  • Can't see YouTube videos in either Firefox or Chromium

    - by williepabon
    Don't know why or if it is related, but after upgrading the Ubuntu 10.04 kernel (I guess from 2.6.32-38 to -42) I can't see YouTube videos. I checked the Flash plug-ins installed and I have: (1) flashplugin-installer, (2) flashplugin-nonfree and (3) flashplugin-nonfree-extrasound. I haven't reinstalled the browsers yet. Here's the terminal output listing the plugins installed: williepabon@raquel-desktop:~$ sudo lsb_release -a; uname -a; dpkg -l | egrep 'flash|gnash|swf|spark' [sudo] password for williepabon: No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 10.04.4 LTS Release: 10.04 Codename: lucid Linux raquel-desktop 2.6.32-42-generic #95-Ubuntu SMP Wed Jul 25 15:57:54 UTC 2012 i686 GNU/Linux ii flashplugin-installer 11.2.202.238ubuntu0.10.04.1 Adobe Flash Player plugin installer ii flashplugin-nonfree 11.2.202.238ubuntu0.10.04.1 Adobe Flash Player plugin installer (transit ii flashplugin-nonfree-extrasound 0.0.svn2431-3 Adobe Flash Player platform support library Any ideas to solve this issue? Thanks.

    Read the article

  • is requiring a video player download acceptable

    - by wantTheBest
    Our site is currently going to require our users to download a player to view videos they will want to view on our site. The videos get uploaded by users from various sources (smartphones in 3gp format, for example). However, most people already have Flash on their machines. I am trying to 'make a gentle stand' and tell the team that requiring a download of a video player is not acceptable. My thinking is this: instead of allowing people to upload 3gp and other formats and then re-serving the exact same format on requests from our site's users, we will use a video converter such as FFmpeg to convert every uploaded video to FLV for viewing in Flash. So when a user requests one of the videos on our site -- boom, they probably already have Flash installed, so we just play the video in their Flash player. I feel serving up FLV Flash video is best. Does it ring true that requiring, say, a 3gp player download just to view a video is the wrong approach?

    Read the article
