Search Results

Search found 9217 results on 369 pages for 'cross apply'.


  • Mac app to apply certain settings based on selected location (profile)?

    - by Jonathan
    When my MacBook is connected to my Thunderbolt Display, I want WiFi to be turned off, Bluetooth to be turned on and the screen brightness to be at max. On the road I want WiFi to be turned on, Bluetooth to be turned off and the screen brightness to be somewhere in the middle. It would be easy if there was an application that would let me create several profiles (like 'Home' and 'On the road'), which I could easily select (from Dashboard or the menu bar). The profile settings would then be applied instantly. Does an application like this exist?

    Read the article

  • Apply rewrite rule for all but the files (recursive) in a subdirectory?

    - by user784637
    I have an .htaccess file in the root of the website that looks like this: RewriteRule ^some-blog-post-title/ http://website/read/flowers/a-new-title-for-this-post/ [R=301,L] RewriteRule ^some-blog-post-title2/ http://website/read/flowers/a-new-title-for-this-post2/ [R=301,L] <IfModule mod_rewrite.c> RewriteEngine On ## Redirects for all pages except for files in wp-content to website/read RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !/wp-content RewriteRule ^(.*)$ http://website/read/$1 [L,QSA] #RewriteRule ^http://website/read [R=301,L] RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> My intent is: (1) redirect people to the new blog post location if they request one of those specific blog posts; (2) otherwise redirect them to http://website.com/read; (3) redirect nothing from http://website.com/wp-content/*. So far conditions 1 and 3 are being met. How can I meet condition 2?
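
    A hedged sketch of one way to lay out all three rules (assuming standard mod_rewrite semantics; the anchored ^/wp-content/ pattern and the explicit R=301 flag on the catch-all are additions, not from the question):

        RewriteEngine On

        # (1) Specific post redirects run first and stop processing [L]
        RewriteRule ^some-blog-post-title/ http://website/read/flowers/a-new-title-for-this-post/ [R=301,L]
        RewriteRule ^some-blog-post-title2/ http://website/read/flowers/a-new-title-for-this-post2/ [R=301,L]

        # (3) Never touch wp-content; (2) send everything else that is not a
        # real file or directory to the new blog root. Note that RewriteCond
        # lines apply only to the single RewriteRule that follows them.
        RewriteCond %{REQUEST_URI} !^/wp-content/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ http://website/read/$1 [R=301,L,QSA]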

    Read the article

  • WPF: How to data bind an object to an element and apply a data template through code?

    - by Boris
    I have created a class LeagueMatch. public class LeagueMatch { public string Home { get { return home; } set { home = value; } } public string Visitor { get { return visitor; } set { visitor = value; } } private string home; private string visitor; public LeagueMatch() { } } Next, I have defined a DataTemplate resource for LeagueMatch objects in XAML: <DataTemplate x:Key="MatchTemplate" DataType="{x:Type entities:LeagueMatch}"> <Grid Name="MatchGrid" Width="140" Height="50" Margin="5"> <Grid.RowDefinitions> <RowDefinition Height="*" /> <RowDefinition Height="*" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <TextBlock Grid.Row="0" Text="{Binding Home}" /> <TextBlock Grid.Row="1" Text="- vs -" /> <TextBlock Grid.Row="2" Text="{Binding Visitor}" /> </Grid> </DataTemplate> In the code-behind, I want to create a ContentPresenter object, set its data binding to a previously initialized LeagueMatch object, and apply the defined data template. How is that supposed to be done? Thanks.
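
    A minimal sketch of one way to do this (assuming the code runs in the window's code-behind so FindResource can see the template; the match values and the 'somePanel' container are illustrative, not from the question):

        // Build the data object and a ContentPresenter that displays it
        // through the XAML-defined template.
        LeagueMatch match = new LeagueMatch { Home = "Lions", Visitor = "Tigers" };

        ContentPresenter presenter = new ContentPresenter
        {
            Content = match,
            ContentTemplate = (DataTemplate)FindResource("MatchTemplate")
        };

        somePanel.Children.Add(presenter);  // any Panel already in the visual tree

    Setting Content directly avoids an explicit Binding; if the presenter should track a property instead, BindingOperations.SetBinding(presenter, ContentPresenter.ContentProperty, new Binding(...)) does the same job.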

    Read the article

  • Apply a recursive CTE on grouped table rows (SQL Server 2005).

    - by Evan V.
    Hi all, I have a table (ROOMUSAGE) containing the times people check in and out of rooms, grouped by PERSONKEY and ROOMKEY. It looks like this:

        PERSONKEY | ROOMKEY | CHECKIN         | CHECKOUT        | ROW
        ----------------------------------------------------------------
        1         | 8       | 13-4-2010 10:00 | 13-4-2010 11:00 | 1
        1         | 8       | 13-4-2010 08:00 | 13-4-2010 09:00 | 2
        1         | 1       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1
        1         | 1       | 13-4-2010 14:00 | 13-4-2010 15:00 | 2
        1         | 1       | 13-4-2010 13:00 | 13-4-2010 14:00 | 3
        13        | 2       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1
        13        | 2       | 13-4-2010 15:00 | 13-4-2010 16:00 | 2

    I want to select just the consecutive rows for each PERSONKEY, ROOMKEY grouping. So the desired resulting table is:

        PERSONKEY | ROOMKEY | CHECKIN         | CHECKOUT        | ROW
        ----------------------------------------------------------------
        1         | 8       | 13-4-2010 10:00 | 13-4-2010 11:00 | 1
        1         | 1       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1
        1         | 1       | 13-4-2010 14:00 | 13-4-2010 15:00 | 2
        1         | 1       | 13-4-2010 13:00 | 13-4-2010 14:00 | 3
        13        | 2       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1

    I want to avoid using cursors so I thought I would use a recursive CTE. Here is what I came up with: ;with CTE (PERSONKEY, ROOMKEY, CHECKIN, CHECKOUT, ROW) as (select RU.PERSONKEY, RU.ROOMKEY, RU.CHECKIN, RU.CHECKOUT, RU.ROW from ROOMUSAGE RU where RU.ROW = 1 union all select RU.PERSONKEY, RU.ROOMKEY, RU.CHECKIN, RU.CHECKOUT, RU.ROW from ROOMUSAGE RU inner join CTE on RU.ROW = CTE.ROW + 1 where CTE.CHECKIN = RU.CHECKOUT and CTE.PERSONKEY = RU.PERSONKEY and CTE.ROOMKEY = RU.ROOMKEY) This worked OK for very small datasets (under 100 records) but it's unusable on large datasets. I'm thinking that I should somehow apply the CTE recursively on each PERSONKEY, ROOMKEY grouping on my ROOMUSAGE table but I am not sure how to do that. Any help would be much appreciated, Cheers!
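
    A hedged sketch of the same recursion, tightened so each recursive step can seek instead of scan (the index name, the final SELECT, and the MAXRECURSION hint are assumptions added here; the join uses the ROW column named in the table):

        -- Give the recursive join a covering seek target per group.
        CREATE INDEX IX_ROOMUSAGE_Grp
            ON ROOMUSAGE (PERSONKEY, ROOMKEY, ROW)
            INCLUDE (CHECKIN, CHECKOUT);

        WITH CTE AS
        (
            SELECT RU.PERSONKEY, RU.ROOMKEY, RU.CHECKIN, RU.CHECKOUT, RU.ROW
            FROM   ROOMUSAGE AS RU
            WHERE  RU.ROW = 1
            UNION ALL
            SELECT RU.PERSONKEY, RU.ROOMKEY, RU.CHECKIN, RU.CHECKOUT, RU.ROW
            FROM   ROOMUSAGE AS RU
            JOIN   CTE
              ON   RU.PERSONKEY = CTE.PERSONKEY
             AND   RU.ROOMKEY   = CTE.ROOMKEY
             AND   RU.ROW       = CTE.ROW + 1
             -- consecutive: the next (earlier) stay ends where this one begins
             AND   RU.CHECKOUT  = CTE.CHECKIN
        )
        SELECT PERSONKEY, ROOMKEY, CHECKIN, CHECKOUT, ROW
        FROM   CTE
        ORDER BY PERSONKEY, ROOMKEY, ROW
        OPTION (MAXRECURSION 0);  -- allow chains longer than the default 100

    With the group keys inside the join and an index that matches them, each recursive step touches only the candidate next row per group instead of rescanning the whole table.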

    Read the article

  • JQuery UI Tabs: Apply opacity toggle to only specific inner element?

    - by Kerri
    I am using jQuery UI tabs, and have it set to toggle the opacity with each slide change. I'm wondering if there's a way to apply the opacity toggle to only a single element within each tab, instead of the entire tab. My understanding of jQuery is pretty basic, so bear with me. So, if I have something like this: <div id="tabs"> <ul id="tabs-nav"><li></li></ul> <div id="tab-1"> <img /> <p /> </div> <div id="tab-2"> <img /> <p /> </div> ...etc </div> How could I set it so that only the <img> has an effect applied, and the rest just switches normally? Here are the basics of the call I have for UI tabs: var $tabs = $('#slides').tabs({fx: { opacity: 'toggle' } }); $(".ui-tabs-panel").each(function(i){ //stuff to create previous/next links }); $('.next-tab, .prev-tab').click(function() { $tabs.tabs('select', $(this).attr("rel")); return false; });
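
    One hedged approach: drop the global fx option and animate just the image from the tabs 'show' event (the event name and ui.panel field match the pre-1.9 jQuery UI API that the question's tabs('select', ...) call suggests; in 1.9+ the equivalents are 'activate' and ui.newPanel):

        var $tabs = $('#slides').tabs({
            show: function (event, ui) {
                // Fade only the image inside the newly shown panel;
                // the rest of the panel switches with no effect.
                $(ui.panel).find('img').hide().fadeIn(400);
            }
        });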

    Read the article

  • How can you handle cross-cutting concerns in JAX-WS without Spring or AOP? Handlers?

    - by LES2
    I do have something more specific in mind, however: Each web service method needs to be wrapped with some boilerplate code (a cross-cutting concern, yes, Spring AOP would work great here but it either doesn't work or is unapproved by the gov't architecture group). A simple service call is as follows: @WebMethod... public Foo performFoo(...) { Foo result = null; Object something = blah; try { soil(something); result = handlePerformFoo(...); } catch(Exception e) { throw translateException(e); } finally { wash(something); } return result; } protected abstract Foo handlePerformFoo(...); (I hope that's enough context). Basically, I would like a hook (that was in the same thread - like a method invocation interceptor) that could have a before() and after() that could call soil(something) and wash(something) around the method call for every freaking WebMethod. Can't use Spring AOP because my web services are not Spring managed beans :( HELP!!!!! Give advice! Please don't let me copy-paste that boilerplate 1 billion times (as I've been instructed to do). Regards, LES
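
    Since the title already hints at handlers: a JAX-WS handler is a per-message hook that runs on the same thread as the @WebMethod, which fits a before()/after() wrapper. A hedged sketch (the soil/wash bodies and how "something" is obtained are left as assumptions):

        import java.util.Set;
        import javax.xml.namespace.QName;
        import javax.xml.ws.handler.MessageContext;
        import javax.xml.ws.handler.soap.SOAPHandler;
        import javax.xml.ws.handler.soap.SOAPMessageContext;

        public class SoilWashHandler implements SOAPHandler<SOAPMessageContext> {

            @Override
            public boolean handleMessage(SOAPMessageContext ctx) {
                boolean outbound =
                    (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
                if (!outbound) {
                    soil(ctx);   // inbound request: runs before the @WebMethod
                } else {
                    wash(ctx);   // outbound response: runs after it returns
                }
                return true;     // continue handler chain processing
            }

            @Override
            public boolean handleFault(SOAPMessageContext ctx) {
                wash(ctx);       // still clean up when the method faults
                return true;
            }

            @Override public void close(MessageContext ctx) { }
            @Override public Set<QName> getHeaders() { return null; }

            private void soil(SOAPMessageContext ctx) { /* ... */ }
            private void wash(SOAPMessageContext ctx) { /* ... */ }
        }

    Wiring it up with @HandlerChain(file = "handlers.xml") on the endpoint class applies the wrapper to every web method without touching their bodies; the translateException step would still live in the service or a base class, since handlers see SOAP faults rather than Java exceptions.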

    Read the article

  • How do I load an XML document, add and remove nodes, then apply it to an ASP DataGrid control?

    - by JFOX
    I have a pretty simple operation but am struggling with how to implement it. I am loading XML from an external data source using DataSet.ReadXml(), then creating a new XmlDataDocument from that data set, then syncing the DataSet back to the XmlDataDocument like so: doc = new XmlDataDocument(dsDataSet); dsDataSet.EnforceConstraints = false; dsDataSet= doc.DataSet; Once loaded I do two things to the XmlDataDocument: (1) loop through and check whether a purely meta node, count, exists right beneath the root node, and if so remove it; (2) check whether a thumb node exists in a second-level node list, and if not, create and append it. This is all going as expected, because the result of doc.Save() looks correct. Where I'm having an issue is updating the DataSet, which is being applied as the data source for an ASP DataGrid. Once all the above XML doc manipulation is done I do this: dsDataSet.Merge(doc.DataSet); dsDataSet.AcceptChanges(); I then apply the data set to the grid control: dgList.DataSource = dsDataSet; dgList.DataBind(); But, when I do this I get this error on the site: System.Web.HttpException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'thumb'. What did I miss?
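
    One hedged explanation: XmlDataDocument only round-trips nodes that are mapped in the DataSet schema, and DataSet.Merge() will not invent a column the schema never had, so the appended thumb elements never become a bindable column. A sketch of the schema fix (the table index and column type are assumptions):

        // Make sure the schema actually has a 'thumb' column before binding.
        DataTable table = dsDataSet.Tables[0];
        if (!table.Columns.Contains("thumb"))
        {
            table.Columns.Add("thumb", typeof(string));
        }

        dgList.DataSource = dsDataSet;
        dgList.DataBind();

    Adding the column (or declaring it in the XSD used by ReadXml) before creating the XmlDataDocument lets the new nodes map through to rows the DataGrid can bind.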

    Read the article

  • GCC ICE -- alternative function syntax, variadic templates and tuples

    - by Marc H.
    (Related to C++0x, How do I expand a tuple into variadic template function arguments?.) The following code (see below) is taken from this discussion. The objective is to apply a function to a tuple. I simplified the template parameters and modified the code to allow for a return value of generic type. While the original code compiles fine, when I try to compile the modified code with GCC 4.4.3, g++ -std=c++0x main.cc -o main GCC reports an internal compiler error (ICE) with the following message: main.cc: In function ‘int main()’: main.cc:53: internal compiler error: in tsubst_copy, at cp/pt.c:10077 Please submit a full bug report, with preprocessed source if appropriate. See <file:///usr/share/doc/gcc-4.4/README.Bugs> for instructions. Question: Is the code correct? or is the ICE triggered by illegal code? // file: main.cc #include <tuple> // Recursive case template<unsigned int N> struct Apply_aux { template<typename F, typename T, typename... X> static auto apply(F f, const T& t, X... x) -> decltype(Apply_aux<N-1>::apply(f, t, std::get<N-1>(t), x...)) { return Apply_aux<N-1>::apply(f, t, std::get<N-1>(t), x...); } }; // Terminal case template<> struct Apply_aux<0> { template<typename F, typename T, typename... X> static auto apply(F f, const T&, X... x) -> decltype(f(x...)) { return f(x...); } }; // Actual apply function template<typename F, typename T> auto apply(F f, const T& t) -> decltype(Apply_aux<std::tuple_size<T>::value>::apply(f, t)) { return Apply_aux<std::tuple_size<T>::value>::apply(f, t); } // Testing #include <string> #include <iostream> int f(int p1, double p2, std::string p3) { std::cout << "int=" << p1 << ", double=" << p2 << ", string=" << p3 << std::endl; return 1; } int g(int p1, std::string p2) { std::cout << "int=" << p1 << ", string=" << p2 << std::endl; return 2; } int main() { std::tuple<int, double, char const*> tup(1, 2.0, "xxx"); std::cout << apply(f, tup) << std::endl; std::cout << apply(g, std::make_tuple(4, "yyy")) << std::endl; } Remark: If I hardcode the return type in the recursive case (see code), then everything is fine. That is, substituting this snippet for the recursive case does not trigger the ICE: // Recursive case (hardcoded return type) template<unsigned int N> struct Apply_aux { template<typename F, typename T, typename... X> static int apply(F f, const T& t, X... x) { return Apply_aux<N-1>::apply(f, t, std::get<N-1>(t), x...); } }; Alas, this is an incomplete solution to the original problem.

    Read the article

  • Is there a way to make changes to toggles in my .emacs file apply without re-starting Emacs?

    - by Vivi
    I want to be able to make the changes to my .emacs file without having to reload Emacs. I found three questions which sort of answer what I am asking (you can find them here, here and here), but the problem is that the change I have just made is to a toggle, and as the comments to two of the answers (a1, a2) to those questions explain, the solutions given there (such as M-x load-file or M-x eval-buffer) don't apply to toggles. I imagine there is a way of toggling the variable again with a command, but if there is a way to reload the whole .emacs and have all the toggles re-evaluated without having to specify them, I would prefer that. In any case, I would also appreciate it if someone told me how to toggle the value of a variable, so that if I just changed one toggle I can do it with a command rather than restart Emacs just for that (I am new to Emacs). I don't know how useful this information is, but the change I applied was the following (which I got from this answer to another question): (setq skeleton-pair t) (setq skeleton-pair-on-word t) (global-set-key (kbd "[") 'skeleton-pair-insert-maybe) (global-set-key (kbd "(") 'skeleton-pair-insert-maybe) (global-set-key (kbd "{") 'skeleton-pair-insert-maybe) (global-set-key (kbd "<") 'skeleton-pair-insert-maybe) Edit: I included the above in .emacs and reloaded Emacs, so that the changes took effect. Then I commented all of it out and tried M-x load-file. This doesn't work. The suggestion below (C-x C-e, by PP) works if I am using it to evaluate the toggle the first time, but not when I want to undo it. I would like something that would evaluate the commenting out, if such a thing exists... Thanks :)
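
    For this particular toggle, re-evaluating .emacs cannot help on its own, because commenting a (setq ...) out never sets anything back; the old value and the key bindings survive. A hedged sketch of the explicit undo (evaluate each form with C-x C-e, or the whole block with M-x eval-region):

        ;; Turn the skeleton-pair variables back off...
        (setq skeleton-pair nil)
        (setq skeleton-pair-on-word nil)
        ;; ...and since key bindings are not variables, give each key
        ;; back its default command explicitly:
        (global-set-key (kbd "[") 'self-insert-command)
        (global-set-key (kbd "(") 'self-insert-command)
        (global-set-key (kbd "{") 'self-insert-command)
        (global-set-key (kbd "<") 'self-insert-command)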

    Read the article

  • Cross-browser problem: highlighting option items as bold in the form element "select".

    - by Vivek
    Hello All, I am facing one weird cross-browser problem, i.e. I want to highlight some of the option items as bold by using a CSS class in my form element "select". This is all working fine in Firefox only, but not in other browsers like Safari, Chrome and IE. Given below is the code. <html> <head> <title>Make Heading Bold</title> <style type="text/css"> .mycss {font-weight:bold;} </style> </head> <body> <form name="myform"> <select name="myselect"> <option value="one">one</option> <option value="two" class="mycss">two</option> <option value="three" >Three </option> </select> </form> </body> </html> Please suggest the best possible solution for this. Thanks Vivek
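
    For context: WebKit browsers and IE render <select> with the native widget and ignore most per-option CSS, so there is no pure-CSS fix here. One hedged partial workaround is <optgroup>, whose label renders bold by default in most browsers; the label itself is not selectable, so it only suits heading-style emphasis:

        <select name="myselect">
          <option value="one">one</option>
          <!-- the group label shows bold in most browsers,
               but cannot be selected like a normal option -->
          <optgroup label="two">
            <option value="two">two</option>
          </optgroup>
          <option value="three">Three</option>
        </select>

    If the bold item itself must stay selectable in every browser, the usual route is a JavaScript replacement dropdown styled with plain HTML/CSS.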

    Read the article

  • Good practice to create extension methods that apply to System.Object?

    - by Christian
    Hello, I'm wondering whether I should create extension methods that apply on the object level or whether they should be located at a lower point in the class hierarchy. What I mean is something along the lines of: public static string SafeToString(this Object o) { if (o == null || o is System.DBNull) return ""; else { if (o is string) return (string)o; else return ""; } } public static int SafeToInt(this Object o) { if (o == null || o is System.DBNull) return 0; else { if (o.IsNumeric()) return Convert.ToInt32(o); else return 0; } } //same for double.. etc I wrote those methods since I have to deal a lot with database data (From the OleDbDataReader) that can be null (shouldn't, though) since the underlying database is unfortunately very liberal with columns that may be null. And to make my life a little easier, I came up with those extension methods. What I'd like to know is whether this is good style, acceptable style or bad style. I kinda have my worries about it since it kinda "pollutes" the Object-class. Thank you in advance & Best Regards :) Christian P.S. I didn't tag it as "subjective" intentionally.

    Read the article

  • Cross-thread operation not valid: accessed from a thread other than the thread it was created on.

    - by user307524
    Hi, I want to remove checked items from a CheckedListBox (WinForms control) in a class-file method which I am calling asynchronously using a delegate, but it is showing me this error message: Cross-thread operation not valid: Control 'checkedListBox1' accessed from a thread other than the thread it was created on. I have tried InvokeRequired but again got the same error. Sample code is below: private void button1_Click(object sender, EventArgs e) { // Create an instance of the test class. Class1 ad = new Class1(); // Create the delegate. AsyncMethodCaller1 caller = new AsyncMethodCaller1(ad.TestMethod1); //callback delegate IAsyncResult result = caller.BeginInvoke(checkedListBox1, new AsyncCallback(CallbackMethod)," "); } In the class file, the code for TestMethod1 is: private delegate void dlgInvoke(CheckedListBox c, Int32 str); private void Invoke(CheckedListBox c, Int32 str) { if (c.InvokeRequired) { c.Invoke(new dlgInvoke(Invoke), c, str); c.Items.RemoveAt(str); } else { c.Text = ""; } } // The method to be executed asynchronously. public string TestMethod1(CheckedListBox chklist) { for (int i = 0; i < 10; i++) { string chkValue = chklist.CheckedItems[i].ToString(); //do some other database operation based on checked items. Int32 index = chklist.FindString(chkValue); Invoke(chklist, index); } return ""; }
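
    The quoted Invoke helper does its work in the wrong branches: after marshalling, it calls RemoveAt on the calling thread anyway, and the UI-thread branch only clears Text. A hedged corrected sketch (names are illustrative):

        // Marshal once, and touch the control only in the UI-thread branch.
        private void RemoveCheckedItem(CheckedListBox list, int index)
        {
            if (list.InvokeRequired)
            {
                // Re-enters this method on the thread that owns the control.
                list.Invoke(new Action<CheckedListBox, int>(RemoveCheckedItem),
                            list, index);
            }
            else
            {
                list.Items.RemoveAt(index);
            }
        }

    Note also that CheckedItems shrinks as items are removed, so the fixed-count for loop will skip items or throw; collecting the checked indices first and removing them in descending order avoids that.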

    Read the article

  • How can we apply client-side validation on a FileUpload control in ASP.NET to check whether the filename contains special characters?

    - by subodh
    I am working on the ASP.NET 3.5 platform. I have used a FileUpload control and an ASP button to upload a file. Whenever I try to upload a file whose name contains special characters, like (file#&%.txt), it crashes and gives the message: Server Error in 'myapplication' Application. A potentially dangerous Request.Files value was detected from the client (filename="...\New Text &#.txt"). Description: Request Validation has detected a potentially dangerous client input value, and processing of the request has been aborted. This value may indicate an attempt to compromise the security of your application, such as a cross-site scripting attack. You can disable request validation by setting validateRequest=false in the Page directive or in the configuration section. However, it is strongly recommended that your application explicitly check all inputs in this case. Exception Details: System.Web.HttpRequestValidationException: A potentially dangerous Request.Files value was detected from the client (filename="...\New Text &#.txt"). Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. How can I prevent this crash using JavaScript on the client side?
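
    A hedged client-side sketch: cancel the postback when the chosen name contains characters that request validation rejects (the control ID and the exact character set are assumptions; a server-side check is still needed, since JavaScript can be bypassed):

        function validateFileName() {
            // Use the ClientID of the FileUpload control in real markup.
            var input = document.getElementById('FileUpload1');
            var name = input.value.split(/[\\\/]/).pop(); // strip any path prefix
            if (/[&#%<>]/.test(name)) {
                alert('File names may not contain & # % < or >.');
                return false; // cancels the submit
            }
            return true;
        }
        // <asp:Button ... OnClientClick="return validateFileName();" />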

    Read the article

  • What are preferred strategies for cross-browser, multiple-styled tables in CSS?

    - by jitendra
    What are preferred strategies for cross-browser, multiple-styled tables in CSS? In the default CSS, what should I predefine for <table>, td, th, thead, tbody, tfoot? I have to work on a project with many tables in different color schemes and different kinds of alignment: in some tables I will need to align cell data horizontally to the right, in others to the left or center; the same goes for vertical alignment (top, bottom and middle). Some tables will have a thin border on rows, some thick (same with column borders). Sometimes I want to give a different background color to a particular row or column, or to multiple rows or columns. So my question is: what code should I keep in the default CSS for all tables, and how do I handle tables with different styles using IDs and classes across multiple pages? I want to do every presentational thing with CSS. How do I make IDs and classes for everything using semantic naming? Which table-related tags can be useful? How do I control a whole table's styling from one CSS class?
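
    A hedged sketch of the usual pattern: one neutral base for all tables, then small modifier classes for schemes, alignment, and emphasis (all class names here are illustrative):

        /* Base: applies to every table */
        table { border-collapse: collapse; width: 100%; }
        th, td { padding: 4px 8px; text-align: left; vertical-align: middle; }
        thead th { font-weight: bold; }

        /* One class per color scheme, set on the table element */
        table.report { border: 1px solid #ccc; }
        table.report thead th { background: #eee; }

        /* Alignment utilities, set per cell or via col/colgroup */
        .t-right  { text-align: right; }
        .t-center { text-align: center; }
        .v-top    { vertical-align: top; }

        /* Border and background emphasis */
        tr.thick td { border-bottom: 3px solid #999; }
        td.hl, th.hl { background: #ffd; }

    Reserve IDs for one-off tables and use classes for anything that repeats; a table can then combine them, e.g. <table class="report"> with <td class="t-right v-top">.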

    Read the article

  • In Mercurial, can I apply changes from one file to another file in the same branch?

    - by Stephen
    In the good old days of Subversion, I would sometimes derive a new file from an existing one using svn copy. Then if something changed in sections they had in common, I could still use svn merge to update the derived version. To use the example from hginit.com, say the "guac" recipe already exists, and I want to create a "superguac" that includes instructions on how to serve guacamole to 1000 raving soccer fans. Using the process I just described, I could: svn cp guac superguac svn ci -m "Created superguac by copying guac" (edit superguac) svn ci -m "Added instructions for serving 1000 raving soccer fans to superguac" (edit guac) svn ci -m "Fixed a typo in guac" svn merge -r3:4 guac superguac and thus the typo fix would be applied to superguac. Mercurial provides an hg copy command that marks a file as a copy of the original, but I'm not sure the repository structure supports a similar workflow. Here's the same example, and I carefully only edit a single file in the commit I want to use in the merge: hg cp guac superguac hg ci -m "Created superguac by copying guac" (edit superguac) hg ci -m "Added instructions for serving 1000 raving soccer fans to superguac" (edit guac) hg ci -m "Fixed a typo in guac" I now want to apply the change in guac to superguac. Is that possible? If so, what's the right command? Is there a different workflow in Mercurial that achieves the same results (limited to a single branch)?

    Read the article

  • sIFR, JavaScript & PHP - How to apply sIFR to dynamically loaded content.

    - by user311039
    Well here's the problem: I have a PHP index page which uses show/hide layers JavaScript. I am using the menu click function to show and hide content relevant to each menu. On click, all divs are hidden except the content for that menu item, which fades in. The content relating to each menu item is displayed within separate divs. The sIFR replacement is applied to all the text within all the divs. See: http://jobe-group.com/jobeco/uk/2010live/dynamic/content/index.php# The trouble is that sIFR only appears to be applied to the div displayed on load, when the page is first loaded. When this is hidden and the others shown through the "show" function, they load in classic CSS fonts without the sIFR applied. Is this unavoidable with the sIFR setup, or am I not calling the divs properly? I have set the sIFR to apply to the selector and indeed it works fine on the div displayed on load. It doesn't work for the content within the other divs. In theory I would think it's possible to load the sIFR on all divs on page load, even if those divs are presently visibility:hidden. What's the verdict on this? Hope someone can help. Cheers, John

    Read the article

  • jQuery: How to apply CSS to a .wrap() element?

    - by Eamonn
    My code thus far: $(".lorem img:not(.full-img)").each(function() { $(this).addClass("inline-img"); var imgWidth = $(this).prop('width'); var imgHeight = $(this).prop('height'); var imgFloat = $(this).prop('float'); $(this).wrap('<div class="inline-img-wrap" />'); if ($(this).prop("title")!='') { $('<p class="inline-img-caption">').text($(this).prop('title')).insertAfter($(this)); } }); I am searching for images within a body of text, appending their title values as captions underneath, and wrapping both img and p in a div.inline-img-wrap. That works fine. What I need help with is applying the styling of the img to the new wrapping div instead. You can see I have created vars for just this, but cannot get them applied. $(this) will always apply to the image, and <div class="inline-img-wrap" style="width:'+imgWidth+'"> doesn't work. All advice appreciated!
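
    A hedged sketch: .wrap() returns the original element, so the new div is reachable as its .parent(), and the styles can be set there with .css(). Note that float is a CSS property rather than a DOM property, so css('float') is used where the original tried prop('float'):

        $(".lorem img:not(.full-img)").each(function () {
            var $img = $(this).addClass("inline-img");

            // wrap() keeps $img as the subject; parent() is the new div.
            var $wrap = $img.wrap('<div class="inline-img-wrap" />').parent();
            $wrap.css({
                width: $img.prop('width'),
                "float": $img.css('float')
            });

            if ($img.prop('title') !== '') {
                $('<p class="inline-img-caption">')
                    .text($img.prop('title'))
                    .insertAfter($img);
            }
        });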

    Read the article

  • How can you exclude a large number of records in a cross db query using LINQ2SQL?

    - by tap
    So here is my situation: I have a vendor-supplied DB we cannot modify and a custom db that imports data from the vendor app and acts on it. Once records are imported from the vendor app, they cannot appear on the list of records to be imported. Also we only want to display the 250 most recent records that have not been imported. What I originally started with was to select the list of IDs that have been imported from the custom db, and then query the vendor db, using the list of IDs in a .Where(x => !idList.Contains(x.Id)) clause on the remote query. This worked up until we broke 2100 records imported into the custom db, as 2100 is the limit on the number of parameters that can be passed into SQL. After finding out this was the actual problem and not the 'invalid buffer'/'severe error' ADO.Net reported, my solution was to remove the first 2000 IDs in the remote query, and then remove the remaining records in the local query. Having to pull back a large number of irrelevant records, just to exclude them, so I can get the correct 250 records seems very inelegant. Is there a better way to do this, short of doing a cross-db stored procedure? Thanks in advance.
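
    For reference, a hedged sketch of the batched workaround described above (entity and property names are assumptions; the first batch is excluded in SQL, the rest in memory):

        const int BatchSize = 2000;

        List<int> importedIds = customDb.ImportedRecords
                                        .Select(r => r.VendorId)
                                        .ToList();
        var firstBatch = importedIds.Take(BatchSize).ToList();          // sent as SQL parameters
        var remainder  = new HashSet<int>(importedIds.Skip(BatchSize)); // filtered locally

        var recent250 = vendorDb.Records
            .Where(r => !firstBatch.Contains(r.Id))   // translated to NOT IN (...)
            .OrderByDescending(r => r.CreatedDate)
            .Take(250 + remainder.Count)              // headroom for local exclusions
            .AsEnumerable()                           // switch to LINQ to Objects
            .Where(r => !remainder.Contains(r.Id))
            .Take(250)
            .ToList();

    It stays under the 2100-parameter cap, but still drags extra rows across; tracking the import state in one database (or a synced staging table) remains the cleaner fix.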

    Read the article

  • Penetration testing with Nikto, unknown results found

    - by heldrida
    I've scanned my new webserver and I'm surprised to find that in the results there are programs that I never installed. This is a fresh new install of Ubuntu 12.04, and I just installed PHP 5.3, MySQL, fail2ban, Apache2, Git, and a few other things. Not sure if related, but I've got WordPress installed; this doesn't have anything to do with myphpnuke, does it? I'd like to understand why I am getting these results: + OSVDB-27071: /phpimageview.php?pic=javascript:alert(8754): PHP Image View 1.0 is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html. + OSVDB-3931: /myphpnuke/links.php?op=search&query=[script]alert('Vulnerable);[/script]?query=: myphpnuke is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html. + OSVDB-3931: /myphpnuke/links.php?op=MostPopular&ratenum=[script]alert(document.cookie);[/script]&ratetype=percent: myphpnuke is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html. + /modules.php?op=modload&name=FAQ&file=index&myfaq=yes&id_cat=1&categories=%3Cimg%20src=javascript:alert(9456);%3E&parent_id=0: Post Nuke 0.7.2.3-Phoenix is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html. + /modules.php?letter=%22%3E%3Cimg%20src=javascript:alert(document.cookie);%3E&op=modload&name=Members_List&file=index: Post Nuke 0.7.2.3-Phoenix is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html. + OSVDB-4598: /members.asp?SF=%22;}alert('Vulnerable');function%20x(){v%20=%22: Web Wiz Forums ver. 7.01 and below is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html. + OSVDB-2946: /forum_members.asp?find=%22;}alert(9823);function%20x(){v%20=%22: Web Wiz Forums ver. 7.01 and below is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html. Thanks for looking!

    Read the article

  • Cross-thread operation not valid: Control accessed from a thread other than the thread it was created on

    - by SilverHorse
    I have a scenario. (Windows Forms, C#, .NET) There is a main form which hosts some user control. The user control does some heavy data operation, such that if I directly call the Usercontrol_Load method the UI becomes nonresponsive for the duration of the load method's execution. To overcome this I load data on a different thread (trying to change existing code as little as I can). I used a background worker thread which will be loading the data and when done will notify the application that it has done its work. Now came a real problem. All the UI (main form and its child usercontrols) was created on the primary main thread. In the LOAD method of the usercontrol I'm fetching data based on the values of some control (like a textbox) on the usercontrol. The pseudocode would look like this: //CODE 1 UserContrl1_LOadDataMethod() { if(textbox1.text=="MyName") <<======this gives exception { //Load data corresponding to "MyName". //Populate a global variable List<string> which will be bound to grid at some later stage. } } The Exception it gave was Cross-thread operation not valid: Control accessed from a thread other than the thread it was created on. To know more about this I did some googling and a suggestion came up like using the following code //CODE 2 UserContrl1_LOadDataMethod() { if(InvokeRequired) // Line #1 { this.Invoke(new MethodInvoker(UserContrl1_LOadDataMethod)); return; } if(textbox1.text=="MyName") //<<======Now it wont give exception** { //Load data corresponding to "MyName" //Populate a global variable List<string> which will be bound to grid at some later stage } } BUT BUT BUT... it seems I'm back to square one. The Application again becomes nonresponsive. It seems to be due to the execution of the line #1 if condition. The loading task is again done by the parent thread and not the thread that I spawned. I don't know whether I perceived this right or wrong. I'm new to threading. How do I resolve this, and also what is the effect of executing the line #1 if block? The situation is this: I want to load data into a global variable based on the value of a control. I don't want to change the value of a control from the child thread. I'm not going to do it ever from a child thread. So only accessing the value so that the corresponding data can be fetched from the database.
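
    A hedged sketch of the usual resolution: read the control on the UI thread, hand the plain value to the worker, and touch controls again only in the completion callback (control names follow the question where possible; LoadDataFor and grid are hypothetical):

        private void LoadButton_Click(object sender, EventArgs e)
        {
            string name = textbox1.Text;          // UI thread: safe to read here

            var worker = new BackgroundWorker();
            worker.DoWork += (s, args) =>
            {
                // Background thread: only captured data, no control access.
                args.Result = LoadDataFor(name);  // hypothetical DB call
            };
            worker.RunWorkerCompleted += (s, args) =>
            {
                // Raised back on the UI thread: safe to bind the result.
                grid.DataSource = (List<string>)args.Result;
            };
            worker.RunWorkerAsync();
        }

    The quoted CODE 2 hangs the UI again because the Invoke at line #1 marshals the whole load method back onto the UI thread, so the heavy work runs there after all; Invoke should wrap only the control reads and writes, not the data fetch.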

    Read the article

  • How can I get penetration depth from Minkowski Portal Refinement / Xenocollide?

    - by Raven Dreamer
    I recently got an implementation of Minkowski Portal Refinement (MPR) successfully detecting collision. Even better, my implementation returns a good estimate (local minimum) direction for the minimum penetration depth. So I took a stab at adjusting the algorithm to return the penetration depth in an arbitrary direction, and was modestly successful - my altered method works splendidly for face-edge collision resolution! What it doesn't currently do, is correctly provide the minimum penetration depth for edge-edge scenarios, such as the case on the right: What I perceive to be happening, is that my current method returns the minimum penetration depth to the nearest vertex - which works fine when the collision is actually occurring on the plane of that vertex, but not when the collision happens along an edge. Is there a way I can alter my method to return the penetration depth to the point of collision, rather than the nearest vertex? Here's the method that's supposed to return the minimum penetration distance along a specific direction: public static Vector3 CalcMinDistance(List<Vector3> shape1, List<Vector3> shape2, Vector3 dir) { //holding variables Vector3 n = Vector3.zero; Vector3 swap = Vector3.zero; // v0 = center of Minkowski sum v0 = Vector3.zero; // Avoid case where centers overlap -- any direction is fine in this case //if (v0 == Vector3.zero) return Vector3.zero; //always pass in a valid direction. // v1 = support in direction of origin n = -dir; //get the differnce of the minkowski sum Vector3 v11 = GetSupport(shape1, -n); Vector3 v12 = GetSupport(shape2, n); v1 = v12 - v11; //if the support point is not in the direction of the origin if (v1.Dot(n) <= 0) { //Debug.Log("Could find no points this direction"); return Vector3.zero; } // v2 - support perpendicular to v1,v0 n = v1.Cross(v0); if (n == Vector3.zero) { //v1 and v0 are parallel, which means //the direction leads directly to an endpoint n = v1 - v0; //shortest distance is just n //Debug.Log("2 point return"); return n; } //get the new support point Vector3 v21 = GetSupport(shape1, -n); Vector3 v22 = GetSupport(shape2, n); v2 = v22 - v21; if (v2.Dot(n) <= 0) { //can't reach the origin in this direction, ergo, no collision //Debug.Log("Could not reach edge?"); return Vector2.zero; } // Determine whether origin is on + or - side of plane (v1,v0,v2) //tests linesegments v0v1 and v0v2 n = (v1 - v0).Cross(v2 - v0); float dist = n.Dot(v0); // If the origin is on the - side of the plane, reverse the direction of the plane if (dist > 0) { //swap the winding order of v1 and v2 swap = v1; v1 = v2; v2 = swap; //swap the winding order of v11 and v12 swap = v12; v12 = v11; v11 = swap; //swap the winding order of v11 and v12 swap = v22; v22 = v21; v21 = swap; //and swap the plane normal n = -n; } /// // Phase One: Identify a portal while (true) { // Obtain the support point in a direction perpendicular to the existing plane // Note: This point is guaranteed to lie off the plane Vector3 v31 = GetSupport(shape1, -n); Vector3 v32 = GetSupport(shape2, n); v3 = v32 - v31; if (v3.Dot(n) <= 0) { //can't enclose the origin within our tetrahedron //Debug.Log("Could not reach edge after portal?"); return Vector3.zero; } // If origin is outside (v1,v0,v3), then eliminate v2 and loop if (v1.Cross(v3).Dot(v0) < 0) { //failed to enclose the origin, adjust points; v2 = v3; v21 = v31; v22 = v32; n = (v1 - v0).Cross(v3 - v0); continue; } // If origin is outside (v3,v0,v2), then eliminate v1 and loop if (v3.Cross(v2).Dot(v0) < 0) { //failed to enclose 
the origin, adjust points; v1 = v3; v11 = v31; v12 = v32; n = (v3 - v0).Cross(v2 - v0); continue; } bool hit = false; /// // Phase Two: Refine the portal int phase2 = 0; // We are now inside of a wedge... while (phase2 < 20) { phase2++; // Compute normal of the wedge face n = (v2 - v1).Cross(v3 - v1); n.Normalize(); // Compute distance from origin to wedge face float d = n.Dot(v1); // If the origin is inside the wedge, we have a hit if (d > 0 ) { //Debug.Log("Do plane test here"); float T = n.Dot(v2) / n.Dot(dir); Vector3 pointInPlane = (dir * T); return pointInPlane; } // Find the support point in the direction of the wedge face Vector3 v41 = GetSupport(shape1, -n); Vector3 v42 = GetSupport(shape2, n); v4 = v42 - v41; float delta = (v4 - v3).Dot(n); float separation = -(v4.Dot(n)); if (delta <= kCollideEpsilon || separation >= 0) { //Debug.Log("Non-convergance detected"); //Debug.Log("Do plane test here"); return Vector3.zero; } // Compute the tetrahedron dividing face (v4,v0,v1) float d1 = v4.Cross(v1).Dot(v0); // Compute the tetrahedron dividing face (v4,v0,v2) float d2 = v4.Cross(v2).Dot(v0); // Compute the tetrahedron dividing face (v4,v0,v3) float d3 = v4.Cross(v3).Dot(v0); if (d1 < 0) { if (d2 < 0) { // Inside d1 & inside d2 ==> eliminate v1 v1 = v4; v11 = v41; v12 = v42; } else { // Inside d1 & outside d2 ==> eliminate v3 v3 = v4; v31 = v41; v32 = v42; } } else { if (d3 < 0) { // Outside d1 & inside d3 ==> eliminate v2 v2 = v4; v21 = v41; v22 = v42; } else { // Outside d1 & outside d3 ==> eliminate v1 v1 = v4; v11 = v41; v12 = v42; } } } return Vector3.zero; } }

    Read the article

  • Exam 70-480 Study Material: Programming in HTML5 with JavaScript and CSS3

    - by Stacy Vicknair
    Here’s a list of sources of information for the different elements that comprise the 70-480 exam: General Resources http://www.w3schools.com (As pointed out in David Pallmann’s blog some of this content is unverified, but it is a decent source of information. For more about when it isn’t decent, see http://www.w3fools.com ) http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/ (A guy who did a lot of what I did already, sadly I found this halfway through finishing my resources list. This list is expertly put together so I would recommend checking it out.) http://davidpallmann.blogspot.com/2012/08/microsoft-certification-exam-70-480.html http://pluralsight.com/training/Courses (Yes, this isn’t free, but if you look at the course listing there is an entire section on HTML5, CSS3 and Javascript. You can always try the trial!)   Some of the links I put below will overlap with the other resources above, but I tried to find explanations that looked beneficial to me on links outside those already mentioned.   Test Breakdown Implement and Manipulate Document Structures and Objects (24%) Create the document structure. o This objective may include but is not limited to: structure the UI by using semantic markup, including for search engines and screen readers (Section, Article, Nav, Header, Footer, and Aside); create a layout container in HTML http://www.w3schools.com/html/html5_new_elements.asp   Write code that interacts with UI controls. o This objective may include but is not limited to: programmatically add and modify HTML elements; implement media controls; implement HTML5 canvas and SVG graphics http://www.w3schools.com/html/html5_canvas.asp http://www.w3schools.com/html/html5_svg.asp   Apply styling to HTML elements programmatically. o This objective may include but is not limited to: change the location of an element; apply a transform; show and hide elements   Implement HTML5 APIs. o This objective may include but is not limited to: implement storage APIs, AppCache API, and Geolocation API http://www.w3schools.com/html/html5_geolocation.asp http://www.w3schools.com/html/html5_webstorage.asp http://www.w3schools.com/html/html5_app_cache.asp   Establish the scope of objects and variables. o This objective may include but is not limited to: define the lifetime of variables; keep objects out of the global namespace; use the “this” keyword to reference an object that fired an event; scope variables locally and globally http://robertnyman.com/2008/10/09/explaining-javascript-scope-and-closures/ http://www.quirksmode.org/js/this.html   Create and implement objects and methods. o This objective may include but is not limited to: implement native objects; create custom objects and custom properties for native objects using prototypes and functions; inherit from an object; implement native methods and create custom methods http://www.javascriptkit.com/javatutors/object.shtml http://www.crockford.com/javascript/inheritance.html http://stackoverflow.com/questions/1635116/javascript-class-method-vs-class-prototype-method http://www.javascriptkit.com/javatutors/proto.shtml     Implement Program Flow (25%) Implement program flow. 
o This objective may include but is not limited to: iterate across collections and array items; manage program decisions by using switch statements, if/then, and operators; evaluate expressions http://www.javascriptkit.com/jsref/looping.shtml http://www.javascriptkit.com/javatutors/varshort.shtml http://www.javascriptkit.com/javatutors/switch.shtml   Raise and handle an event. o This objective may include but is not limited to: handle common events exposed by DOM (OnBlur, OnFocus, OnClick); declare and handle bubbled events; handle an event by using an anonymous function http://dev.w3.org/2006/webapi/DOM-Level-3-Events/html/DOM3-Events.html http://javascript.info/tutorial/bubbling-and-capturing   Implement exception handling. o This objective may include but is not limited to: set and respond to error codes; throw an exception; request for null checks; implement try-catch-finally blocks http://www.javascriptkit.com/javatutors/trycatch.shtml   Implement a callback. o This objective may include but is not limited to: receive messages from the HTML5 WebSocket API; use jQuery to make an AJAX call; wire up an event; implement a callback by using anonymous functions; handle the “this” pointer http://www.w3.org/TR/2011/WD-websockets-20110419/ http://www.html5rocks.com/en/tutorials/websockets/basics/ http://api.jquery.com/jQuery.ajax/   Create a web worker process. o This objective may include but is not limited to: start and stop a web worker; pass data to a web worker; configure timeouts and intervals on the web worker; register an event listener for the web worker; limitations of a web worker https://developer.mozilla.org/en-US/docs/DOM/Using_web_workers http://www.html5rocks.com/en/tutorials/workers/basics/   Access and Secure Data (26%) Validate user input by using HTML5 elements. o This objective may include but is not limited to: choose the appropriate controls based on requirements; implement HTML input types and content attributes (for example, required) to collect user input http://diveintohtml5.info/forms.html   Validate user input by using JavaScript. o This objective may include but is not limited to: evaluate a regular expression to validate the input format; validate that you are getting the right kind of data type by using built-in functions; prevent code injection http://www.regular-expressions.info/javascript.html http://msdn.microsoft.com/en-us/library/66ztdbe6(v=vs.94).aspx https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Operators/typeof http://blog.stackoverflow.com/2008/06/safe-html-and-xss/ http://stackoverflow.com/questions/942011/how-to-prevent-javascript-injection-attacks-within-user-generated-html   Consume data. o This objective may include but is not limited to: consume JSON and XML data; retrieve data by using web services; load data or get data from other sources by using XMLHTTPRequest http://www.erichynds.com/jquery/working-with-xml-jquery-and-javascript/ http://www.webdevstuff.com/86/javascript-xmlhttprequest-object.html http://www.json.org/ http://stackoverflow.com/questions/4935632/how-to-parse-json-in-javascript   Serialize, deserialize, and transmit data. 
o This objective may include but is not limited to: binary data; text data (JSON, XML); implement the jQuery serialize method; Form.Submit; parse data; send data by using XMLHTTPRequest; sanitize input by using URI/form encoding http://api.jquery.com/serialize/ http://www.javascript-coder.com/javascript-form/javascript-form-submit.phtml http://stackoverflow.com/questions/327685/is-there-a-way-to-read-binary-data-into-javascript https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/encodeURI     Use CSS3 in Applications (25%) Style HTML text properties. o This objective may include but is not limited to: apply styles to text appearance (color, bold, italics); apply styles to text font (WOFF and @font-face, size); apply styles to text alignment, spacing, and indentation; apply styles to text hyphenation; apply styles for a text drop shadow http://www.w3schools.com/css/css_text.asp http://www.w3schools.com/css/css_font.asp http://nicewebtype.com/notes/2009/10/30/how-to-use-css-font-face/ http://webdesign.about.com/od/beginningcss/p/aacss5text.htm http://www.w3.org/TR/css3-text/ http://www.css3.info/preview/box-shadow/   Style HTML box properties. o This objective may include but is not limited to: apply styles to alter appearance attributes (size, border and rounding border corners, outline, padding, margin); apply styles to alter graphic effects (transparency, opacity, background image, gradients, shadow, clipping); apply styles to establish and change an element’s position (static, relative, absolute, fixed) http://net.tutsplus.com/tutorials/html-css-techniques/10-css3-properties-you-need-to-be-familiar-with/ http://www.w3schools.com/css/css_image_transparency.asp http://www.w3schools.com/cssref/pr_background-image.asp http://ie.microsoft.com/testdrive/graphics/cssgradientbackgroundmaker/default.html http://www.w3.org/TR/CSS21/visufx.html http://www.barelyfitz.com/screencast/html-training/css/positioning/ http://davidwalsh.name/css-fixed-position   Create a flexible content layout. o This objective may include but is not limited to: implement a layout using a flexible box model; implement a layout using multi-column; implement a layout using position floating and exclusions; implement a layout using grid alignment; implement a layout using regions, grouping, and nesting http://www.html5rocks.com/en/tutorials/flexbox/quick/ http://www.css3.info/preview/multi-column-layout/ http://msdn.microsoft.com/en-us/library/ie/hh673558(v=vs.85).aspx http://dev.w3.org/csswg/css3-grid-layout/ http://dev.w3.org/csswg/css3-regions/   Create an animated and adaptive UI. o This objective may include but is not limited to: animate objects by applying CSS transitions; apply 3-D and 2-D transformations; adjust UI based on media queries (device adaptations for output formats, displays, and representations); hide or disable controls http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/   Find elements by using CSS selectors and jQuery. o This objective may include but is not limited to: choose the correct selector to reference an element; define element, style, and attribute selectors; find elements by using pseudo-elements and pseudo-classes (for example, :before, :first-line, :first-letter, :target, :lang, :checked, :first-child) http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/   Structure a CSS file by using CSS selectors. 
o This objective may include but is not limited to: reference elements correctly; implement inheritance; override inheritance by using !important; style an element based on pseudo-elements and pseudo-classes (for example, :before, :first-line, :first-letter, :target, :lang, :checked, :first-child) http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/   Technorati Tags: 70-480,CSS3,HTML5,HTML,CSS,JavaScript,Certification

    Read the article

  • SOLVED Install MythTV & 11.10 on Lenovo S12 (Intel Atom) with wireless

    - by keepitsimpleengineer
    This is how I installed Ubuntu 11.10 and the MythTV client on my Lenovo S12 (Intel Atom) laptop and use it over WiFi (see additional notes at end). I did this because the upgrade from 11.04 bricked the laptop. Note that the partitions on the Lenovo standard disk were already in place for this installation. Also note that my LAN is set up for fixed IP addresses. Downloaded and burned the 11.10 x86 Desktop Ubuntu CD. Connected the power supply cord, LAN wire and the external DVD USB drive. Ran Windows XP and made sure performance level "Performance" was set and "Wireless" was enabled. Booted the S12 from CD. Disabled Networking from the icon on the upper left panel. Edited Connections… "Wired connection 1": set the IP address, accepted the default netmask and set the gateway. Also set the DNS server. Good idea to check "Connection Information" here to verify everything's O.K. Selected Install Ubuntu from the initial "Install" window. Verified the three items were checked (required disk space available, plugged into a power source, & connected to the Internet). Selected Download updates while installing and third party software. Hit Continue… At wireless, selected don't want to connect…WiFi…now. Continue… At Installation type, selected Something else. Continue… At the partition table, selected the ext4 Linux partition, set the mount point as "/", and marked it for formatting. Here I selected the main disk (/sda) for installing the boot manager. Continue… Selected or verified my Time zone. Continue… Selected my keyboard layout. Continue… Filled in the who-are-you fields. Make sure password is required to sign in is checked. Continue… Chose a picture. Continue… I selected import no accounts. Continue… Wait as the Install creeps along. If your screen goes blank, tap the space bar; apparently the screen saver/power plan does this. There are several progress bars. The longest was "Installing system", and it was the next to the last one. The Installation Complete window appears, Restart Now… Wait as it stops. The screen blanks, then the message "…remove…media…close tray…press enter". I just unplugged the USB DVD and hit enter… It was disheartening, but the screen turned Ubuntu purple-beige and nothing happened, so I held down the power key until it shut down, then pressed it again and the Grub boot screen appeared. Select Ubuntu… The screen went blank with the little flashing underscore cursor on it and the disk light would occasionally flash. I hit the enter key and eventually Ubuntu started. After a somewhat long time the Unity desktop appeared. 11.10, unlike earlier versions, retains the connection information. Check this by checking the network icon on the upper left applet panel. Here the touch-pad/mouse quit working and I had to reboot. It takes an extremely long time to boot, sometimes requiring several power off/power on (cold boot) cycles. You can try to get the default network manager to work, but it might not; it didn't on mine for WiFi. Thanks to Chris at URL, here's what to do… Disconnect your wired Internet connection. Input your wireless information into network manager. Open a terminal (Unity dash, top of the icon totem; open, and make sure the ruler&pen icon on the bottom is selected, 2nd from left) and type in "terminal". Might be a good idea to drag and drop the terminal icon to the launcher; it's easy to get rid of later. Click to open a terminal, and type in: sudo rmmod acer_wmi && echo "blacklist acer_wmi" >> /etc/modprobe.d/blacklist.conf and hit enter. Type in your password as asked.
    If you have correctly entered your WiFi information and you are near your AP, you should connect immediately. If not, see the URL above; you might need to replace "network manager" with "wicd" (I did with 11.04). Update the new 11.10: in the upper left panel applet, the gear icon is a menu with a line about updating. It's the new way to invoke Update Manager. Your Lenovo S12 (Intel Atom) should now run the new Unity Ubuntu. Point your elbow at the ceiling and pat yourself on the back. Installing Mythbuntu Client 24.1: Open mythbuntu.org/repos (I urge you not to directly use Ubuntu Software Center for this). Install Mythbuntu Repos. Save the file (in ~/Downloads, the default). Run the file; it will update your repositories so that you will get the proper installation sources, and it will start Ubuntu Software Center to do this. Click Install… You will need your password. The Debconf window will open; select by making sure the check mark is in the little box "Would you like to activate…". Forward… Which version? At the time of writing the current "Stable" version was 24.1, select 0.24.x… Forward… Read the message, then forward… Delete the downloaded file. Install synaptic (Unity dash, top of the icon totem; open, and make sure the ruler&pen icon on the bottom is selected, 2nd from left) and type in "synaptic". Click on the synaptic icon. Ubuntu Software Center will open and allow you to install the synaptic package manager. Open Synaptic (Unity dash, as before) and type in "Synaptic". Might be a good idea to drag and drop the icon to the launcher; it's easy to get rid of later. Run synaptic, read the intro, and close the intro window. Type mythbuntu-control-centre into the Quick filter text box, and then select it, "Mark for installation", by clicking on the box next to its name. Marvel at the additional items to be installed, then select "Mark"… At the top of the synaptic window click on the "Apply" button. Marvel at the amount of stuff to be installed, then click on "Apply". When finished, close the finished window and synaptic. Open mythbuntu-control-centre (Unity dash, as before) and type in "mythbuntu". Might be a good idea to drag and drop the mythbuntu-control-centre icon to the launcher; it's easy to get rid of later. You can now configure and install the frontend. Go down the icon totem on the right side of the window and click as needed… System roles: No Backend, Desktop Frontend, and Ubuntu Desktop. Apply… & Apply changes… & Password… MySQL Configuration (from the backend: Setup, General, Alt-N(ext), Alt-N(ext), Setting Access Setup PIN code: ~~~~). Input the security key and click "Test Connection"; if it succeeds, then Apply… & Apply… {note: for some inexplicable reason, control centre hung on this, but when I restarted it, it was set properly} Graphics drivers: when I did this, only the Broadcom wireless driver showed up. I closed without doing anything. Services: I enabled SSH & Samba. Apply… & Apply… Repositories: asked & answered. MythExport: pass; I believe it requires the backend on the same system. Proprietary Codec Support: check to enable, Apply… & Apply… System Updates: no action necessary; it will be a part of the Ubuntu update mechanism. Themes and Artwork: for themes, I selected Enable/Update all. Apply… & Apply… Infrared & Startup behavior and Plugins: defer until you know more. Close the software centre.
    Open mythTV (Unity dash, top of the icon totem; open, and make sure the ruler&pen icon on the bottom is selected, 2nd from left) and type in "mythTV". Might be a good idea to drag and drop the mythTV icon to the launcher; it's easy to get rid of later. Incorrect Group Membership: fix this by clicking "Yes"… Log out/end: do this by clicking "Yes"… For my Lenovo S12, I had to manually restart Ubuntu - and still with the very long restart…/no start/cold boot/reboot/pressing the shift key required. Open mythTV (as before) and type in "mythTV". It will open with Select country & language. Do so, then on the message hit "No", hit "Ok" and arrive at the Database Configuration 1/2 screen. You will need your backend password (from the backend: Setup, General, Database Configuration 1/2, Password). Enter this and hit Alt-N to go to the next page. Select "Use custom id…", then enter a custom ID; I use the machine's name. Hit finish, and MythTV should start up with all default settings. For the Lenovo S12, the first thing you want to do is to set Playback profiles to "Normal". From Setup, TV Settings, Playback, Alt-N(ext), Alt-N(ext), Playback Profiles (3/8): change the Current Video Playback Profile to "Normal". You can fiddle with this setting later. For the Lenovo S12, the second thing is to get the sound going. From Setup, General, Alt-N(ext), Alt-N(ext), Alt-N(ext), Audio System: the top of the screen is a button titled "Scan for audio devices"; move the highlight there and press the Space bar. Then Tab down to Audio Output Device: and left-right arrow until "ALSA:hw:Card=Intel,DEV=0" is selected. Then Alt-N(ext) until "Finish". Now you should have sound. You should now have MythTV working nicely on the Lenovo S12. Notes about wireless: Running the Lenovo S12 on wireless is demanding on both power and the WiFi connection. Best results will be obtained when running on power and a wired connection. I run my S12 on wireless, actually two serial connections with two access points, something that is not easy to achieve. Here: Mythbuntu client-server (in den) <-> wireless link 1 <-> office LAN <-> wireless link 2 <-> Lenovo S12 Ubuntu 11.10. The office LAN is fixed IP behind an Untangle firewall router. There is another MythTV client on an Ubuntu 10.10 computer in the office (which has always worked well). Problem: the Mythbuntu/Win7 client hangs with frozen frames and a short segment of audio repeating. Hardware: Rosewill RNX-G300EX IEEE 802.11b/g PCI wireless card on the client-server; 2 Linksys WRT54GL wireless broadband routers on the LAN for link 1 and link 2. WRT54GL firmware: DD-WRT v24-sp2 (07/22/09) voip, set up to act as an access point. Note: many people advised this was an unworkable scheme, and in probably most cases it will be. Solution: set up DD-WRT with the following Wireless settings… Basic Channel: different fixed channels at least 4 apart; I use 6 & 11. Basic Sensitivity Range (ACK timing): 50. MAC filter use filter: Enable, selected Permit only clients listed to access… Requires adding MAC addresses in "Edit MAC Filter List". This causes the 54GLs to ignore all but the listed MAC addresses; downside: no "guest" capability.
    Advanced Basic rate: All. Advanced CTS Protection Mode: Off. Advanced Frame Burst: Enable. Advanced Max associated clients: 4 for client link 2, 1 for client-server link 1. Advanced AP isolation: Enable. Advanced Preamble: Short. Advanced Afterburner: On. Advanced Wireless GUI access: Off. Advanced WMM support: Off. Other settings: default for the supplied firmware. Why I suspect this worked: the 54GL access points with the stock firmware settings are set up to handle a multiple-client, wide-area situation. With these mods I reconfigured them for a small-area, few-client situation, with disabling Advanced WMM probably the most important change. In addition, when the MythTV client is used, all other users of its access point are turned off except for a Skype phone. Also, the client-server is set up to allow other connections through its LAN connection, and these are used to connect the TV and disc players, which are not used when the client is in use.
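
    One caveat on the blacklist command quoted earlier: the >> redirection is performed by the calling shell, not by sudo, so the append to /etc/modprobe.d/blacklist.conf fails for a non-root user even though rmmod succeeds. A hedged equivalent that runs the append as root:

        sudo rmmod acer_wmi
        sudo sh -c 'echo "blacklist acer_wmi" >> /etc/modprobe.d/blacklist.conf'
        # or: echo "blacklist acer_wmi" | sudo tee -a /etc/modprobe.d/blacklist.conf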

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); The two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x, CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) ); CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x, CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY); WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done.
Test Query and Execution Plan

The task is to count the rows resulting from joining tables T1 and T2 on the TID column:

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

The optimizer chooses a plan using parallel hash join, and partial aggregation:

The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image):

With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched:

Run without the actual execution plan or STATISTICS IO collection, for maximum performance, the query returns in around 2600ms.

Execution Plan Analysis

The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value '1234' is placed in thread 5's hash table, the execution plan must guarantee that any rows from T2 that also have join key value '1234' probe thread 5's hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. Because both exchanges use the same hash function, rows from T1 and T2 with the same join key must end up on the same hash join thread.

Expensive Exchanges

This business of repartitioning rows between threads can be very expensive, especially when a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
WHERE
    $PARTITION.PFT(T1.TID) = 1
    AND $PARTITION.PFT(T2.TID) = 1
OPTION (MAXDOP 1);

The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect extending this strategy to all 40 populated partitions to result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps enough to be competitive with the parallel hash join chosen by the optimizer. This raises a question: if the most efficient way to join one partition from each of the tables is a merge join, why does the optimizer not choose a merge join for the full query?
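One way to dig into that question is to compare the optimizer's own cost estimates for the two strategies. A minimal sketch, assuming you are comfortable reading raw showplan XML: capture the estimated plans for the unhinted query and a merge-hinted version (the MERGE JOIN hint is explored properly in the next section), then compare the StatementSubTreeCost values at the root of each plan:

SET SHOWPLAN_XML ON;
GO
-- The optimizer's choice (parallel hash join)
SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID;
GO
-- Merge join forced, for cost comparison
SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN);
GO
SET SHOWPLAN_XML OFF;
GO
-- Compare the StatementSubTreeCost attribute on each <StmtSimple> element.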
Forcing a Merge Join

Let's force the optimizer to use a merge join on the test query using a hint:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN);

This is the execution plan selected by the optimizer:

This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

Parallel Merge Join

We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (MERGE JOIN, QUERYTRACEON 8649);

The execution plan is:

This looks promising. It uses a similar strategy to the parallel hash join to distribute work across threads. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge):

A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro):

CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs):

Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than a parallel one in this case. The test results certainly confirm its reasoning.

Collocated Joins

In SQL Server 2008 and later, the optimizer has another strategy available when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

Costing and Plan Selection

The query optimizer did consider a collocated join for our original query, but rejected it on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

-- Pretend IOs are 50x cost temporarily
DBCC SETIOWEIGHT(50);

-- Co-located hash join
SELECT COUNT_BIG(*)
FROM dbo.T1 AS T1
JOIN dbo.T2 AS T2
    ON T2.TID = T1.TID
OPTION (RECOMPILE);

-- Reset IO costing
DBCC SETIOWEIGHT(1);

Collocated Join Plan

The estimated execution plan for the collocated join is:

The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange… and so on until all partitions have been processed.

It is important to understand that the parallel scans in this plan are different from those in the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition:

The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

CPU and Memory Efficiency Improvements

The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan.

In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

Collocated Hash Join Performance

The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is easily the worst result so far, so what went wrong? It turns out that cardinality estimation for the single-partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimate was for 121,951 rows.
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb:

A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

Collocated Merge Join

We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know:

- Merge join does not require a memory grant; and
- Merge join was the optimizer's preferred join option for a single-partition join.

Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

CROSS APPLY sys.partitions

We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

SELECT
    row_count = SUM(Subtotals.cnt)
FROM
(
    -- Partition numbers
    SELECT p.partition_number
    FROM sys.partitions AS p
    WHERE
        p.[object_id] = OBJECT_ID(N'T1', N'U')
        AND p.index_id = 1
) AS P
CROSS APPLY
(
    -- Count per collocated join
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE
        $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

The estimated plan is:

The cardinality estimates aren't all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts.

Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with the 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:

Using a Temporary Table

Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
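A hedged way to confirm that diagnosis for yourself, assuming SQL Server 2012 or later (where showplan XML can expose a NonParallelPlanReason attribute; earlier versions simply omit it), is to capture the estimated plan and inspect the root QueryPlan element:

SET SHOWPLAN_XML ON;
GO
SELECT
    row_count = SUM(Subtotals.cnt)
FROM
(
    SELECT p.partition_number
    FROM sys.partitions AS p
    WHERE
        p.[object_id] = OBJECT_ID(N'T1', N'U')
        AND p.index_id = 1
) AS P
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE
        $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;
GO
SET SHOWPLAN_XML OFF;
GO
-- If present, look for NonParallelPlanReason="..." on the <QueryPlan> element.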
We can work around that by writing the partition numbers to a temporary table (or table variable):

SET STATISTICS IO ON;
DECLARE @s datetime2 = SYSUTCDATETIME();

CREATE TABLE #P
(
    partition_number integer PRIMARY KEY
);

INSERT #P (partition_number)
SELECT p.partition_number
FROM sys.partitions AS p
WHERE
    p.[object_id] = OBJECT_ID(N'T1', N'U')
    AND p.index_id = 1;

SELECT
    row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE
        $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals;

DROP TABLE #P;

SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
SET STATISTICS IO OFF;

Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn't choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to:

In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time):

Unfortunately, the parallel plan it found appeared to be more expensive than the serial plan. This is a crazy result, caused by the optimizer's cost model not reducing operator CPU costs on the inner side of a nested loops join. Don't get me started on that, we'll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of the exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

Parallel Collocated Merge Join

We can produce the desired parallel plan using trace flag 8649 again:

SELECT
    row_count = SUM(Subtotals.cnt)
FROM #P AS p
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE
        $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);

The actual execution plan is:

One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far from the main point of this post.

Performance

The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate), from quickest to slowest:

- Collocated parallel merge join: 1350ms
- Parallel hash join: 2600ms
- Collocated serial merge join: 3500ms
- Serial merge join: 5000ms
- Parallel merge join: 8400ms
- Collocated parallel hash join: 25,300ms (hash spill per partition)

The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
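Those grant figures are easy to verify for yourself. A minimal sketch: while one of the test queries is executing, run the following from a second session (sys.dm_exec_query_memory_grants is a standard DMV; the 25-second collocated hash join run gives you the most time to catch the grant in flight):

-- Run from a second session while a test query is executing
SELECT
    MG.session_id,
    MG.requested_memory_kb,
    MG.granted_memory_kb,
    MG.used_memory_kb,
    MG.max_used_memory_kb
FROM sys.dm_exec_query_memory_grants AS MG;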
This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if they bother you.

Parallel Collocated Merge Join with Demand Partitioning

This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions, as sketched below):

SELECT
    row_count = SUM(Subtotals.cnt)
FROM
(
    VALUES
        (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
        (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
        (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
        (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
) AS P (partition_number)
CROSS APPLY
(
    SELECT cnt = COUNT_BIG(*)
    FROM dbo.T1 AS T1
    JOIN dbo.T2 AS T2
        ON T2.TID = T1.TID
    WHERE
        $PARTITION.PFT(T1.TID) = p.partition_number
        AND $PARTITION.PFT(T2.TID) = p.partition_number
) AS SubTotals
OPTION (QUERYTRACEON 8649);
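The hard-coded list does not have to be maintained by hand. The following is a minimal sketch of the dynamic SQL idea mentioned above, building the VALUES list from sys.partitions using the FOR XML PATH concatenation idiom; the exact string layout is an assumption (the original post does not show this step), and QUERYTRACEON still requires suitable permissions:

-- A sketch: generate and run the hard-coded VALUES query from sys.partitions
DECLARE @sql nvarchar(max);

SET @sql =
    N'SELECT row_count = SUM(Subtotals.cnt) ' +
    N'FROM (VALUES ' +
    STUFF(
    (
        -- One ",(n)" fragment per partition; STUFF removes the leading comma
        SELECT N',(' + CONVERT(nvarchar(11), p.partition_number) + N')'
        FROM sys.partitions AS p
        WHERE
            p.[object_id] = OBJECT_ID(N'dbo.T1', N'U')
            AND p.index_id = 1
        ORDER BY p.partition_number
        FOR XML PATH(N'')
    ), 1, 1, N'') +
    N') AS P (partition_number) ' +
    N'CROSS APPLY ' +
    N'( ' +
    N'    SELECT cnt = COUNT_BIG(*) ' +
    N'    FROM dbo.T1 AS T1 ' +
    N'    JOIN dbo.T2 AS T2 ON T2.TID = T1.TID ' +
    N'    WHERE $PARTITION.PFT(T1.TID) = P.partition_number ' +
    N'    AND $PARTITION.PFT(T2.TID) = P.partition_number ' +
    N') AS SubTotals ' +
    N'OPTION (QUERYTRACEON 8649);';

EXECUTE sys.sp_executesql @sql;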
The actual execution plan for the hard-coded query is:

The parallel collocated hash join plan is reproduced below for comparison:

The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer's collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

Final Words

It is a shame that the current query optimizer does not consider a collocated merge join (the Connect item was closed as Won't Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB.

The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

From a thread's point of view…

If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let's look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal.

Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from the other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another concurrently processes partitions (2, 10, 18, 26, 34), and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which each thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

Related Reading

Understanding and Using Parallelism in SQL Server
Parallel Execution Plans Suck

© 2013 Paul White – All Rights Reserved
Twitter: @SQL_Kiwi

