Search Results

Search found 60181 results on 2408 pages for 'meta data'.


  • Is Moving Entity Framework objects over a webservice really the best way?

    - by aceinthehole
    I've inherited a .NET project that has close to two thousand clients out in the field that need to push data periodically up to a central repository. The clients wake up and attempt to push the data up via a series of WCF web services, passing each Entity Framework entity as a parameter. Once the service receives this object, it performs some business logic on the data and then sticks it in its own database, which mirrors the database on the client machines. The trick is that this data is being transmitted over a metered connection, which is very expensive, so optimizing the data is a serious priority. We are using a custom encoder that compresses the data (and decompresses it on the other end) while it is being transmitted, and this is reducing the data footprint. However, the amount of data that the clients are using seems ridiculously large, given the amount of information that is actually being transmitted. It seems to me that Entity Framework itself may be to blame. I suspect that the objects are very large when serialized to be sent over the wire, with a lot of context information and who knows what else, when what we really need is just the 'new' inserts. Is using Entity Framework and WCF services as we have done so far the correct way, architecturally, of approaching this n-tiered, asynchronous, push-only problem? Or is there a different approach that could optimize the data use?
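    One way to test the "EF objects are bloated on the wire" theory is to stop passing entities to the service and push small data transfer objects instead. The sketch below is illustrative only; ReadingDto, Reading and the field names are hypothetical stand-ins, not types from the original project:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Runtime.Serialization;

    // Slim DataContract carrying only the fields the central repository needs.
    [DataContract]
    public class ReadingDto
    {
        [DataMember] public int ClientId { get; set; }
        [DataMember] public DateTime TakenAt { get; set; }
        [DataMember] public decimal Value { get; set; }
    }

    // Hypothetical shape of the EF entity on the client.
    public class Reading
    {
        public int ClientId { get; set; }
        public DateTime TakenAt { get; set; }
        public decimal Value { get; set; }
        // ...plus whatever EF attaches at runtime (relations, tracking state, proxies)
    }

    public static class PushHelper
    {
        // Project only the 'new' rows into DTOs before the WCF call, so none of
        // the entity's extra baggage gets serialized over the metered link.
        public static ReadingDto[] ToPayload(IEnumerable<Reading> newRows) =>
            newRows.Select(r => new ReadingDto
            {
                ClientId = r.ClientId,
                TakenAt = r.TakenAt,
                Value = r.Value
            }).ToArray();
    }

    Comparing the compressed size of such a payload against the compressed size of the serialized entities would show how much of the metered traffic is Entity Framework overhead rather than actual data.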

    Read the article

  • Designing communications for extensibility

    - by Thomas S.
    I am working on the design stages of an application that will a) collect data from various sources (in my case that's scientific data from serial ports), keeping track of the age of the data, b) generate real-time statistics (e.g. running averages), and c) display, record, and otherwise handle the data (and statistics). I anticipate that I will be adding both data producers and consumers over time, and would like to design this application abstractly so that I will be able to trivially add functionality with a small amount of interface code. What I'm stumbling on is deciding what communication infrastructure I should use to handle the interfaces. In particular, how should I make the processed data and statistics available to multiple consumers? Some things I've considered: writing to several named pipes (a variable number), with each consumer reading from one of them; using FUSE to make a userspace filesystem where a read() returns the latest line of data even if another process has already read it; making a TCP server and having consumers connect and request data individually; or simply writing the consumers as part of the same program that aggregates the data. So I would like to hear your advice on how to interface these functions in the best way to keep them separate and allow room for extensions.
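    If the producers and consumers end up in the same process (the last option listed), a small hub interface is usually enough to keep them decoupled: adding a new statistic, display or recorder later means implementing one interface and subscribing it. This is only a sketch under that assumption, with illustrative names:

    using System;
    using System.Collections.Generic;

    // A timestamped sample from any producer (serial-port reader, statistics, ...).
    public sealed class Sample
    {
        public string Source { get; set; }
        public DateTime Timestamp { get; set; }
        public double Value { get; set; }
    }

    // Anything that wants data (display, recorder, running-average calculator)
    // implements this and subscribes to the hub.
    public interface ISampleConsumer
    {
        void OnSample(Sample s);
    }

    public sealed class SampleHub
    {
        private readonly List<ISampleConsumer> consumers = new List<ISampleConsumer>();

        public void Subscribe(ISampleConsumer c) => consumers.Add(c);

        // Producers call Publish; the hub fans the sample out to every consumer.
        public void Publish(Sample s)
        {
            foreach (var c in consumers)
                c.OnSample(s);
        }
    }

    The same consumer interface could later be fronted by a TCP server or a named pipe without changing the producers.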

    Read the article

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a Game Engine: I am planning a computer game and its engine. There will be a 3-dimensional world with a first-person view and it will be single player for now. The programming language is C++ and it uses OpenGL. Data Centered Design Decision: My design decision is to use a data-centered architecture where there is a global event manager and a global data manager. There are many components like physics, input, sound, renderer, AI, ... Each component can trigger and listen to events. Moreover, each component can read, edit, create and remove data. The question is about the data manager. Whether to Use a Relational Database: Should I use a SQL database, e.g. SQLite or MySQL, to store the game data? This covers virtually all game content like items, characters, inventories, ... except for meshes and textures, which are even more performance-critical, so I will keep them in memory. Is a SQL database fast enough to use for real-time reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have? Advantages Would Be: The advantages of using a relational database like MySQL would be the data-oriented structure, which allows fast computation. I would not need objects for representing entities. I could easily query data of objects near the player needed for rendering, and I wouldn't have to take care of data of objects far away. Moreover, there would be no need for savegames since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy because there already is a place where the whole game state is stored.
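    For a rough feel of the relational option, here is a minimal sketch (written in C# with the Microsoft.Data.Sqlite package purely for brevity; the engine itself is C++, and it is the SQL that carries over). The table and column names are made up for illustration:

    using Microsoft.Data.Sqlite;   // NuGet package: Microsoft.Data.Sqlite

    class PositionStoreDemo
    {
        static void Main()
        {
            using var conn = new SqliteConnection("Data Source=:memory:");
            conn.Open();

            var cmd = conn.CreateCommand();
            cmd.CommandText = "CREATE TABLE entity (id INTEGER PRIMARY KEY, x REAL, y REAL, z REAL)";
            cmd.ExecuteNonQuery();

            // Per-frame position update for one character (SQLite upsert).
            cmd.CommandText = "INSERT INTO entity (id, x, y, z) VALUES (1, 10.0, 0.0, 5.0) " +
                              "ON CONFLICT(id) DO UPDATE SET x = excluded.x, y = excluded.y, z = excluded.z";
            cmd.ExecuteNonQuery();

            // The 'objects near the player' query mentioned in the post.
            cmd.CommandText = "SELECT id FROM entity WHERE x BETWEEN $px - 50 AND $px + 50";
            cmd.Parameters.AddWithValue("$px", 10.0);
            using var reader = cmd.ExecuteReader();
            while (reader.Read())
                System.Console.WriteLine(reader.GetInt64(0));
        }
    }

    Note that even with an in-memory SQLite database, every per-frame position update is a SQL round-trip, so hot data like moving-character positions is typically kept in plain in-memory structures, with the database used for persistence and savegames rather than frame-by-frame state.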

    Read the article

  • What does the ".align" x86 Assembler directive do exactly? [migrated]

    - by Sinister Clock
    I will list exactly what I do not understand, and show you the parts I cannot understand as well. First off, the .align directive: .align integer, pad. The .align directive causes the next data generated to be aligned modulo integer bytes. 1. What is implied by "causes the next data generated to be aligned modulo integer bytes"? I can surmise that the next data generated is a memory-to-register transfer, no? Modulo would imply the remainder of a division. I do not understand "to be aligned modulo integer bytes"... What would be a remainder of a simple data declaration, and how would the next data generated being aligned by a remainder be useful? If the next data is aligned modulo, that is saying the next generated data, whatever that means exactly, is the remainder of an integer? That makes absolutely no sense. What specifically would the .align directive, say .align 8, be for when issued in x86 for a data byte compiled from a C char, i.e., char CHARACTER = 0;? Or when coded directly with that directive, not as preliminary assembly code produced by compiling C? I have debugged in assembly and noticed that any C/C++ data declarations, like chars, ints, floats, etc., will insert the directive .align 8 before each of them, and add other directives like .bss, .zero, .globl, .text, .Letext0, .Ltext0. What are all of these directives for? That is my main question. I have learned a lot of the main x86 assembly instructions, but was never introduced to all of these strange directives. How do they affect the opcodes, and are all of them necessary?
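    For what it's worth, "aligned modulo integer bytes" just means the assembler pads the current output position with filler bytes until the address of the next datum is a multiple of the integer argument (address % N == 0); nothing about the datum itself becomes a remainder. A small sketch of the arithmetic, assuming a power-of-two alignment like the .align 8 mentioned above:

    using System;

    class AlignDemo
    {
        // Round an offset up to the next multiple of a power-of-two alignment.
        static long AlignUp(long offset, long alignment) =>
            (offset + alignment - 1) & ~(alignment - 1);

        static void Main()
        {
            // If the next free position in a section is 13 and the source says
            // ".align 8", the assembler emits 3 padding bytes so the following
            // item starts at offset 16, and 16 % 8 == 0.
            Console.WriteLine(AlignUp(13, 8)); // prints 16
        }
    }

    So for char CHARACTER = 0; the directive does not change the byte's value; it only controls which address the byte lands on, which keeps later, wider data naturally aligned.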

    Read the article

  • PHP Code (modules) included via MySQL database, good idea?

    - by ionFish
    The main script includes "modules" which add functionality to it. Each module is set up like this: <?php //data collection stuff //(...) approx 80 lines of code //end data collection $var1 = 'some data'; $var2 = 'more data'; $var3 = 'other data'; ?> Each module has exactly the same variables; only the data collection is different. I was wondering if it's a reasonable idea to store the module data in MySQL like this: [database] |_modules |_name |_function (the raw PHP data from above) |_description |_author |_update-url |_version |_enabled ...and then include the PHP data from the database and execute it? Something like a tab-navigation system at the top of the page for each module name, where each tab's page content works by parsing the database-stored code of the module from the function column. The purpose would be to save code space (fewer lines), allow for easy updates, and include/exclude modules based on the enabled option. This is how many other web apps work, some of my own too, but I had never thought about it this deeply before. Are there any drawbacks or security risks to this?

    Read the article

  • ASP.NET Creating a Rich Repeater, DataBind wiping out custom added controls...

    - by tonyellard
    So...I had this clever idea that I'd create my own Repeater control that implements paging and sorting by inheriting from Repeater and extending it's capabilities. I found some information and bits and pieces on how to go about this and everything seemed ok... I created a WebControlLibrary to house my custom controls. Along with the enriched repeater, I created a composite control that would act as the "pager bar", having forward, back and page selection. My pager bar works 100% on it's own, properly firing a paged changed event when the user interacts with it. The rich repeater databinds without issue, but when the databind fires (when I call base.databind()), the control collection is cleared out and my pager bars are removed. This screws up the viewstate for the pager bars making them unable to fire their events properly or maintain their state. I've tried adding the controls back to the collection after base.databind() fires, but that doesn't solve the issue. I start to get very strange results including problems with altering the hierarchy of the control tree (resolved by adding [ViewStateModeById]). Before I go back to the drawing board and create a second composite control which contains a repeater and the pager bars (so that the repeater isn't responsible for the pager bars viewstate) are there any thoughts about how to resolve the issue? In the interest of share and share alike, the code for the repeater itself is below, the pagerbars aren't as significant as the issue is really the maintaining of state for any additional child controls. (forgive the roughness of some of the code...it's still a work in progress) using System; using System.Collections.Generic; using System.ComponentModel; using System.Text; using System.Data; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; [ViewStateModeById] public class SortablePagedRepeater : Repeater, INamingContainer { private SuperRepeaterPagerBar topBar = new SuperRepeaterPagerBar(); private SuperRepeaterPagerBar btmBar = new SuperRepeaterPagerBar(); protected override void OnInit(EventArgs e) { Page.RegisterRequiresControlState(this); InitializeControls(); base.OnInit(e); EnsureChildControls(); } protected void InitializeControls() { topBar.ID = this.ID + "__topPagerBar"; topBar.NumberOfPages = this._currentProperties.numOfPages; topBar.CurrentPage = this.CurrentPageNumber; topBar.PageChanged += new SuperRepeaterPagerBar.PageChangedEventHandler(PageChanged); btmBar.ID = this.ID + "__btmPagerBar"; btmBar.NumberOfPages = this._currentProperties.numOfPages; btmBar.CurrentPage = this.CurrentPageNumber; btmBar.PageChanged += new SuperRepeaterPagerBar.PageChangedEventHandler(PageChanged); } protected override void CreateChildControls() { EnsureDataBound(); this.Controls.Add(topBar); this.Controls.Add(btmBar); //base.CreateChildControls(); } private void PageChanged(object sender, int newPage) { this.CurrentPageNumber = newPage; } public override void DataBind() { //pageDataSource(); //DataBind removes all controls from control collection... base.DataBind(); Controls.Add(topBar); Controls.Add(btmBar); } private void pageDataSource() { //Create paged data source PagedDataSource pds = new PagedDataSource(); pds.PageSize = this.ItemsPerPage; pds.AllowPaging = true; // first get a PagedDataSource going and perform sort if possible... 
if (base.DataSource is System.Collections.IEnumerable) { pds.DataSource = (System.Collections.IEnumerable)base.DataSource; } else if (base.DataSource is System.Data.DataView) { DataView data = (DataView)DataSource; if (this.SortBy != null && data.Table.Columns.Contains(this.SortBy)) { data.Sort = this.SortBy; } pds.DataSource = data.Table.Rows; } else if (base.DataSource is System.Data.DataTable) { DataTable data = (DataTable)DataSource; if (this.SortBy != null && data.Columns.Contains(this.SortBy)) { data.DefaultView.Sort = this.SortBy; } pds.DataSource = data.DefaultView; } else if (base.DataSource is System.Data.DataSet) { DataSet data = (DataSet)DataSource; if (base.DataMember != null && data.Tables.Contains(base.DataMember)) { if (this.SortBy != null && data.Tables[base.DataMember].Columns.Contains(this.SortBy)) { data.Tables[base.DataMember].DefaultView.Sort = this.SortBy; } pds.DataSource = data.Tables[base.DataMember].DefaultView; } else if (data.Tables.Count > 0) { if (this.SortBy != null && data.Tables[0].Columns.Contains(this.SortBy)) { data.Tables[0].DefaultView.Sort = this.SortBy; } pds.DataSource = data.Tables[0].DefaultView; } else { throw new Exception("DataSet doesn't have any tables."); } } else if (base.DataSource == null) { // don't do anything? } else { throw new Exception("DataSource must be of type System.Collections.IEnumerable. The DataSource you provided is of type " + base.DataSource.GetType().ToString()); } if (pds != null && base.DataSource != null) { //Make sure that the page doesn't exceed the maximum number of pages //available if (this.CurrentPageNumber >= pds.PageCount) { this.CurrentPageNumber = pds.PageCount - 1; } //Set up paging values... btmBar.CurrentPage = topBar.CurrentPage = pds.CurrentPageIndex = this.CurrentPageNumber; this._currentProperties.numOfPages = btmBar.NumberOfPages = topBar.NumberOfPages = pds.PageCount; base.DataSource = pds; } } public override object DataSource { get { return base.DataSource; } set { //init(); //reset paging/sorting values since we've potentially changed data sources. base.DataSource = value; pageDataSource(); } } protected override void Render(HtmlTextWriter writer) { topBar.RenderControl(writer); base.Render(writer); btmBar.RenderControl(writer); } [Serializable] protected struct CurrentProperties { public int pageNum; public int itemsPerPage; public int numOfPages; public string sortBy; public bool sortDir; } protected CurrentProperties _currentProperties = new CurrentProperties(); protected override object SaveControlState() { return this._currentProperties; } protected override void LoadControlState(object savedState) { this._currentProperties = (CurrentProperties)savedState; } [Category("Status")] [Browsable(true)] [NotifyParentProperty(true)] [DefaultValue("")] [Localizable(false)] public string SortBy { get { return this._currentProperties.sortBy; } set { //If sorting by the same column, swap the sort direction. 
if (this._currentProperties.sortBy == value) { this.SortAscending = !this.SortAscending; } else { this.SortAscending = true; } this._currentProperties.sortBy = value; } } [Category("Status")] [Browsable(true)] [NotifyParentProperty(true)] [DefaultValue(true)] [Localizable(false)] public bool SortAscending { get { return this._currentProperties.sortDir; } set { this._currentProperties.sortDir = value; } } [Category("Status")] [Browsable(true)] [NotifyParentProperty(true)] [DefaultValue(25)] [Localizable(false)] public int ItemsPerPage { get { return this._currentProperties.itemsPerPage; } set { this._currentProperties.itemsPerPage = value; } } [Category("Status")] [Browsable(true)] [NotifyParentProperty(true)] [DefaultValue(1)] [Localizable(false)] public int CurrentPageNumber { get { return this._currentProperties.pageNum; } set { this._currentProperties.pageNum = value; pageDataSource(); } } }
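    As an aside, here is a minimal sketch of the composite-control fallback mentioned above: a wrapper control that owns a plain Repeater plus the two pager bars, so the Repeater's DataBind() never touches the pager bars' control tree. SuperRepeaterPagerBar is the poster's own control and its members are assumed here; this is not the poster's final code.

    using System.Web.UI;
    using System.Web.UI.WebControls;

    // Sketch only: the pager bars are siblings of the inner Repeater, not children
    // of it, so rebinding the Repeater leaves their control tree and state alone.
    public class PagedRepeaterComposite : CompositeControl
    {
        private readonly Repeater repeater = new Repeater();
        private readonly SuperRepeaterPagerBar topBar = new SuperRepeaterPagerBar();
        private readonly SuperRepeaterPagerBar btmBar = new SuperRepeaterPagerBar();

        public ITemplate ItemTemplate { get; set; }
        public object DataSource { get; set; }

        protected override void CreateChildControls()
        {
            Controls.Clear();
            Controls.Add(topBar);
            Controls.Add(repeater);
            Controls.Add(btmBar);
        }

        public override void DataBind()
        {
            EnsureChildControls();
            repeater.ItemTemplate = ItemTemplate;
            repeater.DataSource = DataSource;
            repeater.DataBind();   // only the inner repeater rebinds
        }
    }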

    Read the article

  • How to simulate browser form POST method using PHP/cURL

    - by user283266
    I'm trying to simulate browser with POST method using PHP/cURL. When I looked at that live Http header it shows Content-Type: multipart/form-data. I checked on the internet where it was suggested that cURL will send multipart/form-data when a custom headers is specified to Content-Type: multipart/form-data. $headers = array( 'Content-Type' => 'multipart/form-data; boundary='.$boundary ); This didn't work for me either when I print_r(curl_getinfo()) it showed [content_type] => text/html; charset=UTF-8 Which means cURL sent a default headers I also read that sending/uploading a file with cURL will cause data to be send as multipart/form-data. I created a file which curl uploaded but again when I ran curl_getinfo I got [content_type] => text/html; charset=UTF-8 $data_array = array("field" => "@c:\file_location.txt"); I also tried to read a file content so that the only thing sent would be content NOT ATTACHED FILE but this didn't work for me curl_getinfo shows [content_type] => text/html; charset=UTF-8. $data_array = array("field" => "<c:\file_location.txt"); // note @ replaced with < Do I miss somthing here? This is the referer url POST somepath HTTP/1.1 Host: www(dot)domain(dot)com User-Agent: Mozilla/5.0 (Windows) Gecko/13081217 Firefox/3 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Referer: url/some-file.php Content-Type: multipart/form-data; boundary=--------------------------$boundary Content-Length: $some_number ----------------------------$boundary Content-Disposition: form-data; name="$some_Value1" $some_text1 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value2" $some_text2 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value3" $some_text3 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value4" $some_text4 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value5" $some_text5 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value6" $some_text6 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value7" $some_text7 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value8" $some_text8 ----------------------------$boundary Content-Disposition: form-data; name="$some_Value9" ----------------------------$boundary Content-Disposition: form-data; name="$some_Value10" ----------------------------$boundary-- Here is a piece of code. <? 
//Include files set_time_limit(0); include'body.php'; include'keyword.php'; include'bio.php'; include'summary.php'; include'headline.php'; include'category.php'; include'spin.php'; include'random-text.php'; $category = category(); $headline = headline() ; $summary = summary(); $keyword = keyword(); $body = body(); $bio = bio(); $target="url"; $ref ="url_ref"; $c = "Content-Disposition: form-data; name="; $boundary = "---------------------------".random_text(); $category = category(); $headline = headline() ; $summary = summary(); $keyword = keyword(); $body = body(); $bio = bio(); // emulating content form as it appears on livehttp header $data = "\r\n".$boundary."\r\n".$c."\"pen_id\"\r\n\r\n".$Auth_id."\r\n".$boundary."\r\n".$c."\"cat_id\"\r\n\r\n".category()."\r\n".$boundary."\r\n".$c."\"title\"\r\n\r\n".headline()."\r\n".$boundary."\r\n".$c."\"meta_desc\"\r\n\r\n".summary()."\r\n".$boundary."\r\n".$c."\"meta_keys\"\r\n\r\n".keyword()."\r\n".$boundary."\r\n".$c."\"content\"\r\n\r\n".body()."\r\n".$boundary."\r\n".$c."\"author_bio\"\r\n\r\n".bio()."\r\n".$boundary."\r\n".$c."\"allow_comments\"\r\n\r\ny\r\n".$boundary."\r\n".$c."\"id\"\r\n\r\n\r\n".$boundary."\r\n".$c."\"action\"\r\n\r\n\r\n".$boundary."--\r\n"; // inserting content into a file $file = "C:\file_path.txt"; $fh = fopen($file, 'w+') or die("Can't open file"); fwrite($fh,$data); fclose($fh); // pulling out content from a file as multipart/form-data $data_array = array ("field" => "<C:\file_path.txt"); $headers = array ( 'POST /myhome/article/new HTTP/1.1', 'Host: url', 'User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.20) Gecko/20081217 Firefox/2.0.0.20 (.NET CLR 3.5.30729)', 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9;q=0.8', 'Accept-Language: en-us,en;q=0.5', 'Accept-Encoding: gzip,deflate', 'Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7', 'Keep-Alive: 300', 'Connection: keep-alive', 'Content-Type: multipart/form-data; boundary='.$boundary, 'Content-Length: '.strlen($data), ); # Create the cURL session $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $target); // Define target site curl_setopt($ch, CURLOPT_POST,1); curl_setopt($ch, CURLOPT_HEADER, $headers); // No http head //curl_setopt($ch, CURLOPT_REFERER, $ref); curl_setopt($ch, CURLOPT_NOBODY, FALSE); curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); // Return page in string curl_setopt($ch, CURLOPT_COOKIEJAR, "c:\cookie\cookies.txt"); // Tell cURL where to write curl_setopt($ch, CURLOPT_COOKIEFILE, "c:\cookie\cookies.txt"); // Tell cURL which cookies //curl_setopt($ch, CURLOPT_USERAGENT, $agent); curl_setopt($ch, CURLOPT_POST, TRUE); curl_setopt($ch, CURLOPT_POSTFIELDS, "$data_array"); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE); // Follow redirects curl_setopt($ch, CURLOPT_MAXREDIRS, 4); # Execute the PHP/CURL session and echo the downloaded page $page = curl_exec($ch); $err = curl_error($ch); $info =curl_getinfo($ch); # Close the cURL session curl_close($ch); print_r($err); print_r($info); ?>

    Read the article

  • The Minimalist Approach to Content Governance - Retire Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. Good news - the Retire Phase is actually more fun than the Manage Phase. During the Retire Phase our content management team should not have to track down content creators if the Request Phase of this process was completed successfully. The ownership meta data, success criteria and time stamp that were applied to the original content submission will help to manage content at the end of the content life cycle. The Retire Phase will provide the opportunity for us to prune irrelevant content items through archiving or deletion, keeping the content system clear of irrelevant information and streamlining users' ability to browse and search for content. 1. Act on Metrics Established during the Request Phase. Why - Some information is only relevant for a given amount of time. In Content Platform Migration Strategy - Artifacts vs Perishable Content we examined two content types - Artifacts and Perishable content. Understanding the differences between Artifacts and Perishable content will allow us to explicitly respect their various lifespans. Additionally, some content may have been part of a project that failed to meet the success criteria outlined in the Request Phase. Any content that did not meet the metrics outlined in the Request Phase should be considered for deletion. How - Thankfully, by adhering to The Minimalist Approach to Content Governance our content should have some level of meta data associated with it that will allow us to quickly sort and understand how to deal with it. Content Management Systems like Oracle's Universal Content Management (UCM) natively allow you to create and save advanced searches that can use content meta data like folders, author, expiration date, security settings and custom meta data to pull back listings of content for examination. Additionally, analytics are available for all content items that allow us to determine if the usage is meeting the success criteria that may have been previously outlined during the Request Phase. The lists that are produced from these approaches can be quickly reviewed for each project with the content owners and, based on the nature of the content and success criteria, undergo archiving or deletion. Impact - Retiring content that is no longer relevant will allow end users to have fast and relevant access to information across your enterprise. As we mentioned in our first post in this series - it is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use in the third week and during the third year. The light level of effort that was placed into the Request Phase of this process will set us up to keep content clean and relevant for a long time to come. With an up-to-date content repository, users will be able to quickly find the information that is critical to their work processes. You might not get a holiday named in your honor for managing the content system, but users will appreciate their quick access to quality information.

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary: The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark. What’s New in MySQL Cluster 7.2: MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key / Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the Figure below, and detailed in the MySQL Cluster New Features whitepaper. Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use. One of the key enhancements delivered in MySQL Cluster 7.2 is the extensions made to the multi-threading processes of the data nodes. Multi-Threaded Data Node Extensions: The MySQL Cluster 7.2 data node is now functionally divided into seven thread types: 1) Local Data Manager threads (ldm). Note – these are sometimes also called LQH threads. 2) Transaction Coordinator threads (tc) 3) Asynchronous Replication threads (rep) 4) Schema Management threads (main) 5) Network receiver threads (recv) 6) Network send threads (send) 7) IO threads. Each of these thread types is discussed in more detail below. MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible, however, to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads. The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is also acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2 that significantly enhanced complex query performance by pushing JOIN operations down to the data nodes. Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release. To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain.
The schema management thread has little load, so is implemented with a single thread. The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes the Cluster. The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide for lower latency. To achieve highest throughput however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 to a legacy mode that does not use any of the send threads – useful for those workloads that are most sensitive to latency. The IO Thread is the final thread type and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load. Benchmarking the Scalability Enhancements The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x. Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1 The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data nodes than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper – coming soon, so stay tuned! In a following blog post, I’ll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster Conclusion MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.

    Read the article

  • Managing 404 error pages with noindex and url rewrite

    - by ZenMaster
    Currently I use custom 404 error pages, with the following meta tag on them: <meta content="noindex" name="robots"> My guess is that this way Google will remove deleted pages from the index faster; has anyone experienced a case where it does? Also, is it better to have the URL path rewritten to the actual error page, like the URL pattern http://{mysite}/{404_error_page}, or is it best to keep the old deleted page's URL when serving a 404 error?

    Read the article

  • Star rating not showing in rich snippets

    - by Danny R
    We've recently been doing a lot of work on our site's SEO (www.betterthanreviews.com). We recently did a push to update the rich snippets breadcrumb, meta description, and star rating. After giving Google some time to index the site, it has updated the breadcrumbs and meta descriptions for our review pages, but the stars are still not showing. This is currently how it appears on a Google search (link to the actual page: http://www.betterthanreviews.com/home-security/livewatch): This is what the Rich Snippets is supposed to look like, and how it appears in Google's testing tool: More context: As seen in our html, we are using schema.org language. We initially were using schema.org/Corporation for the site, but we now have the page labeled as schema.org/HomeAndConstructionBusiness because Google will not show star ratings for the Corporation language. However, in our Webmaster Tools, the Structured Data is still showing the Corporation language, which could be a potential issue. Here is a look at some of the coding that we used. But it can be looked at closer by inspecting the element: <div class="aggregate-rating" itemprop="aggregateRating" itemscope="" itemtype="http://schema.org/AggregateRating"> <div class="review row_fluid" itemprop="review" itemscope="" itemtype="http://schema.org/Review"> <div class="row_fluid rating" itemprop="reviewRating" itemscope="" itemtype="http://schema.org/Rating"> <meta content="4.5" itemprop="ratingValue" title="4.5 out of 5 stars" class="star-rating-readonly"> <meta content="2013-12-05" itemprop="datePublished"> <p class="review-headline" itemprop="headline">Way better than my previous system</p> <div> <p class="reviewer" itemprop="author">Scott H. </p> <span class="bullet">•</span> <p class="created_at">2 months ago</p> <p class="content" itemprop="description">I love it! The experience I have had so far is extremely positive. I had another alarm system before and I didn't like it but this one is really nice. I am telling everybody about it.</p> </div> </div> Any suggestions for how to fix this?

    Read the article

  • Master-slave vs. peer-to-peer archictecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don’t “trample” on each other’s toes, resulting in data corruption. In a nutshell, here’s the issue we were trying to address. In everyday life, traffic lights serve the same purpose. They ensure that traffic flows smoothly and when everyone follows the rules, there are no accidents at intersections. As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches. Every transaction has to pay this penalty. To use the earlier traffic light analogy, if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there’s clearly no danger of a collision, you know what I mean. The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held, by detecting whether there were one or more transactions that needed conflicting access; if there was no conflict, you could get by without holding any lock at all. In many “well-behaved” workloads, there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the “normal” behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness. So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, during some interval in time, it will be the case that some copies have received the update, and others haven’t. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they’re all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all’s good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies. So what’s the problem with this? There are two major issues: First, IT’S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies.
    That’s pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the out-dated copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency). Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might end up with different versions of the data item in different copies. In this case, the application program also has to resolve conflicts and then propagate the correct value to the copies that are out-dated or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don’t handle this correctly, and worse, don’t even know it. Imagine having 100s of millions of records in your database – how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay for ensuring that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially, additional messages? All this overhead, when all you were trying to do was to read a data item. Wouldn’t it be simpler to avoid this problem in the first place? Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and client drivers are also aware of the “currency” of a replica. In other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness of data (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request, and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver will only send the request to exactly the right replica, not many). So, back to my original point. A well designed and well architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible in order to deliver a highly efficient and high performance system. If you’ve ever programmed an Oracle NoSQL Database application, you’ll know the difference!
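    The replica-versus-master routing decision described above can be illustrated generically. This is not the Oracle NoSQL Database driver API, just a sketch of the idea with made-up types:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Replica
    {
        public string Node { get; set; }
        public TimeSpan Staleness { get; set; }   // how far this copy lags the master
        public int LatencyMs { get; set; }        // cost of reaching it from the client
    }

    public static class ReadRouter
    {
        // Pick the cheapest replica that is fresh enough; if none qualifies,
        // the master always holds the authoritative copy.
        public static string RouteRead(string master, IEnumerable<Replica> replicas, TimeSpan maxStaleness)
        {
            var candidate = replicas
                .Where(r => r.Staleness <= maxStaleness)
                .OrderBy(r => r.LatencyMs)
                .FirstOrDefault();

            return candidate != null ? candidate.Node : master;
        }
    }

    A single routing decision per read replaces the fetch-from-many-copies-and-compare work a peer-to-peer client has to do.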

    Read the article

  • How do I control how often search engines visit my site?

    - by Nick
    I've been using the following line in the <head> of my sites for years: <meta name="revisit-after" content="3 days" /> I recently discovered that it's not one of the meta tags that Google understands, which I take to mean that there's no point in including it, and that it's been doing no good at all for years. How often do search engines crawl a website by default, and what reliable ways are there to increase or decrease that frequency?

    Read the article

  • If all programming languages are Turing Complete then why do we have language wars?

    - by kadaj
    There are language wars saying one programming language is better than another. Consider Lisp and Java: we can argue that the metaprogramming capabilities of Lisp are better than those of Java. But that does not mean Java cannot have metaprogramming capabilities without being another dialect of Lisp. Basically, all programming languages are Turing complete. So doesn't that mean we could solve any solvable problem in all those programming languages?

    Read the article

  • 3 Pitfalls of Link Building

    The search engines have grown recently, and their way of choosing page ranks has been altered. Up until recently, meta tags placed on your page were all you needed to get ranked for various keywords. Then meta tags were no longer used, because they produced results that weren't accurate.

    Read the article

  • problems in fetching upgrades

    - by andre
    I am presently using Ubuntu 10.04 and would like to upgrade to the latest Ubuntu release, but while trying to upgrade I get these errors. Can anybody help me solve this and proceed? I'm just a beginner with Ubuntu. Thanks. W: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux-meta/linux-image-generic_2.6.32.38.44_i386.deb 404 Not Found [IP: 91.189.92.166 80] W: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux-meta/linux-headers-generic_2.6.32.38.44_i386.deb 404 Not Found [IP: 91.189.92.166 80]

    Read the article

  • Judgment Calls in SEO Add Up to Results

    The titles and descriptions seen above the URLs on search engine results pages are taken by the search engines from the Meta data of the pages at first until other options are planted in directories during an SEO campaign. If a site has no meta description and no SEO content out there on the Web, the search engine selects some relevant snippets of content from somewhere on the site. The answer to the query may be there; if not the searcher will have to access the site and look for the information.

    Read the article

  • What Are the Most Effective SEO Services?

    The most effective SEO services are link-building campaigns based on fundamental changes to site architecture and unique, user-generated content. Site architecture consists of the page title, meta description, and keywords, collectively known as the meta data (technical aspects) of your site.

    Read the article

  • jboss 5.0 data source configuration in ear file. How can I run oracle 10g and 11g on the same serve

    - by Joe
    Currently my setup is: in my ear META-INF/jboss-app.xml <jboss-app> <service>datasource-ds.xml</service> </module> and datasource-ds.xml <datasources> <local-tx-datasource> <jndi-name>jdbc/mydeployment</jndi-name> <connection-url>jdbc:oracle:thin:@eir:myport:mydbname</connection-url> <driver-class>oracle.jdbc.driver.OracleDriver</driver-class> <user-name>myuser</user-name> <password>mypassword</password> <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name> <metadata> <type-mapping>Oracle9i</type-mapping> </metadata> </local-tx-datasource> </datasources> and it works when ojdbc5.jar is in my servername/lib How can I config my oracle driver information in my .ear file so that I can have two different ear deployments, one using oracle 10g and one using oracle 11g?

    Read the article

  • How can I automatically generate sql update scripts when some data is updated ?

    - by Brann
    I'd like to automatically generate an update script each time a value is modified in my database. In other words, if a stored procedure, a query, or whatever updates column a with value b in table c (which has a pk column (i,j...k)), I want to generate this: update c set a=b where i=... and j=... and k=... and store it somewhere (for example as a raw string in a table). To complicate things, I want the script to be generated only if the update has been made by a specific user. The good news is that I've got a primary key defined for all my tables. I can see how to do this using a trigger, but I would need to generate specific triggers for each table and update them every time my schema changes. I guess there are some built-in ways to do this, since SQL Server sometimes needs to store this kind of thing (while using transactional replication, for example), but I couldn't find anything so far... Any ideas? I'm also interested in ways to automatically generate the triggers (probably using triggers - meta triggers, huh? - since I will need to update the triggers automatically when the schema changes).

    Read the article

  • A free standing ASP.NET Pager Web Control

    - by Rick Strahl
    Paging in ASP.NET has been relatively easy with stock controls supporting basic paging functionality. However, recently I built an MVC application and one of the things I ran into was that I HAD TO build manual paging support into a few of my pages. Dealing with list controls and rendering markup is easy enough, but doing paging is a little more involved. I ended up with a small but flexible component that can be dropped anywhere. As it turns out the task of creating a semi-generic Pager control for MVC was fairly easily. Now I’m back to working in Web Forms and thought to myself that the way I created the pager in MVC actually would also work in ASP.NET – in fact quite a bit easier since the whole thing can be conveniently wrapped up into an easily reusable control. A standalone pager would provider easier reuse in various pages and a more consistent pager display regardless of what kind of 'control’ the pager is associated with. Why a Pager Control? At first blush it might sound silly to create a new pager control – after all Web Forms has pretty decent paging support, doesn’t it? Well, sort of. Yes the GridView control has automatic paging built in and the ListView control has the related DataPager control. The built in ASP.NET paging has several issues though: Postback and JavaScript requirements If you look at paging links in ASP.NET they are always postback links with javascript:__doPostback() calls that go back to the server. While that works fine and actually has some benefit like the fact that paging saves changes to the page and post them back, it’s not very SEO friendly. Basically if you use javascript based navigation nosearch engine will follow the paging links which effectively cuts off list content on the first page. The DataPager control does support GET based links via the QueryStringParameter property, but the control is effectively tied to the ListView control (which is the only control that implements IPageableItemContainer). DataSource Controls required for Efficient Data Paging Retrieval The only way you can get paging to work efficiently where only the few records you display on the page are queried for and retrieved from the database you have to use a DataSource control - only the Linq and Entity DataSource controls  support this natively. While you can retrieve this data yourself manually, there’s no way to just assign the page number and render the pager based on this custom subset. Other than that default paging requires a full resultset for ASP.NET to filter the data and display only a subset which can be very resource intensive and wasteful if you’re dealing with largish resultsets (although I’m a firm believer in returning actually usable sets :-}). If you use your own business layer that doesn’t fit an ObjectDataSource you’re SOL. That’s a real shame too because with LINQ based querying it’s real easy to retrieve a subset of data that is just the data you want to display but the native Pager functionality doesn’t support just setting properties to display just the subset AFAIK. DataPager is not Free Standing The DataPager control is the closest thing to a decent Pager implementation that ASP.NET has, but alas it’s not a free standing component – it works off a related control and the only one that it effectively supports from the stock ASP.NET controls is the ListView control. This means you can’t use the same data pager formatting for a grid and a list view or vice versa and you’re always tied to the control. 
Paging Events In order to handle paging you have to deal with paging events. The events fire at specific time instances in the page pipeline and because of this you often have to handle data binding in a way to work around the paging events or else end up double binding your data sources based on paging. Yuk. Styling The GridView pager is a royal pain to beat into submission for styled rendering. The DataPager control has many more options and template layout and it renders somewhat cleaner, but it too is not exactly easy to get a decent display for. Not a Generic Solution The problem with the ASP.NET controls too is that it’s not generic. GridView, DataGrid use their own internal paging, ListView can use a DataPager and if you want to manually create data layout – well you’re on your own. IOW, depending on what you use you likely have very different looking Paging experiences. So, I figured I’ve struggled with this once too many and finally sat down and built a Pager control. The Pager Control My goal was to create a totally free standing control that has no dependencies on other controls and certainly no requirements for using DataSource controls. The idea is that you should be able to use this pager control without any sort of data requirements at all – you should just be able to set properties and be able to display a pager. The Pager control I ended up with has the following features: Completely free standing Pager control – no control or data dependencies Complete manual control – Pager can render without any data dependency Easy to use: Only need to set PageSize, ActivePage and TotalItems Supports optional filtering of IQueryable for efficient queries and Pager rendering Supports optional full set filtering of IEnumerable<T> and DataTable Page links are plain HTTP GET href Links Control automatically picks up Page links on the URL and assigns them (automatic page detection no page index changing events to hookup) Full CSS Styling support On the downside there’s no templating support for the control so the layout of the pager is relatively fixed. All elements however are stylable and there are options to control the text, and layout options such as whether to display first and last pages and the previous/next buttons and so on. To give you an idea what the pager looks like, here are two differently styled examples (all via CSS):   The markup for these two pagers looks like this: <ww:Pager runat="server" id="ItemPager" PageSize="5" PageLinkCssClass="gridpagerbutton" SelectedPageCssClass="gridpagerbutton-selected" PagesTextCssClass="gridpagertext" CssClass="gridpager" RenderContainerDiv="true" ContainerDivCssClass="gridpagercontainer" MaxPagesToDisplay="6" PagesText="Item Pages:" NextText="next" PreviousText="previous" /> <ww:Pager runat="server" id="ItemPager2" PageSize="5" RenderContainerDiv="true" MaxPagesToDisplay="6" /> The latter example uses default style settings so it there’s not much to set. The first example on the other hand explicitly assigns custom styles and overrides a few of the formatting options. Styling The styling is based on a number of CSS classes of which the the main pager, pagerbutton and pagerbutton-selected classes are the important ones. Other styles like pagerbutton-next/prev/first/last are based on the pagerbutton style. 
The default styling shown for the red outlined pager looks like this: .pagercontainer { margin: 20px 0; background: whitesmoke; padding: 5px; } .pager { float: right; font-size: 10pt; text-align: left; } .pagerbutton,.pagerbutton-selected,.pagertext { display: block; float: left; text-align: center; border: solid 2px maroon; min-width: 18px; margin-left: 3px; text-decoration: none; padding: 4px; } .pagerbutton-selected { font-size: 130%; font-weight: bold; color: maroon; border-width: 0px; background: khaki; } .pagerbutton-first { margin-right: 12px; } .pagerbutton-last,.pagerbutton-prev { margin-left: 12px; } .pagertext { border: none; margin-left: 30px; font-weight: bold; } .pagerbutton a { text-decoration: none; } .pagerbutton:hover { background-color: maroon; color: cornsilk; } .pagerbutton-prev { background-image: url(images/prev.png); background-position: 2px center; background-repeat: no-repeat; width: 35px; padding-left: 20px; } .pagerbutton-next { background-image: url(images/next.png); background-position: 40px center; background-repeat: no-repeat; width: 35px; padding-right: 20px; margin-right: 0px; } Yup that’s a lot of styling settings although not all of them are required. The key ones are pagerbutton, pager and pager selection. The others (which are implicitly created by the control based on the pagerbutton style) are for custom markup of the ‘special’ buttons. In my apps I tend to have two kinds of pages: Those that are associated with typical ‘grid’ displays that display purely tabular data and those that have a more looser list like layout. The two pagers shown above represent these two views and the pager and gridpager styles in my standard style sheet reflect these two styles. Configuring the Pager with Code Finally lets look at what it takes to hook up the pager. As mentioned in the highlights the Pager control is completely independent of other controls so if you just want to display a pager on its own it’s as simple as dropping the control and assigning the PageSize, ActivePage and either TotalPages or TotalItems. So for this markup: <ww:Pager runat="server" id="ItemPagerManual" PageSize="5" MaxPagesToDisplay="6" /> I can use code as simple as: ItemPagerManual.PageSize = 3; ItemPagerManual.ActivePage = 4;ItemPagerManual.TotalItems = 20; Note that ActivePage is not required - it will automatically use any Page=x query string value and assign it, although you can override it as I did above. TotalItems can be any value that you retrieve from a result set or manually assign as I did above. A more realistic scenario based on a LINQ to SQL IQueryable result is even easier. In this example, I have a UserControl that contains a ListView control that renders IQueryable data. I use a User Control here because there are different views the user can choose from with each view being a different user control. This incidentally also highlights one of the nice features of the pager: Because the pager is independent of the control I can put the pager on the host page instead of into each of the user controls. IOW, there’s only one Pager control, but there are potentially many user controls/listviews that hold the actual display data. The following code demonstrates how to use the Pager with an IQueryable that loads only the records it displays: protected voidPage_Load(objectsender, EventArgs e) {     Category = Request.Params["Category"] ?? 
string.Empty;     IQueryable<wws_Item> ItemList = ItemRepository.GetItemsByCategory(Category);     // Update the page and filter the list down     ItemList = ItemPager.FilterIQueryable<wws_Item>(ItemList); // Render user control with a list view Control ulItemList = LoadControl("~/usercontrols/" + App.Configuration.ItemListType + ".ascx"); ((IInventoryItemListControl)ulItemList).InventoryItemList = ItemList; phItemList.Controls.Add(ulItemList); // placeholder } The code uses a business object to retrieve Items by category as an IQueryable which means that the result is only an expression tree that hasn’t execute SQL yet and can be further filtered. I then pass this IQueryable to the FilterIQueryable() helper method of the control which does two main things: Filters the IQueryable to retrieve only the data displayed on the active page Sets the Totaltems property and calculates TotalPages on the Pager and that’s it! When the Pager renders it uses those values, plus the PageSize and ActivePage properties to render the Pager. In addition to IQueryable there are also filter methods for IEnumerable<T> and DataTable, but these versions just filter the data by removing rows/items from the entire already retrieved data. Output Generated and Paging Links The output generated creates pager links as plain href links. Here’s what the output looks like: <div id="ItemPager" class="pagercontainer"> <div class="pager"> <span class="pagertext">Pages: </span><a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=1" class="pagerbutton" />1</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=2" class="pagerbutton" />2</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton" />3</a> <span class="pagerbutton-selected">4</span> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton" />5</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=6" class="pagerbutton" />6</a> <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=20" class="pagerbutton pagerbutton-last" />20</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton pagerbutton-prev" />Prev</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton pagerbutton-next" />Next</a></div> <br clear="all" /> </div> </div> The links point back to the current page and simply append a Page= page link into the page. When the page gets reloaded with the new page number the pager automatically detects the page number and automatically assigns the ActivePage property which results in the appropriate page to be displayed. The code shown in the previous section is all that’s needed to handle paging. Note that HTTP GET based paging is different than the Postback paging ASP.NET uses by default. Postback paging preserves modified page content when clicking on pager buttons, but this control will simply load a new page – no page preservation at this time. The advantage of not using Postback paging is that the URLs generated are plain HTML links that a search engine can follow where __doPostback() links are not. Pager with a Grid The pager also works in combination with grid controls so it’s easy to bypass the grid control’s paging features if desired. In the following example I use a gridView control and binds it to a DataTable result which is also filterable by the Pager control. 
The very basic plain vanilla ASP.NET grid markup looks like this: <div style="width: 600px; margin: 0 auto; padding: 20px;"> <asp:DataGrid runat="server" AutoGenerateColumns="True" ID="gdItems" CssClass="blackborder" style="width: 600px;"> <AlternatingItemStyle CssClass="gridalternate" /> <HeaderStyle CssClass="gridheader" /> </asp:DataGrid> <ww:Pager runat="server" ID="Pager" CssClass="gridpager" ContainerDivCssClass="gridpagercontainer" PageLinkCssClass="gridpagerbutton" SelectedPageCssClass="gridpagerbutton-selected" PageSize="8" RenderContainerDiv="true" MaxPagesToDisplay="6" /> </div> and looks like this when rendered (screenshot omitted here), using a custom set of CSS styles. The code behind for this code is also very simple: protected void Page_Load(object sender, EventArgs e) { string category = Request.Params["category"] ?? ""; busItem itemRep = WebStoreFactory.GetItem(); var items = itemRep.GetItemsByCategory(category) .Select(itm => new {Sku = itm.Sku, Description = itm.Description}); // run query into a DataTable for demonstration DataTable dt = itemRep.Converter.ToDataTable(items,"TItems"); // Remove all items not on the current page dt = Pager.FilterDataTable(dt,0); // bind and display gdItems.DataSource = dt; gdItems.DataBind(); } A little contrived I suppose, since the list could already be bound from the list of elements, but this is to demonstrate that you can also bind against a DataTable if your business layer returns those. Unfortunately there’s no way to filter a DataReader as it’s a one-way, forward-only reader and the reader is required by the DataSource to perform the bindings. However, you can still use a DataReader as long as your business logic filters the data prior to rendering and provides a total item count (most likely as a second query). Control Creation The control itself is a pretty brute force ASP.NET control. Nothing clever about this other than some basic rendering logic and some simple calculations and update routines to determine which buttons need to be shown. You can take a look at the full code from the West Wind Web Toolkit’s Repository (note there are a few dependencies).
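Before looking at the Render() method below, it helps to picture the one bit of page math a control like this has to get right: choosing which run of page numbers to display. The following is a rough, hypothetical C# sketch of that calculation – the class and method names are made up for illustration and are not the toolkit's (the real control does this inside ConfigurePagesToRender()):

using System;

// Hypothetical sketch only - not the actual toolkit implementation.
// Centers a window of at most maxPagesToDisplay page numbers around
// the active page and clamps it to the valid 1..totalPages range.
public static class PagerMath
{
    public static (int Start, int End) GetPageWindow(int activePage, int totalPages, int maxPagesToDisplay)
    {
        if (totalPages < 1) totalPages = 1;
        activePage = Math.Min(Math.Max(activePage, 1), totalPages);

        int start = activePage - maxPagesToDisplay / 2;   // try to center the active page
        if (start < 1) start = 1;

        int end = start + maxPagesToDisplay - 1;
        if (end > totalPages)
        {
            end = totalPages;
            start = Math.Max(1, end - maxPagesToDisplay + 1);
        }
        return (start, end);
    }
}

With ActivePage = 4, TotalPages = 20 and MaxPagesToDisplay = 6 this yields the window 1–6, which matches the sample pager output shown earlier (pages 1 through 6 plus a separate last-page link for 20).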
To give you an idea how the control works here is the Render() method: /// <summary> /// overridden to handle custom pager rendering for runtime and design time /// </summary> /// <param name="writer"></param> protected override void Render(HtmlTextWriter writer) { base.Render(writer); if (TotalPages == 0 && TotalItems > 0) TotalPages = CalculateTotalPagesFromTotalItems(); if (DesignMode) TotalPages = 10; // don't render pager if there's only one page if (TotalPages < 2) return; if (RenderContainerDiv) { if (!string.IsNullOrEmpty(ContainerDivCssClass)) writer.AddAttribute("class", ContainerDivCssClass); writer.RenderBeginTag("div"); } // main pager wrapper writer.WriteBeginTag("div"); writer.AddAttribute("id", this.ClientID); if (!string.IsNullOrEmpty(CssClass)) writer.WriteAttribute("class", this.CssClass); writer.Write(HtmlTextWriter.TagRightChar + "\r\n"); // Pages Text writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(PagesTextCssClass)) writer.WriteAttribute("class", PagesTextCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(this.PagesText); writer.WriteEndTag("span"); // if the base url is empty use the current URL FixupBaseUrl(); // set _startPage and _endPage ConfigurePagesToRender(); // write out first page link if (ShowFirstAndLastPageLinks && _startPage != 1) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-first"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write("1"); writer.WriteEndTag("a"); writer.Write("&nbsp;"); } // write out all the page links for (int i = _startPage; i < _endPage + 1; i++) { if (i == ActivePage) { writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(SelectedPageCssClass)) writer.WriteAttribute("class", SelectedPageCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(i.ToString()); writer.WriteEndTag("span"); } else { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, i.ToString()).TrimEnd('&'); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(i.ToString()); writer.WriteEndTag("a"); } writer.Write("\r\n"); } // write out last page link if (ShowFirstAndLastPageLinks && _endPage < TotalPages) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, TotalPages.ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-last"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(TotalPages.ToString()); writer.WriteEndTag("a"); } // Previous link if (ShowPreviousNextLinks && !string.IsNullOrEmpty(PreviousText) && ActivePage > 1) { writer.Write("&nbsp;"); writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage - 1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-prev"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(PreviousText); writer.WriteEndTag("a"); } // Next link if (ShowPreviousNextLinks && 
!string.IsNullOrEmpty(NextText) && ActivePage < TotalPages) { writer.Write("&nbsp;"); writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage + 1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-next"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(NextText); writer.WriteEndTag("a"); } writer.WriteEndTag("div"); if (RenderContainerDiv) { if (RenderContainerDivBreak) writer.Write("<br clear=\"all\" />\r\n"); writer.WriteEndTag("div"); } } As I said, pretty much brute force rendering based on the control’s property settings, of which there are quite a few (screenshot omitted here). You can also see the pager in the designer above. Unfortunately the VS designer (both 2010 and 2008) fails to render the float: left CSS styles properly and starts wrapping after margins are applied in the special buttons. Not a big deal since VS does at least respect the spacing (the floated elements overlay). Then again I’m not using the designer anyway :-}. Filtering Data What makes the Pager easy to use is the filter methods built into the control. While this functionality is clearly not the most politically correct design choice as it violates separation of concerns, it’s very useful for typical pager operation. While I actually have filter methods that do something similar in my business layer, having it exposed on the control makes the control a lot more useful for typical databinding scenarios. Of course these methods are optional – if you have a business layer that can provide filtered page queries for you, you can use that instead and assign the TotalItems property manually. There are three filter method types available, for IQueryable, IEnumerable and DataTable, which tend to be the most common use cases in my apps old and new. The IQueryable version is pretty simple as it can simply rely on .Skip() and .Take() with LINQ: /// <summary> /// Queries the database for the ActivePage applied manually /// or from the Request["page"] variable. This routine /// figures out and sets TotalPages, ActivePage and /// returns a filtered subset IQueryable that contains /// only the items from the ActivePage. /// </summary> /// <param name="query"></param> /// <param name="activePage"> /// The page you want to display. Sets the ActivePage property when passed. /// Pass 0 or smaller to use ActivePage setting. /// </param> /// <returns></returns> public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query, int activePage) where T : class, new() { ActivePage = activePage < 1 ? ActivePage : activePage; if (ActivePage < 1) ActivePage = 1; TotalItems = query.Count(); if (TotalItems <= PageSize) { ActivePage = 1; TotalPages = 1; return query; } int skip = ActivePage - 1; if (skip > 0) query = query.Skip(skip * PageSize); _TotalPages = CalculateTotalPagesFromTotalItems(); return query.Take(PageSize); } The IEnumerable<T> version simply converts the IEnumerable to an IQueryable and calls back into this method for filtering. The DataTable version requires a little more work to manually parse and filter records (I didn’t want to add the Linq DataSetExtensions assembly just for this): /// <summary> /// Filters a data table for an ActivePage. /// /// Note: Modifies the data set permanently by removing DataRows /// </summary> /// <param name="dt">Full result DataTable</param> /// <param name="activePage">Page to display.
0 to use ActivePage property </param> /// <returns></returns> public DataTable FilterDataTable(DataTable dt, int activePage) { ActivePage = activePage < 1 ? ActivePage : activePage; if (ActivePage < 1) ActivePage = 1; TotalItems = dt.Rows.Count; if (TotalItems <= PageSize) { ActivePage = 1; TotalPages = 1; return dt; } int skip = ActivePage - 1; if (skip > 0) { for (int i = 0; i < skip * PageSize; i++ ) dt.Rows.RemoveAt(0); } while(dt.Rows.Count > PageSize) dt.Rows.RemoveAt(PageSize); return dt; } Using the Pager Control The pager as it stands is a first cut I built a couple of weeks ago, and I’ve since been tweaking it a little as part of an internal project I’m working on. I’ve replaced a bunch of pagers on various older pages with this pager without any issues and have what now feels like a more consistent user interface where paging looks and feels the same across different controls. As a bonus I’m only loading the data from the database that I need to display a single page. With the preset CSS classes applied, adding a pager is now as easy as dropping the control and including the style sheet so styling stays consistent – no fuss, no muss. Schweet. Hopefully some of you may find this as useful as I have, or at least as a baseline to build on top of… Resources The Pager is part of the West Wind Web & Ajax Toolkit Pager.cs Source Code (some toolkit dependencies) Westwind.css base stylesheet with .pager and .gridpager styles Pager Example Page © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • A Taxonomy of Numerical Methods v1

    - by JoshReuben
Numerical Analysis – When, What, (but not how) Once you understand the Math & know C++, Numerical Methods are basically blocks of iterative & conditional math code. I found the real trick was seeing the forest for the trees – knowing which method to use for which situation. It’s pretty easy to get lost in the details – so I’ve tried to organize these methods in a way that I can quickly look this up. I’ve included links to detailed explanations and to C++ code examples. I’ve tried to classify Numerical methods in the following broad categories: Solving Systems of Linear Equations Solving Non-Linear Equations Iteratively Interpolation Curve Fitting Optimization Numerical Differentiation & Integration Solving ODEs Boundary Problems Solving EigenValue problems Enjoy – I did! Solving Systems of Linear Equations Overview Solve sets of algebraic equations with x unknowns The set is commonly in matrix form Gauss-Jordan Elimination http://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination C++: http://www.codekeep.net/snippets/623f1923-e03c-4636-8c92-c9dc7aa0d3c0.aspx Produces solution of the equations & the coefficient matrix Efficient, stable 2 steps: · Forward Elimination – matrix decomposition: reduce set to triangular form (0s below the diagonal) or row echelon form. If degenerate, then there is no solution · Backward Elimination – write the original matrix as the product of its inverse matrix & its reduced row-echelon matrix → reduce set to row canonical form & use back-substitution to find the solution to the set Elementary ops for matrix decomposition: · Row multiplication · Row switching · Add multiples of rows to other rows Use pivoting to ensure rows are ordered for achieving triangular form LU Decomposition http://en.wikipedia.org/wiki/LU_decomposition C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-lu-decomposition-for-solving.html Represent the matrix as a product of lower & upper triangular matrices A modified version of GJ Elimination Advantage – can easily apply forward & backward elimination to solve triangular matrices Techniques: · Doolittle Method – sets the L matrix diagonal to unity · Crout Method - sets the U matrix diagonal to unity Note: both the L & U matrices share the same unity diagonal & can be stored compactly in the same matrix Gauss-Seidel Iteration http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method C++: http://www.nr.com/forum/showthread.php?t=722 Transform the linear set of equations into a single equation & then use numerical integration (as integration formulas have Sums, it is implemented iteratively). An optimization of Gauss-Jacobi: 1.5 times faster, and needs roughly a quarter as many iterations to achieve the same tolerance.
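The links above point to C++ samples; purely as an illustration (and not taken from any of the linked sources), here is a small C# sketch of the Gauss-Seidel idea: re-solve each equation for its own unknown using the most recently updated values and keep sweeping until the changes become small. It assumes the matrix is diagonally dominant so the iteration converges.

using System;

class GaussSeidelDemo
{
    // Iteratively solve A*x = b; A is assumed diagonally dominant.
    static double[] Solve(double[,] A, double[] b, int maxSweeps = 100, double tol = 1e-10)
    {
        int n = b.Length;
        var x = new double[n];
        for (int sweep = 0; sweep < maxSweeps; sweep++)
        {
            double maxChange = 0;
            for (int i = 0; i < n; i++)
            {
                double sum = b[i];
                for (int j = 0; j < n; j++)
                    if (j != i) sum -= A[i, j] * x[j];   // x already holds the newest values
                double xi = sum / A[i, i];
                maxChange = Math.Max(maxChange, Math.Abs(xi - x[i]));
                x[i] = xi;
            }
            if (maxChange < tol) break;                  // converged
        }
        return x;
    }

    static void Main()
    {
        var A = new double[,] { { 4, 1 }, { 2, 5 } };
        var b = new double[] { 9, 12 };
        var x = Solve(A, b);
        Console.WriteLine($"x = [{x[0]:F4}, {x[1]:F4}]");  // roughly [1.8333, 1.6667]
    }
}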
Solving Non-Linear Equations Iteratively find roots of polynomials – there may be 0, 1 or n solutions for an n order polynomial use iterative techniques Iterative methods · used when there are no known analytical techniques · Requires set functions to be continuous & differentiable · Requires an initial seed value – choice is critical to convergence → conduct multiple runs with different starting points & then select best result · Systematic - iterate until diminishing returns, tolerance or max iteration conditions are met · bracketing techniques will always yield convergent solutions, non-bracketing methods may fail to converge Incremental method if a nonlinear function has opposite signs at 2 ends of a small interval x1 & x2, then there is likely to be a solution in that interval – solutions are detected by evaluating a function over interval steps, for a change in sign, adjusting the step size dynamically. Limitations – can miss closely spaced solutions in large intervals, cannot detect degenerate (coinciding) solutions, limited to functions that cross the x-axis, gives false positives for singularities Fixed point method http://en.wikipedia.org/wiki/Fixed-point_iteration C++: http://books.google.co.il/books?id=weYj75E_t6MC&pg=PA79&lpg=PA79&dq=fixed+point+method++c%2B%2B&source=bl&ots=LQ-5P_taoC&sig=lENUUIYBK53tZtTwNfHLy5PEWDk&hl=en&sa=X&ei=wezDUPW1J5DptQaMsIHQCw&redir_esc=y#v=onepage&q=fixed%20point%20method%20%20c%2B%2B&f=false Algebraically rearrange a solution to isolate a variable then apply incremental method Bisection method http://en.wikipedia.org/wiki/Bisection_method C++: http://numericalcomputing.wordpress.com/category/algorithms/ Bracketed - Select an initial interval, keep bisecting it at its midpoint into sub-intervals and then apply incremental method on smaller & smaller intervals – zoom in Adv: unaffected by function gradient → reliable Disadv: slow convergence False Position Method http://en.wikipedia.org/wiki/False_position_method C++: http://www.dreamincode.net/forums/topic/126100-bisection-and-false-position-methods/ Bracketed - Select an initial interval, & use the relative value of function at interval end points to select next sub-intervals (estimate how far between the end points the solution might be & subdivide based on this) Newton-Raphson method http://en.wikipedia.org/wiki/Newton's_method C++: http://www-users.cselabs.umn.edu/classes/Summer-2012/csci1113/index.php?page=./newt3 Also known as Newton's method Convenient, efficient Not bracketed – only a single initial guess is required to start iteration – requires an analytical expression for the first derivative of the function as input. Evaluates the function & its derivative at each step. Can be extended to the Newton MultiRoot method for solving multiple roots Can be easily applied to an n-coupled set of non-linear equations – conduct a Taylor Series expansion of a function, dropping terms of order n, rewrite as a Jacobian matrix of PDs & convert to simultaneous linear equations !!! Secant Method http://en.wikipedia.org/wiki/Secant_method C++: http://forum.vcoderz.com/showthread.php?p=205230 Unlike N-R, can estimate first derivative from an initial interval (does not require root to be bracketed) instead of inputting it Since derivative is approximated, may converge slower. Is fast in practice as it does not have to evaluate the derivative at each step. Similar implementation to the False Position method.
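As a hedged illustration of the bracketed root-finding recipes above (a C# sketch written for this overview rather than taken from the linked C++), here is a minimal bisection implementation – the starting interval must bracket a sign change:

using System;

class BisectionDemo
{
    // f(a) and f(b) must have opposite signs; halve the bracket until it is tiny.
    static double Bisect(Func<double, double> f, double a, double b, double tol = 1e-12)
    {
        double fa = f(a);
        if (fa * f(b) > 0) throw new ArgumentException("Interval does not bracket a root.");
        while (b - a > tol)
        {
            double mid = 0.5 * (a + b);
            double fm = f(mid);
            if (fa * fm <= 0) b = mid;          // root lies in the left half
            else { a = mid; fa = fm; }          // root lies in the right half
        }
        return 0.5 * (a + b);
    }

    static void Main()
    {
        // root of x^3 - x - 2, near x = 1.5214
        Console.WriteLine(Bisect(x => x * x * x - x - 2, 1, 2));
    }
}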
Birge-Vieta Method http://mat.iitm.ac.in/home/sryedida/public_html/caimna/transcendental/polynomial%20methods/bv%20method.html C++: http://books.google.co.il/books?id=cL1boM2uyQwC&pg=SA3-PA51&lpg=SA3-PA51&dq=Birge-Vieta+Method+c%2B%2B&source=bl&ots=QZmnDTK3rC&sig=BPNcHHbpR_DKVoZXrLi4nVXD-gg&hl=en&sa=X&ei=R-_DUK2iNIjzsgbE5ID4Dg&redir_esc=y#v=onepage&q=Birge-Vieta%20Method%20c%2B%2B&f=false combines Horner's method of polynomial evaluation (transforming into lesser degree polynomials that are more computationally efficient to process) with Newton-Raphson to provide a computational speed-up Interpolation Overview Construct new data points for as close as possible a fit within the range of a discrete set of known points (that were obtained via sampling, experimentation) Use Taylor Series Expansion of a function f(x) around a specific value for x Linear Interpolation http://en.wikipedia.org/wiki/Linear_interpolation C++: http://www.hamaluik.com/?p=289 Straight line between 2 points → concatenate interpolants between each pair of data points Bilinear Interpolation http://en.wikipedia.org/wiki/Bilinear_interpolation C++: http://supercomputingblog.com/graphics/coding-bilinear-interpolation/2/ Extension of the linear function for interpolating functions of 2 variables – perform linear interpolation first in 1 direction, then in another. Used in image processing – e.g. texture mapping filter. Uses 4 vertices to interpolate a value within a unit cell. Lagrange Interpolation http://en.wikipedia.org/wiki/Lagrange_polynomial C++: http://www.codecogs.com/code/maths/approximation/interpolation/lagrange.php For polynomials Requires recomputation for all terms for each distinct x value – can only be applied for small number of nodes Numerically unstable Barycentric Interpolation http://epubs.siam.org/doi/pdf/10.1137/S0036144502417715 C++: http://www.gamedev.net/topic/621445-barycentric-coordinates-c-code-check/ Rearrange the terms in the equation of the Lagrange interpolation by defining weight functions that are independent of the interpolated value of x Newton Divided Difference Interpolation http://en.wikipedia.org/wiki/Newton_polynomial C++: http://jee-appy.blogspot.co.il/2011/12/newton-divided-difference-interpolation.html Hermite Divided Differences: Interpolation polynomial approximation for a given set of data points in the NR form - divided differences are used to approximately calculate the various differences. For a given set of 3 data points, fit a quadratic interpolant through the data Bracketed functions allow Newton divided differences to be calculated recursively Difference table
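To make the interpolation entries concrete, here is a small illustrative C# sketch (not from the linked pages) of plain Lagrange interpolation over a handful of nodes – fine for a few points, but as noted above it becomes expensive and numerically unstable as the node count grows:

using System;

class LagrangeDemo
{
    // Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x.
    static double Interpolate(double[] xs, double[] ys, double x)
    {
        double result = 0;
        for (int i = 0; i < xs.Length; i++)
        {
            double basis = 1;
            for (int j = 0; j < xs.Length; j++)
                if (j != i) basis *= (x - xs[j]) / (xs[i] - xs[j]);
            result += ys[i] * basis;
        }
        return result;
    }

    static void Main()
    {
        // three samples of y = x^2; the quadratic interpolant reproduces it exactly
        double[] xs = { 0, 1, 2 };
        double[] ys = { 0, 1, 4 };
        Console.WriteLine(Interpolate(xs, ys, 1.5));   // 2.25
    }
}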
Cubic Spline Interpolation http://en.wikipedia.org/wiki/Spline_interpolation C++: https://www.marcusbannerman.co.uk/index.php/home/latestarticles/42-articles/96-cubic-spline-class.html Spline is a piecewise polynomial Provides smoothness – for interpolations with significantly varying data Use weighted coefficients to bend the function to be smooth & its 1st & 2nd derivatives are continuous through the edge points in the interval Curve Fitting A generalization of interpolating whereby given data points may contain noise → the curve does not necessarily pass through all the points Least Squares Fit http://en.wikipedia.org/wiki/Least_squares C++: http://www.ccas.ru/mmes/educat/lab04k/02/least-squares.c Residual – difference between observed value & expected value Model function is often chosen as a linear combination of the specified functions Determines: A) The model instance in which the sum of squared residuals has the least value B) param values for which model best fits data Straight Line Fit Linear correlation between independent variable and dependent variable Linear Regression http://en.wikipedia.org/wiki/Linear_regression C++: http://www.oocities.org/david_swaim/cpp/linregc.htm Special case of statistically exact extrapolation Leverage least squares Given a basis function, the sum of the residuals is determined and the corresponding gradient equation is expressed as a set of normal linear equations in matrix form that can be solved (e.g. using LU Decomposition) Can be weighted - Drop the assumption that all errors have the same significance → confidence of accuracy is different for each data point. Fit the function closer to points with higher weights Polynomial Fit - use a polynomial basis function Moving Average http://en.wikipedia.org/wiki/Moving_average C++: http://www.codeproject.com/Articles/17860/A-Simple-Moving-Average-Algorithm Used for smoothing (cancel fluctuations to highlight longer-term trends & cycles), time series data analysis, signal processing filters Replace each data point with average of neighbors. Can be simple (SMA), weighted (WMA), exponential (EMA). Lags behind latest data points – extra weight can be given to more recent data points. Weights can decrease arithmetically or exponentially according to distance from point. Parameters: smoothing factor, period, weight basis Optimization Overview Given function with multiple variables, find Min (or max by minimizing –f(x)) Iterative approach Efficient, but not necessarily reliable Conditions: noisy data, constraints, non-linear models Detection via sign of first derivative - Derivative of saddle points will be 0 Local minima Bisection method Similar method for finding a root for a non-linear equation Start with an interval that contains a minimum Golden Search method http://en.wikipedia.org/wiki/Golden_section_search C++: http://www.codecogs.com/code/maths/optimization/golden.php Bisect intervals according to the golden ratio 0.618… Achieves reduction by evaluating a single new function value per iteration instead of 2
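A toy C# sketch of the golden-section idea just described (illustrative only, not the linked implementation): keep a bracket around the minimum and shrink it by the golden ratio each step, so only one new function value is needed per iteration:

using System;

class GoldenSectionDemo
{
    // Minimize a unimodal f on [a, b] by golden-section search.
    static double Minimize(Func<double, double> f, double a, double b, double tol = 1e-8)
    {
        double invPhi = (Math.Sqrt(5) - 1) / 2;   // ~0.618
        double c = b - invPhi * (b - a);
        double d = a + invPhi * (b - a);
        double fc = f(c), fd = f(d);

        while (b - a > tol)
        {
            if (fc < fd) { b = d; d = c; fd = fc; c = b - invPhi * (b - a); fc = f(c); }
            else         { a = c; c = d; fc = fd; d = a + invPhi * (b - a); fd = f(d); }
        }
        return 0.5 * (a + b);
    }

    static void Main()
    {
        // minimum of (x - 2)^2 + 1 is at x = 2
        Console.WriteLine(Minimize(x => (x - 2) * (x - 2) + 1, 0, 5));
    }
}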
Newton-Raphson Method Brent method http://en.wikipedia.org/wiki/Brent's_method C++: http://people.sc.fsu.edu/~jburkardt/cpp_src/brent/brent.cpp Based on quadratic or parabolic interpolation – if the function is smooth & parabolic near to the minimum, then a parabola fitted through any 3 points should approximate the minima – fails when the 3 points are collinear, in which case the denominator is 0 Simplex Method http://en.wikipedia.org/wiki/Simplex_algorithm C++: http://www.codeguru.com/cpp/article.php/c17505/Simplex-Optimization-Algorithm-and-Implemetation-in-C-Programming.htm Find the global minima of any multi-variable function Direct search – no derivatives required At each step it maintains a non-degenerate simplex – a convex hull of n+1 vertices. Obtains the minimum for a function with n variables by evaluating the function at n-1 points, iteratively replacing the point of worst result with the point of best result, shrinking the multidimensional simplex around the best point. Point replacement involves expanding & contracting the simplex near the worst value point to determine a better replacement point Oscillation can be avoided by choosing the 2nd worst result Restart if it gets stuck Parameters: contraction & expansion factors Simulated Annealing http://en.wikipedia.org/wiki/Simulated_annealing C++: http://code.google.com/p/cppsimulatedannealing/ Analogy to heating & cooling metal to strengthen its structure Stochastic method – apply random permutation search for global minima - Avoid entrapment in local minima via hill climbing Heating schedule - Annealing schedule params: temperature, iterations at each temp, temperature delta Cooling schedule – can be linear, step-wise or exponential Differential Evolution http://en.wikipedia.org/wiki/Differential_evolution C++: http://www.amichel.com/de/doc/html/ More advanced stochastic methods analogous to biological processes: Genetic algorithms, evolution strategies Parallel direct search method against multiple discrete or continuous variables Initial population of variable vectors chosen randomly – if weighted difference vector of 2 vectors yields a lower objective function value then it replaces the comparison vector Many params: #parents, #variables, step size, crossover constant etc Numerical Differentiation Overview 2 approaches to finite difference methods: · A) approximate function via polynomial interpolation then differentiate · B) Taylor series approximation – additionally provides error estimate Finite Difference methods http://en.wikipedia.org/wiki/Finite_difference_method C++: http://www.wpi.edu/Pubs/ETD/Available/etd-051807-164436/unrestricted/EAMPADU.pdf Find differences between high order derivative values - Approximate differential equations by finite differences at evenly spaced data points Based on forward & backward Taylor series expansion of f(x) about x plus or minus multiples of delta h. Forward / backward difference - the sum of the series contains even derivatives and the difference of the series contains odd derivatives – coupled equations that can be solved. Provide an approximation of the derivative within O(h^2) accuracy. There are also central difference & extended central difference, which have O(h^4) accuracy.
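As a quick illustration of the central difference formula just mentioned (a sketch written for this overview, not from the linked material): approximate f'(x) from two symmetric samples, with the error shrinking as h^2 until floating point round-off takes over:

using System;

class CentralDifferenceDemo
{
    // f'(x) ~ (f(x+h) - f(x-h)) / (2h), accurate to O(h^2)
    static double Derivative(Func<double, double> f, double x, double h = 1e-5)
        => (f(x + h) - f(x - h)) / (2 * h);

    static void Main()
    {
        Console.WriteLine(Derivative(Math.Sin, 0.0));         // ~1.0  (cos 0)
        Console.WriteLine(Derivative(x => x * x * x, 2.0));   // ~12.0 (3x^2 at x=2)
    }
}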
Richardson Extrapolation http://en.wikipedia.org/wiki/Richardson_extrapolation C++: http://mathscoding.blogspot.co.il/2012/02/introduction-richardson-extrapolation.html A sequence acceleration method applied to finite differences Fast convergence, high accuracy O(h^4) Derivatives via Interpolation Cannot apply Finite Difference method to discrete data points at uneven intervals – so need to approximate the derivative of f(x) using the derivative of the interpolant via 3 point Lagrange Interpolation Note: the higher the order of the derivative, the lower the approximation precision Numerical Integration Estimate finite & infinite integrals of functions More accurate procedure than numerical differentiation Use when it is not possible to obtain an integral of a function analytically or when the function is not given, only the data points are Newton Cotes Methods http://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas C++: http://www.siafoo.net/snippet/324 For equally spaced data points Computationally easy – based on local interpolation of n rectangular strip areas that is piecewise fitted to a polynomial to get the sum total area Evaluate the integrand at n+1 evenly spaced points – approximate definite integral by Sum Weights are derived from Lagrange Basis polynomials Leverage the Trapezoidal Rule for the default 2 point formulas, Simpson's 1/3 Rule for 3 point formulas, Simpson's 3/8 Rule for 4 point formulas, and Bode's Rule for 5 point formulas. Higher orders obtain more accurate results Trapezoidal Rule uses simple area, Simpson's Rule replaces the integrand f(x) with a quadratic polynomial p(x) that uses the same values as f(x) for its end points, but adds a midpoint Romberg Integration http://en.wikipedia.org/wiki/Romberg's_method C++: http://code.google.com/p/romberg-integration/downloads/detail?name=romberg.cpp&can=2&q= Combines trapezoidal rule with Richardson Extrapolation Evaluates the integrand at equally spaced points The integrand must have continuous derivatives Each R(n,m) extrapolation uses a higher order integrand polynomial replacement rule (zeroth starts with trapezoidal) → a lower triangular matrix set of equation coefficients where the bottom right term has the most accurate approximation. The process continues until the difference between 2 successive diagonal terms becomes sufficiently small. Gaussian Quadrature http://en.wikipedia.org/wiki/Gaussian_quadrature C++: http://www.alglib.net/integration/gaussianquadratures.php Data points are chosen to yield best possible accuracy – requires fewer evaluations Ability to handle singularities, functions that are difficult to evaluate The integrand can include a weighting function determined by a set of orthogonal polynomials. Points & weights are selected so that the integrand yields the exact integral if f(x) is a polynomial of degree <= 2n+1 Techniques (basically different weighting functions): · Gauss-Legendre Integration w(x)=1 · Gauss-Laguerre Integration w(x)=e^-x · Gauss-Hermite Integration w(x)=e^-x^2 · Gauss-Chebyshev Integration w(x)= 1 / Sqrt(1-x^2)
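For a concrete feel for the Newton-Cotes family described above, here is a minimal C# sketch (again illustrative only) of its simplest member, the composite trapezoidal rule, summing n equally spaced strips:

using System;

class TrapezoidDemo
{
    // Composite trapezoidal rule over n equal strips on [a, b].
    static double Integrate(Func<double, double> f, double a, double b, int n = 1000)
    {
        double h = (b - a) / n;
        double sum = 0.5 * (f(a) + f(b));
        for (int i = 1; i < n; i++)
            sum += f(a + i * h);
        return sum * h;
    }

    static void Main()
    {
        // integral of sin(x) from 0 to pi is exactly 2
        Console.WriteLine(Integrate(Math.Sin, 0, Math.PI));   // ~2.0
    }
}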
Solving ODEs Use when high order differential equations cannot be solved analytically Evaluated under boundary conditions RK for systems – a high order differential equation can always be transformed into a coupled first order system of equations Euler method http://en.wikipedia.org/wiki/Euler_method C++: http://rosettacode.org/wiki/Euler_method First order Runge–Kutta method. Simple recursive method – given an initial value, calculate derivative deltas. Unstable & not very accurate (O(h) error) – not used in practice A first-order method - the local error (truncation error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size In evolving the solution between data points xn & xn+1, only evaluates derivatives at the beginning of the interval xn → asymmetric at boundaries Higher order Runge Kutta http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods C++: http://www.dreamincode.net/code/snippet1441.htm 2nd & 4th order RK - Introduces parameterized midpoints for more symmetric solutions → accuracy at higher computational cost Adaptive RK – RK-Fehlberg – estimate the truncation at each integration step & automatically adjust the step size to keep error within prescribed limits. At each step 2 approximations are compared – if in disagreement to a specific accuracy, the step size is reduced Boundary Value Problems Where solutions of differential equations are located at 2 different values of the independent variable x → more difficult, because cannot just start at point of initial value – there may not be enough starting conditions available at the end points to produce a unique solution An n-order equation will require n boundary conditions – need to determine the missing n-1 conditions which cause the given conditions at the other boundary to be satisfied Shooting Method http://en.wikipedia.org/wiki/Shooting_method C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-shooting-method-for-solving.html Iteratively guess the missing values for one end & integrate, then inspect the discrepancy with the boundary values of the other end to adjust the estimate Given the starting boundary values u1 & u2 which contain the root u, solve u given the false position method (solving the differential equation as an initial value problem via 4th order RK), then use u to solve the differential equations. Finite Difference Method For linear & non-linear systems Higher order derivatives require more computational steps – some combinations for boundary conditions may not work though Improve the accuracy by increasing the number of mesh points Solving EigenValue Problems An eigenvalue can substitute a matrix when doing matrix multiplication → convert matrix multiplication into a polynomial EigenValue For a given set of equations in matrix form, determine the solution eigenvalues & eigenvectors Similar Matrices - have same eigenvalues. Use orthogonal similarity transforms to reduce a matrix to diagonal form, from which eigenvalue(s) & eigenvectors can be computed iteratively
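Before the individual eigen-solvers listed below, here is a tiny illustrative C# sketch of the simplest of them – power iteration, covered just below – which repeatedly multiplies and normalizes a vector until it lines up with the dominant eigenvector:

using System;

class PowerIterationDemo
{
    // Estimate the dominant eigenvalue/eigenvector of a square matrix A.
    static (double Value, double[] Vector) Dominant(double[,] A, int iterations = 1000)
    {
        int n = A.GetLength(0);
        var v = new double[n];
        for (int i = 0; i < n; i++) v[i] = 1.0;   // start vector with components along all eigenvectors

        double lambda = 0;
        for (int k = 0; k < iterations; k++)
        {
            var w = new double[n];                // w = A * v
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    w[i] += A[i, j] * v[j];

            double norm = 0;
            foreach (var wi in w) norm += wi * wi;
            norm = Math.Sqrt(norm);

            for (int i = 0; i < n; i++) v[i] = w[i] / norm;
            lambda = norm;                        // converges to |dominant eigenvalue| for symmetric A
        }
        return (lambda, v);
    }

    static void Main()
    {
        var A = new double[,] { { 2, 0 }, { 0, 5 } };
        Console.WriteLine(Dominant(A).Value);     // ~5
    }
}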
Jacobi method http://en.wikipedia.org/wiki/Jacobi_method C++: http://people.sc.fsu.edu/~jburkardt/classes/acs2_2008/openmp/jacobi/jacobi.html Robust but computationally intense – use for small matrices < 10x10 Power Iteration http://en.wikipedia.org/wiki/Power_iteration For any given real symmetric matrix, generate the largest single eigenvalue & its eigenvectors Simplest method – does not compute matrix decomposition → suitable for large, sparse matrices Inverse Iteration Variation of power iteration method – generates the smallest eigenvalue from the inverse matrix Rayleigh Method http://en.wikipedia.org/wiki/Rayleigh's_method_of_dimensional_analysis Variation of power iteration method Rayleigh Quotient Method Variation of inverse iteration method Matrix Tri-diagonalization Method Use the Householder algorithm to reduce an NxN symmetric matrix to a tridiagonal real symmetric matrix via N-2 orthogonal transforms What's Next Outside of Numerical Methods there are lots of different types of algorithms that I’ve learned over the decades: Data Mining – (I covered this briefly in a previous post: http://geekswithblogs.net/JoshReuben/archive/2007/12/31/ssas-dm-algorithms.aspx ) Search & Sort Routing Problem Solving Logical Theorem Proving Planning Probabilistic Reasoning Machine Learning Solvers (e.g. MIP) Bioinformatics (Sequence Alignment, Protein Folding) Quant Finance (I read Wilmott’s books – interesting) Sooner or later, I’ll cover the above topics as well.

    Read the article

  • How can I make a WPF TreeView data binding lazy and asynchronous?

    - by pauldoo
I am learning how to use data binding in WPF for a TreeView. I am procedurally creating the Binding object, setting Source, Path, and Converter properties to point to my own classes. I can even go as far as setting IsAsync and I can see the GUI update asynchronously when I explore the tree. So far so good! My problem is that WPF eagerly evaluates parts of the tree prior to them being expanded in the GUI. If left long enough this would result in the entire tree being evaluated (well actually in this example my tree is infinite, but you get the idea). I would like the tree to be evaluated only on demand as the user expands the nodes. Is this possible using the existing asynchronous data binding stuff in WPF? As an aside I have not figured out how ObjectDataProvider relates to this task. My XAML code contains only a single TreeView object, and my C# code is: public partial class Window1 : Window { public Window1() { InitializeComponent(); treeView.Items.Add( CreateItem(2) ); } static TreeViewItem CreateItem(int number) { TreeViewItem item = new TreeViewItem(); item.Header = number; Binding b = new Binding(); b.Converter = new MyConverter(); b.Source = new MyDataProvider(number); b.Path = new PropertyPath("Value"); b.IsAsync = true; item.SetBinding(TreeView.ItemsSourceProperty, b); return item; } class MyDataProvider { readonly int m_value; public MyDataProvider(int value) { m_value = value; } public int[] Value { get { // Sleep to mimic a costly operation that should not hang the UI System.Threading.Thread.Sleep(2000); System.Diagnostics.Debug.Write(string.Format("Evaluated for {0}\n", m_value)); return new int[] { m_value * 2, m_value + 1, }; } } } class MyConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { // Convert the int[] of child values into a list of child TreeViewItems. int[] values = (int[])value; IList<TreeViewItem> result = new List<TreeViewItem>(); foreach (int i in values) { result.Add(CreateItem(i)); } return result; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new InvalidOperationException("Not implemented."); } } } Note: I have previously managed to do lazy evaluation of the tree nodes by adding WPF event handlers and directly adding items when the event handlers are triggered. I'm trying to move away from that and use data binding instead (which I understand is more in spirit with "the WPF way").

    Read the article

  • ASP.NET MVC2 - Does Html.EnableClientValidation() work on the nested data model?

    - by warmcold
I have seen the client-side validation examples and videos on the internet that use Html.EnableClientValidation(). But they all target a simple data model. Does Html.EnableClientValidation() work on a nested data model like the one below? public class Person { public Name Name { get; set; } public string Gender { get; set; } } public class Name { public string First { get; set; } public string Last { get; set; } }

    Read the article

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
I'm using NexentaStor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 Midline (7200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMWare ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool. Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390 and http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824 In the future, I'll probably take the advice given in one of the bugzilla workarounds: Workaround Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled. Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.

    Read the article
