Search Results

Search found 6559 results on 263 pages for 'parallel foreach'.

Page 35/263 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • How does Array.ForEach() compare to standard for loop in C#?

    - by DaveN59
I pine for the days when, as a C programmer, I could type: memset(byte_array, 0xFF, sizeof(byte_array)); and get a byte array filled with 0xFF bytes. So, I have been looking for a replacement for this: for (int i = 0; i < byteArray.Length; i++) { byteArray[i] = 0xFF; } Lately, I have been using some of the new C# features and have been using this approach instead: Array.ForEach<byte>(byteArray, b => b = 0xFF); Granted, the second approach seems cleaner and is easier on the eye, but how does the performance compare to using the first approach? Am I introducing needless overhead by using LINQ and generics? Thanks, Dave
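    A minimal Stopwatch benchmark is one way to answer the performance question; the sketch below is illustrative and not part of the original post, and the array size is arbitrary. Note that Array.ForEach passes each element to the delegate by value, so assigning to the lambda parameter does not actually write 0xFF back into the array.

    ```csharp
    using System;
    using System.Diagnostics;

    class FillBenchmark
    {
        static void Main()
        {
            var byteArray = new byte[10000000];

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < byteArray.Length; i++)
            {
                byteArray[i] = 0xFF;   // plain indexed write
            }
            sw.Stop();
            Console.WriteLine("for loop:      {0} ms", sw.ElapsedMilliseconds);

            sw.Reset();
            sw.Start();
            // The delegate receives a copy of each byte; this call does not modify the array.
            Array.ForEach(byteArray, b => b = 0xFF);
            sw.Stop();
            Console.WriteLine("Array.ForEach: {0} ms", sw.ElapsedMilliseconds);
        }
    }
    ```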

    Read the article

  • How Can I Configure Selenium grid to test website in parallel?

    - by prakash.panjwani
Hello friends, I want to use Selenium Grid for my web page testing. I have successfully installed the Selenium Grid demo on my PC and it is running fine. I followed this link to install and run the demo. I am trying to write a Java program using Selenium RC that can run with Selenium Grid for testing the web site, but I don't understand what changes I need to make to the existing Selenium Grid demo so that it will work for my web tests. Can somebody provide a link or example showing how to do that?

    Read the article

  • parallel vms in VMWare Server - how to configure network so they can ping each other?

    - by IronGoofy
    I'm using VMWare Server (currently on Version 1.0.7) and have two VMs that I would like to run at the same time. However, I'm having problems in setting them up so they can ping each other. I've configured them to use 'Bridged' networking. They both obtain an IP address from the DHCP server on my network, but after that they can't ping each other. It seems that only the first one has a functioning network connection (I can ping it from the host machine and Internet connection works), but the other one does not. If it helps, both VMs are running XP SP 3. Any ideas? Thanks!

    Read the article

  • How do I configure a C# web service client to send HTTP request header and body in parallel?

    - by Christopher
Hi, I am using a traditional C# web service client generated in VS2008 .NET 3.5, inheriting from SoapHttpClientProtocol. This is connecting to a remote web service written in Java. All configuration is done in code during client initialization, and can be seen below: ServicePointManager.Expect100Continue = false; ServicePointManager.DefaultConnectionLimit = 10; var client = new APIService { EnableDecompression = true, Url = _url + "?guid=" + Guid.NewGuid(), Credentials = new NetworkCredential(user, password, null), PreAuthenticate = true, Timeout = 5000 // 5 sec }; It all works fine, but the time taken to execute the simplest method call is almost double the network ping time, whereas a Java test client takes roughly the same as the network ping time: C# client ~ 550ms Java client ~ 340ms Network ping ~ 300ms After analyzing the TCP traffic for a session, I discovered the following. The C# client sent TCP packets in this sequence: Client Sends HTTP Headers in one packet. Client Waits For TCP ACK from server. Client Sends HTTP Body in one packet. Client Waits For TCP ACK from server. The Java client sent TCP packets in this sequence: Client Sends HTTP Headers in one packet. Client Sends HTTP Body in one packet. Client Receives ACK for first packet. Client Receives ACK for second packet. Is there any way to configure the C# web service client to send the header/body in parallel as the Java client appears to? Any help or pointers much appreciated.
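    One setting worth trying (not mentioned in the original post, so treat it as an assumption) is disabling Nagle's algorithm on the service point: the wait-for-ACK-before-body pattern is often a Nagle/delayed-ACK interaction, and turning it off lets the small header and body packets go out back to back. Whether it removes the extra round trip here depends on how the runtime buffers the request.

    ```csharp
    // Hedged sketch: disable Nagle's algorithm for this endpoint before creating the client.
    // ServicePointManager.UseNagleAlgorithm sets the default for new service points;
    // FindServicePoint lets you adjust the endpoint that _url resolves to directly.
    ServicePointManager.Expect100Continue = false;
    ServicePointManager.UseNagleAlgorithm = false;

    var servicePoint = ServicePointManager.FindServicePoint(new Uri(_url));
    servicePoint.Expect100Continue = false;
    servicePoint.UseNagleAlgorithm = false;
    ```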

    Read the article

  • How to compare 2 complex spreadsheets running in parallel for consistency with each other?

    - by tbone
I am working on converting a large number of spreadsheets to use a new 3rd party data access library (converting from third party library #1 to third party library #2). FYI: a call to a UDF (user defined function) is placed in a cell, and when that is refreshed, it pulls the data into a pivot table below the formula. Both libraries behave the same and produce the same output, except that small irregularities can arise, such as an additional field being shown in the output pivot table using library #2, which can affect formulas on the sheet if data is being read from the pivot table without using GetPivotData. So I have ~100 of these very complicated (20+ worksheets per workbook) spreadsheets that I have to convert, and run in parallel for a period of time, to see if the output using the new data access library matches the old library. Is there some clever approach to do this, so I don't have to spend a large amount of time analyzing each sheet to determine the specific elements to compare? Two rough ideas come to mind: 1. just create a Validator workbook that has the same # of worksheets, and simply do a Workbook1!Worksheet1!A1 - Workbook2!Worksheet1!A1 for every possible cell on each sheet; 2. roughly the equivalent of #1, but traverse the cells in the 2 books using VBA, and log any cells that do not match. I don't particularly like either idea; can anyone think of something better than this, maybe some 3rd party utility I could buy?

    Read the article

  • Writing functions of tuples conveniently in Scala

    - by Alexey Romanov
Quite a few functions on Map take a function on a key-value tuple as the argument. E.g. def foreach(f: ((A, B)) => Unit): Unit. So I looked for a short way to write an argument to foreach: > val map = Map(1 -> 2, 3 -> 4) map: scala.collection.immutable.Map[Int,Int] = Map(1 -> 2, 3 -> 4) > map.foreach((k, v) => println(k)) error: wrong number of parameters; expected = 1 map.foreach((k, v) => println(k)) ^ > map.foreach({(k, v) => println(k)}) error: wrong number of parameters; expected = 1 map.foreach({(k, v) => println(k)}) ^ > map.foreach(case (k, v) => println(k)) error: illegal start of simple expression map.foreach(case (k, v) => println(k)) ^ I can do > map.foreach(_ match {case (k, v) => println(k)}) 1 3 Any better alternatives?

    Read the article

  • Should I learn two (or more) programming languages in parallel?

    - by c_maker
    I found entries on this site about learning a new programming language, however, I have not come across anything that talks about the advantages and disadvantages of learning two languages at the same time. Let's say my goal is to learn two new languages in a year. I understand that the definition of learning a new language is different for everyone and you can probably never know everything about a language. I believe in most cases the following things are enough to include the language in your resume and say that you are proficient in it (list is not in any particular order): Know its syntax so you can write a simple program in it Compare its underlying concepts with concepts of other languages Know best practices Know what libraries are available Know in what situations to use it Understand the flow of a more complex program At least know most of what you do not know I would probably look for a good book and pick an open source project for both of these languages to start with. My questions: Is it best to spend 5 months learning language#1 then 5 months learning language#2, or should you mix the two. Mixing them I mean you work on them in parallel. Should you pick two languages that are similar or different? Are there any advantages/disadvantages of let's say learning Lisp in tandem with Ruby? Is it a good idea to pick two languages with similar syntax or would it be too confusing? Please tell me what your experiences are regarding this. Does it make a difference if you are a beginner or a senior programmer?

    Read the article

  • making query from different related tables using codeigniter

    - by fatemeh karam
    I'm using codeigniter as i mentioned this is a part of my view code foreach($projects_query as $row)// $row indicates the projects { ?> <tr><td><h3><button type="submit" class="button red-gradient glossy" name = "project_click" > <?php echo $row->txtTaskName; ?></button></h3></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr> <?php foreach($tasks_query as $row2) { // if( $row->txtTaskName == "TestProject") if($row->intTaskID == $row2->intInside)// intInside indicades that the current task($row2) is the subset of which task (system , subsystem or project) { if($row2->intSummary == 0)//if the task(the system) is an executable task & doesn't have any subtask: { $query_team_user_id = $this->admin_in_out_model->get_user_team_task_query($row2->intTaskID);//runs the function and generates a query from tbl_userteamtask where intTaskID equals to the selected row's intTaskID foreach($query_team_user_id as $row_teamid) { $query_teamname = $this->admin_in_out_model->get_team_name($row_teamid->intTeamID); $query_fn_ln = $this->admin_in_out_model->get_fn_ln_from_userid($row_teamid->intUserID); foreach($query_teamname as $row_teamname) {?> <tr><td></td><td></td><td><h4> <?php echo $row2->txtTaskName;?></h4></td> <td><b><font color='#F33558'><?php echo $row_teamname->txtTeamName;?></font></b></td> <?php } foreach($query_fn_ln as $row_f_l_name) {?> <td> <?php echo $row_f_l_name->txtFirstname." ".$row_f_l_name->txtLastname;?></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td> <?php }?> </tr> <?php } } else{ ?> <tr><td></td><td></td><td><h4> <?php echo $row2->txtTaskName;?></h4></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><?php } foreach($tasks_query as $row_subsystems) { if($row_subsystems->intInside == $row2->intTaskID )//if the task is the subtask of a system(it means the task is a subsystem) { if($row_subsystems->intSummary == 0)//if the task is an executable task & doesn't have any subtask: { $query_team_user_id = $this->admin_in_out_model->get_user_team_task_query($row_subsystems->intTaskID); foreach($query_team_user_id as $row_teamid) {?> <tr><?php $query_teamname = $this->admin_in_out_model->get_team_name($row_teamid->intTeamID); $query_fn_ln = $this->admin_in_out_model->get_fn_ln_from_userid($row_teamid->intUserID); foreach($query_teamname as $row_teamname) {?> <td></td><td></td><td><h5><?php echo $row_subsystems->txtTaskName?></h5><br/></td> <td><b><font color='#F33558'><?php echo $row_teamname->txtTeamName;?></font></b></td><?php } foreach($query_fn_ln as $row_f_l_name) {?> <td><?php echo $row_f_l_name->txtFirstname." 
".$row_f_l_name->txtLastname;?></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><?php }?> </tr><?php } } else{ ?><tr><td></td><td></td><td><h5><?php echo $row_subsystems->txtTaskName?></h5></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><?php } foreach($tasks_query as $row_tasks) { if($row_tasks->intInside == $row_subsystems->intTaskID )//if the task is the subtask of a subsystem { if($row_tasks->intSummary == 0)//if the task is an executable task & doesn't have any subtask: { $query_team_user_id = $this->admin_in_out_model->get_user_team_task_query($row_tasks->intTaskID); foreach($query_team_user_id as $row_teamid) {?> <tr><?php $query_teamname = $this->admin_in_out_model->get_team_name($row_teamid->intTeamID); $query_fn_ln = $this->admin_in_out_model->get_fn_ln_from_userid($row_teamid->intUserID); foreach($query_teamname as $row_teamname) {?> <td></td><td></td><td><b><?php echo $row_tasks->txtTaskName;?></b></td> <td><b><font color='#F33558'><?php echo $row_teamname->txtTeamName;?></font></b></td><?php } foreach($query_fn_ln as $row_f_l_name) {?> <td><?php echo $row_f_l_name->txtFirstname." ".$row_f_l_name->txtLastname;?></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><?php }?> </tr><?php } } } } } } } } }?> and in controller i have $projects_query = $this->admin_in_out_model->get_projects(); $tasks_query = $this->admin_in_out_model->get_systems(); $userteamtask = $this->admin_in_out_model->get_user_team_task(); $data['tasks_query'] = $tasks_query; $data['projects_query'] = $projects_query; $this->load->view('project_view',$data); but as you see I'm calling my model functions within the view how can i do something else to do this i mean not calling my model function in my view I have to add that, my model function have parameters these are the model functions: function get_projects() { $this -> db -> select('*'); $this -> db -> from('tbl_task'); $this -> db -> where('intInside','0'); $query = $this->db->get(); return $query->result(); } function get_systems() { $this -> db -> select('*'); $this -> db -> from('tbl_task '); $this -> db -> where('intInside <> ','0'); $query = $this->db->get(); return $query->result(); } function get_user_team_task_query($task_id)//gets information from tbl_userteamtask where the field intTaskID is equal to $task_id { $this -> db -> select('*'); $this -> db -> from('tbl_userteamtask'); $this -> db -> where('intTaskID',$task_id); $query_teamid = $this->db->get(); return $query_teamid->result(); } function get_user_team_task()//gets information from tbl_userteamtask where the field intTaskID is equal to $task_id { $this -> db -> select('*'); $this -> db -> from('tbl_userteamtask'); // $this -> db -> where('intTaskID',$task_id); $query_teamid = $this->db->get(); return $query_teamid->result(); } function get_team_name($query_teamid) { $this -> db -> select('*'); $this -> db -> from('tbl_team'); $this -> db -> where('intTeamID',$query_teamid); $query_teamname = $this->db->get(); return $query_teamname->result(); } function get_user_name($query_userid) { $this -> db -> select('*'); $this -> db -> from('tbl_user'); $this -> db -> where('intUserID',$query_userid); $query_username = $this->db->get(); return $query_username->result(); } function get_fn_ln_from_userid($selected_id) { $this -> db -> select('tbl_user.intUserID, tbl_user.intPersonID,tbl_person.intPersonID,tbl_person.txtFirstname, tbl_person.txtLastname'); $this -> db -> from('tbl_user , tbl_person'); $where = 
"tbl_user.intPersonID = tbl_person.intPersonID "; $this -> db -> where($where); $this -> db -> where('tbl_user.intUserID', $selected_id); $query = $this -> db -> get();//makes query from DB return $query->result(); } do I have to use subquery ? is this true? i mean can i do this? foreach( $data as $key => $each ) { $data[$key]['team_id'] = $this->get_user_team_task_query( $each['intTaskID'] ); foreach($data[$key]['team_id'] as $key_teamname => $each) { $data[$key_teamname]['team_name'] = $this->get_team_name( $each['intTeamID'] ); } } the model code: foreach( $data as $key => $each ) { $data[$key]['intTaskID'] = $each['intTaskID']; $data[$key]['team_id'] = $this->get_user_team_task_query( $each['intTaskID'] ); foreach($data[$key]['team_id'] as $key => $each) { $data[$key]['team_name'] = $this->get_team_name( $each['intTeamID'] ); #fetching of the teamname and saving in the array $data[$key]['user_name'] = $this->get_fn_ln_from_userid( $each['intUserID'] ); foreach($data[$key]['user_name'] as $key => $each) { $data[$key]['first_name'] = $each['txtFirstname'] ; $data[$key]['last_name'] = $each['txtLastname'] ; } $data[$key]['first_name'] = $data[$key]['first_name']; $data[$key]['last_name'] = $data[$key]['last_name']; } }

    Read the article

  • Parallelism in .NET – Part 8, PLINQ’s ForAll Method

    - by Reed
Parallel LINQ extends LINQ to Objects, and is typically very similar.  However, as I previously discussed, there are some differences.  Although the standard way to handle simple Data Parallelism is via Parallel.ForEach, it’s possible to do the same thing via PLINQ. PLINQ adds a new method unavailable in standard LINQ which provides new functionality… LINQ is designed to provide a much simpler way of handling querying, including filtering, ordering, grouping, and many other benefits.  Reading the description in LINQ to Objects on MSDN, it becomes clear that the thinking behind LINQ deals with retrieval of data.  LINQ works by adding a functional programming style on top of .NET, allowing us to express filters in terms of predicate functions, for example. PLINQ is, generally, very similar.  Typically, when using PLINQ, we write declarative statements to filter a dataset or perform an aggregation.  However, PLINQ adds one new method, which provides a very different purpose: ForAll. The ForAll method is defined on ParallelEnumerable, and will work upon any ParallelQuery<T>.  Unlike the sequence operators in LINQ and PLINQ, ForAll is intended to cause side effects.  It does not filter a collection, but rather invokes an action on each element of the collection. At first glance, this seems like a bad idea.  For example, Eric Lippert clearly explained two philosophical objections to providing an IEnumerable<T>.ForEach extension method, one of which still applies when parallelized.  The sole purpose of this method is to cause side effects, and as such, I agree that the ForAll method “violates the functional programming principles that all the other sequence operators are based upon”, in exactly the same manner an IEnumerable<T>.ForEach extension method would violate these principles.  Eric Lippert’s second reason for disliking a ForEach extension method does not necessarily apply to ForAll – replacing ForAll with a call to Parallel.ForEach has the same closure semantics, so there is no loss there. Although ForAll may have philosophical issues, there is a pragmatic reason to include this method.  Without ForAll, we would take a fairly serious performance hit in many situations.  Often, we need to perform some filtering or grouping, then perform an action using the results of our filter.  Using a standard foreach statement to perform our action would avoid this philosophical issue: // Filter our collection var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() ); // Now perform an action foreach (var item in filteredItems) { // These will now run serially item.DoSomething(); } This would cause a loss in performance, since we lose any parallelism in place, and cause all of our actions to be run serially. 
We could easily use a Parallel.ForEach instead, which adds parallelism to the actions: // Filter our collection var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() ); // Now perform an action once the filter completes Parallel.ForEach(filteredItems, item => { // These will now run in parallel item.DoSomething(); }); This is a noticeable improvement, since both our filtering and our actions run parallelized.  However, there is still a large bottleneck in place here.  The problem lies with my comment “perform an action once the filter completes”.  Here, we’re parallelizing the filter, then collecting all of the results, blocking until the filter completes.  Once the filtering of every element is completed, we then repartition the results of the filter, reschedule into multiple threads, and perform the action on each element.  By moving this into two separate statements, we potentially double our parallelization overhead, since we’re forcing the work to be partitioned and scheduled twice as many times. This is where the pragmatism comes into play.  By violating our functional principles, we gain the ability to avoid the overhead and cost of rescheduling the work: // Perform an action on the results of our filter collection .AsParallel() .Where( i => i.SomePredicate() ) .ForAll( i => i.DoSomething() ); The ability to avoid the scheduling overhead is a compelling reason to use ForAll.  This really goes back to one of the key points I discussed in data parallelism: Partition your problem in a way to place the most work possible into each task.  Here, this means leaving the statement attached to the expression, even though it causes side effects and is not standard usage for LINQ. This leads to my one guideline for using ForAll: The ForAll extension method should only be used to process the results of a parallel query, as returned by a PLINQ expression. Any other usage scenario should use Parallel.ForEach, instead.
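    For reference, here is a minimal self-contained version of the pattern the article describes; the names and the filter are illustrative only, not taken from the post.

    ```csharp
    using System;
    using System.Linq;

    class ForAllSketch
    {
        static void Main()
        {
            int[] numbers = Enumerable.Range(0, 1000).ToArray();

            // Filter in parallel and perform the side effect in the same PLINQ pipeline,
            // avoiding the extra partition/schedule step of a separate Parallel.ForEach.
            numbers
                .AsParallel()
                .Where(n => n % 7 == 0)
                .ForAll(n => Console.WriteLine(n));
        }
    }
    ```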

    Read the article

  • How can I use multi-threading with a "for" or "foreach" loop?

    - by saafh
I am trying to run the for loop in a separate thread so that the UI stays responsive and the progress bar is visible. The problem is that I don't know how to do that :). In this code, the process starts in a separate thread, but the next part of the code is executed at the same time: the MessageBox is displayed and the results are never returned (e.g. the listbox's selected index property is never set). It doesn't work even if I use TaskEx.Delay(). TaskEx.Run(() => { for (int i = 0; i < sResults.Count(); i++) { if (sResults.ElementAt(i).DisplayIndexForSearchListBox.Trim().Contains(ayaStr)) { lstGoto.SelectedIndex = i; lstGoto_SelectionChanged(lstReadingSearchResults, null); IsIndexMatched = true; break; } } }); //TaskEx.Delay(1000); if (IsIndexMatched == true) stkPanelGoto.Visibility = Visibility.Collapsed; else //the index didn't match { MessagePrompt.ShowMessage("The test'" + ayaStr + "' does not exist.", "Warning!"); } Could anyone please tell me how I can use multi-threading with a "for" or "foreach" loop?
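    One way to do this is a hedged sketch along the following lines, assuming the Async CTP's TaskEx from the question is available and the surrounding fields and controls exist; matchedIndex is a new local introduced here for illustration. The idea is to do only the search on the background thread and move the UI work into a continuation that runs on the UI thread once the loop has finished.

    ```csharp
    int matchedIndex = -1;  // hypothetical local to carry the result out of the background loop

    TaskEx.Run(() =>
    {
        // Background thread: only search here, don't touch UI controls.
        for (int i = 0; i < sResults.Count(); i++)
        {
            if (sResults.ElementAt(i).DisplayIndexForSearchListBox.Trim().Contains(ayaStr))
            {
                matchedIndex = i;
                IsIndexMatched = true;
                break;
            }
        }
    }).ContinueWith(t =>
    {
        // UI thread: runs only after the loop above has completed.
        if (IsIndexMatched)
        {
            lstGoto.SelectedIndex = matchedIndex;
            lstGoto_SelectionChanged(lstReadingSearchResults, null);
            stkPanelGoto.Visibility = Visibility.Collapsed;
        }
        else
        {
            MessagePrompt.ShowMessage("The test '" + ayaStr + "' does not exist.", "Warning!");
        }
    }, TaskScheduler.FromCurrentSynchronizationContext());
    ```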

    Read the article

  • How would i stop this foreach loop after 3 iterations?

    - by Tapha
    Here is the loop. foreach($results->results as $result){ echo '<div id="twitter_status">'; echo '<img src="'.$result->profile_image_url.'" class="twitter_image">'; $text_n = $result->text; echo "<div id='text_twit'>".$text_n."</div>"; echo '<div id="twitter_small">'; echo "<span id='link_user'".'<a href="http://www.twitter.com/'.$result->from_user.'">'.$result->from_user.'</a></span>'; $date = $result->created_at; $dateFormat = new DateIntervalFormat(); $time = strtotime($result->created_at); echo "<div class='time'>"; print sprintf('Submitted %s ago', $dateFormat->getInterval($time)); echo '</div>'; echo "</div>"; echo "</div>"; Thanks!!!! alot.

    Read the article

  • Deleted array value still showing up on foreach loop in AS3 (bug in flash?)

    - by nexus
    It took me many hours to narrow down a problem in some code to this reproducible error, which seems to me like a bug in AVM2. Can anyone shed light on why this is occurring or how to fix it? When the value at index 1 is deleted and a value is subsequently set at index 0, the non-existent (undefined) value at index 1 will now show up in a foreach loop. I have only been able to produce this outcome with index 1 and 0 (not any other n and n-1). Run this code: package { import flash.display.Sprite; public class Main extends Sprite { public function Main():void { var bar : Array = new Array(6); out(bar); //proper behavior trace("bar[1] = 1", bar[1] = 1); out(bar); //proper behavior trace("delete bar[1]", delete bar[1]); out(bar); //proper behavior trace("bar[4] = 4", bar[4] = 4); out(bar); //for each loop will now iterate over the undefined position at index 1 trace("bar[0] = 0", bar[0] = 0); out(bar); trace("bar[3] = 3", bar[3] = 3); out(bar); } private function out(bar:Array):void { trace(bar); for each(var i : * in bar) { trace(i); } } } } It will give this output: ,,,,, bar[1] = 1 1 ,1,,,, 1 delete bar[1] true ,,,,, bar[4] = 4 4 ,,,,4, 4 bar[0] = 0 0 0,,,,4, 0 undefined 4 bar[3] = 3 3 0,,,3,4, 0 undefined 4 3

    Read the article

  • Generic class for performing mass-parallel queries. Feedback?

    - by Aaron
    I don't understand why, but there appears to be no mechanism in the client library for performing many queries in parallel for Windows Azure Table Storage. I've created a template class that can be used to save considerable time, and you're welcome to use it however you wish. I would appreciate however, if you could pick it apart, and provide feedback on how to improve this class. public class AsyncDataQuery<T> where T: new() { public AsyncDataQuery(bool preserve_order) { m_preserve_order = preserve_order; this.Queries = new List<CloudTableQuery<T>>(1000); } public void AddQuery(IQueryable<T> query) { var data_query = (DataServiceQuery<T>)query; var uri = data_query.RequestUri; // required this.Queries.Add(new CloudTableQuery<T>(data_query)); } /// <summary> /// Blocking but still optimized. /// </summary> public List<T> Execute() { this.BeginAsync(); return this.EndAsync(); } public void BeginAsync() { if (m_preserve_order == true) { this.Items = new List<T>(Queries.Count); for (var i = 0; i < Queries.Count; i++) { this.Items.Add(new T()); } } else { this.Items = new List<T>(Queries.Count * 2); } m_wait = new ManualResetEvent(false); for (var i = 0; i < Queries.Count; i++) { var query = Queries[i]; query.BeginExecuteSegmented(callback, i); } } public List<T> EndAsync() { m_wait.WaitOne(); return this.Items; } private List<T> Items { get; set; } private List<CloudTableQuery<T>> Queries { get; set; } private bool m_preserve_order; private ManualResetEvent m_wait; private int m_completed = 0; private void callback(IAsyncResult ar) { int i = (int)ar.AsyncState; CloudTableQuery<T> query = Queries[i]; var response = query.EndExecuteSegmented(ar); if (m_preserve_order == true) { // preserve ordering only supports one result per query this.Items[i] = response.Results.First(); } else { // add any number of items this.Items.AddRange(response.Results); } if (response.HasMoreResults == true) { // more data to pull query.BeginExecuteSegmented(response.ContinuationToken, callback, i); return; } m_completed = Interlocked.Increment(ref m_completed); if (m_completed == Queries.Count) { m_wait.Set(); } } }
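    A hypothetical usage sketch for the class above; the table client, entity type, table name and partition keys are all illustrative and not from the original post. Each added query becomes one parallel round trip to table storage.

    ```csharp
    // Assumes the classic storage client library, matching the post (CloudTableQuery<T> etc.).
    var context = tableClient.GetDataServiceContext();

    var batch = new AsyncDataQuery<CustomerEntity>(preserve_order: false);
    foreach (var partition in partitionKeys)
    {
        // Each LINQ query compiles to a DataServiceQuery<CustomerEntity>, as AddQuery expects.
        var query = from e in context.CreateQuery<CustomerEntity>("Customers")
                    where e.PartitionKey == partition
                    select e;
        batch.AddQuery(query);
    }

    // Blocking call; the queries themselves still execute in parallel.
    List<CustomerEntity> results = batch.Execute();
    ```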

    Read the article

  • Performance due to entity update

    - by Rizzo
I always think about 2 ways to code the global Step() function, both with pros and cons. Please note that AIStep is just there to provide one extra kind of step for whoever wants it. // Approach 1 step foreach( entity in entities ) { entity.DeltaStep( delta_time ); if( time_for_fixed_step ) entity.FixedStep(); if( time_for_AI_step ) entity.AIStep(); ... // all kinds of updates you want } PRO: you only have to iterate once over all entities. CON: fidelity could be lower in some scenarios, since entity.FixedStep() isn't executed for all entities at the same time. // Approach 2 step foreach( entity in entities ) entity.DeltaStep( delta_time ); if( time_for_fixed_step ) foreach( entity in entities ) entity.FixedStep(); if( time_for_AI_step ) foreach( entity in entities ) entity.AIStep(); // all kinds of updates you want, separated PRO: fidelity on FixedStep is higher; there shouldn't be much time between the FixedStep updates of all the entities, unlike Approach 1 where other updates may run before FixedStep() comes. CON: you iterate once for each kind of update. Also, a third approach could be a hybrid between the two, something along the lines of: foreach( entity in entities ) { entity.DeltaStep( delta_time ); if( time_for_AI_step ) entity.AIStep(); // all kinds of updates you want EXCEPT FixedStep() } if( time_for_fixed_step ) { foreach( entity in entities ) { entity.FixedStep(); } } Just two loops, not caring about time fidelity for anything other than FixedStep(). Any thoughts on this? Does it really matter to do all the steps at once, or am I worrying about problems that don't exist?
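    A compact C# rendering of the hybrid approach described above; this is a sketch only, and the Entity type and fields are illustrative.

    ```csharp
    // Hybrid step: delta and AI updates in a single pass, FixedStep in its own pass when due.
    void Step(float deltaTime, bool timeForFixedStep, bool timeForAiStep)
    {
        foreach (var entity in entities)
        {
            entity.DeltaStep(deltaTime);
            if (timeForAiStep)
                entity.AIStep();
            // ...all other per-entity updates except FixedStep()
        }

        if (timeForFixedStep)
        {
            foreach (var entity in entities)
                entity.FixedStep();
        }
    }
    ```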

    Read the article

  • Determinism in multiplayer simulation with Box2D, and single computer

    - by Jake
I wrote a small test car-driving multiplayer game with Box2D using TCP server-client communications. I ran 1 instance of server.exe and 2 instances of client.exe on the same machine where I code and compile the executables. I type inputs (WASD for simple car movement) into one of the 2 clients and I can get both clients to update the simulation. There are 2 cars in the simulation. As long as the cars do not collide, I get identical output on both client.exe. I can run the car(s) around for as long as I want and they still update the same. However, if I start to collide the cars, very quickly they go out of sync. My tools: Windows 7, C++, MSVS 2010, Box2D, freeGlut. My pseudocode: // client.exe void timer(int value) { tcpServer.send(my_inputs); foreach(i = player including myself) inputs[i] = tcpServer.receive(); foreach(i = player including myself) players[i].process(inputs[i]); myb2World.step(33, 8, 6); // Box2D world step simulation foreach(i = player including myself) renderer.render(player[i]); glutTimerFunc(33, timer, 0); } // server.exe void serviceloop { while(all clients alive) { foreach(c = clients) tcpClients[c].receive(&inputs[c]); // send input of each client to all clients foreach(source = clients) { foreach(dest = clients) { tcpClients[dest].send(inputs[source]); } } } } I have read all over the internet and SE the following claims (paraphrased): Box2D is deterministic as long as the floating point architecture/implementation is the same. (For any deterministic engine) Determinism is guaranteed if playback of recorded inputs happens on the same machine, with an exe compiled using the same compiler and machine. Additionally, my server.exe and client.exe game loops are single-threaded with blocking socket calls and a fixed time step. Question: Can anyone explain what I did wrong to get different Box2D output?

    Read the article

  • Template syntax for users - is there a right way to do it?

    - by RickM
Ok, I'm in the middle of building a SaaS system, and as part of that, the hosted clients need to be able to edit certain layout templates, basically just HTML, CSS and JavaScript files. I'm obviously going to want to use a template syntax here, as it would be dumb to let people execute PHP code, so in this instance a template syntax does need to be used. I know that in the grand scale of things this is a very minor thing, but what template syntax do you use, and why? Is there one that's considered better than others? I've seen all sorts being used with no real consistency, for example: Smarty Style: {$someVar} {foreach from="foo" item="bar"} {$bar.food} {/foreach} ASP Style: {% someVar %} {% foreach foo as bar %} {% bar.food %} {% endforeach %} HTML Style: <someVar> <foreach from="foo" item="bar"> <bar:food> </foreach> PyroCMS/FuelPHP "LEX" Style: {{ someVar }} {{ foreach from="foo" item="bar" }} {{ bar:food }} {{ endforeach }} Obviously these aren't 100% accurate (for example, LEX is used alongside PHP for loops), and are only to give you an example of what I mean. What, in your opinion, would be the best one (if any) to go with? I ask this bearing in mind that people using this are likely to be novice users. I did look around at a bunch of hosted CMS and e-commerce systems, as these seem to make use of user-editable templates, and most seem to be using some form of their own syntax. I should note that whatever style I end up going with, it will be with a custom template handler due to the complexity of the system and how template files are stored. Plus I'd not want to touch the likes of Smarty with a barge pole!

    Read the article

  • Programação paralela no .NET Framework 4 – Parte I

    - by anobre
Introduction: The advance of technology in recent years has provided, at low cost, access to workstations with many CPUs. Today it is easy to find client machines with 2, 4 or even 8 cores, not counting the "super-servers" with up to 36 processors :) From Wikipedia: the central processing unit (CPU) or processor is the part of a computer system that executes the instructions of a computer program, and is the primary element in carrying out a computer's functions. The term has been used in the computer industry at least since the early 1960s[1]. The form, design and implementation of CPUs have changed dramatically since the first examples, but their fundamental operation remains the same. By analogy, it would be very useful to delegate real-world tasks that can be carried out independently to different people, achieving higher performance and productivity. Parallel computing is based on the idea that a larger problem can be divided into smaller problems that are solved in parallel. This approach has been used for some time in HPC (high-performance computing), and the hardware advances of recent years, together with concerns about power consumption, have made it more attractive and accessible in any environment. In the .NET Framework: the .NET platform provides a runtime, libraries and tools that give easy and fast access to parallel programming, without working directly with threads and the thread pool. This series of posts will present all the available features, starting the study with the TPL, or Task Parallel Library. Task Parallel Library: the TPL is a set of types located in the System.Threading and System.Threading.Tasks namespaces, as of version 4 of the framework. Starting with version 4, the TPL is the recommended way to write parallel and multithreaded code. http://msdn.microsoft.com/en-us/library/dd460717(v=VS.100).aspx Task Parallelism: the term "task parallelism" refers to one or more tasks being executed simultaneously. Think of a task as a method. The easiest way to run tasks in parallel is the code below: Parallel.Invoke(() => TrabalhoInicial(), () => TrabalhoSeguinte()); What actually happens? Behind the scenes, this statement implicitly creates objects of type Task, which represents an asynchronous (not necessarily parallel) operation: public class Task : IAsyncResult, IDisposable It is also possible to instantiate Tasks explicitly, as a more verbose alternative to Parallel.Invoke: var task = new Task(() => TrabalhoInicial()); task.Start(); Another option, which creates a Task and schedules it right away, is: var t = Task<int>.Factory.StartNew(() => TrabalhoInicialComValor()); var t2 = Task<int>.Factory.StartNew(() => TrabalhoSeguinteComValor()); The basic difference between the two approaches is that the first gives you an explicit starting point; it is used when you do not want instantiation and scheduling to happen in a single operation, as they do in the second approach. Data Parallelism: still part of the TPL, data parallelism refers to scenarios where the same operation must be executed in parallel over the elements of a collection or array, through the parallel For and ForEach constructs. The basic idea is to take each element of the collection (or array) and work on them with several threads concurrently. 
The key class for this scenario is System.Threading.Tasks.Parallel: // Sequential version foreach (var item in sourceCollection) { Process(item); } // Parallel equivalent Parallel.ForEach(sourceCollection, item => Process(item)); Not so complicated, is it? :) Demonstration: a video with examples (screencast) can be accessed here. Be careful! However eager you are to start coding, watch out for some basic parallelism pitfalls; this link describes a few of those situations. Cheers.
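A small complementary sketch (not part of the original article): the Task<int> instances created with StartNew above expose their return values through Result, which blocks until each task has completed.

```csharp
var t  = Task<int>.Factory.StartNew(() => TrabalhoInicialComValor());
var t2 = Task<int>.Factory.StartNew(() => TrabalhoSeguinteComValor());

// Result blocks the calling thread until the corresponding task finishes.
int total = t.Result + t2.Result;
Console.WriteLine(total);
```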

    Read the article

  • Best of "The Moth" 2009

    Not wanting to break the tradition (2004, 2005, 2006, 2007, 2008) below are some blog posts I picked from my blogging last year. As you can see by comparing with the links above, 2009 marks my lowest output yet with only 64 posts, but hopefully the quality has not been lowered ;-) 1. Parallel Computing was a strong focus of course. You can find links to most of that content aggregated in the post where I shared my entire parallelism session. Related to that was the link to the screencast I shared of the Parallel Computing Features Tour.2. Parallel Debugging is obviously part of the parallel computing links above, but I created more in depth content around that area of Visual Studio 2010 since it is the one I directly own. I aggregated all the links to that content in my post: Parallel Debugging.3. High Performance Computing through clusters is an area I'll be focusing more next year (besides parallelism on a single node on the client captured above) and I started introducing the topic on my blog this year. Read the (currently) 6 posts bottom up from my category on HPC.4. Windows 7 Task Manager. In April I shared a screenshot which was the most "borrowed" item from my blog (I should have watermarked it ;-)5. Windows Phone non-support in VS2010. Did my bit to spread clarification of the story.6. Window positions in Visual Studio is a long post, but one that I strongly advise all VS users to read and benefit from.7. Bug Triage gives you a glimpse into one thing all (Microsoft) product teams do.If you haven't yet, you can subscribe via one of the options on the left. Either way, thank you for staying tuned… Happy New Year! Comments about this post welcome at the original blog.

    Read the article

  • Microsoft Technical Computing

    - by Daniel Moth
    In the past I have described the team I belong to here at Microsoft (Parallel Computing Platform) in terms of contributing to Visual Studio and related products, e.g. .NET Framework. To be more precise, our team is part of the Technical Computing group, which is still part of the Developer Division. This was officially announced externally earlier this month in an exec email (from Bob Muglia, the president of STB, to which DevDiv belongs). Here is an extract: "… As we build the Technical Computing initiative, we will invest in three core areas: 1. Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high- performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable ‘just-in-time’ processing. This platform will help ensure processing resources are available whenever they are needed—reliably, consistently and quickly. 2. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today’s modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop… to the cluster… to the cloud. 3. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. …" Our Parallel Computing Platform team is directly responsible for item #2, and we work very closely with the teams delivering items #1 and #3. At the same time as the exec email, our marketing team unveiled a website with interviews that I invite you to check out: Modeling the World. Comments about this post welcome at the original blog.

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
Using OSCH: Beyond Hello World In the previous post we discussed a “Hello World” example for OSCH focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end-to-end OSCH external tables working, and now you want to ramp up the size of the loads. Using OSCH External Tables for Access and Loading OSCH external tables are no different from any other Oracle external tables.  They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or use the same SQL access to load a table in Oracle. INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints. ALTER SESSION FORCE PARALLEL DML PARALLEL  8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table; There are various ways of hinting at what level of DOP you want to use.  The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section).  Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clause respectively. /*+ parallel(my_oracle_table,8) *//*+ parallel(my_hdfs_external_table,8) */ Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load.  In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks.  It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts.  The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes.   The combination of "NOLOGGING" and use of the "append" hint disables REDO logging, and its overhead.  The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table. Determine Your DOP It might feel natural to build your datasets in Hadoop, then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user. 
The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances. And suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when your are first seeding tables in a new Oracle database, or there is a time where normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible. Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system Determining the Number of Location Files Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file.) Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool. Rule 3: The number of location files chosen should be a small multiple of the DOP Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files, and assuming they have equal work loads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end to end processing time. So for this DOP using 8, 16, or 32 location files would be a good idea. 
Determining the Number of HDFS Files Let’s start with the next rule and then explain it: Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load.  It will generate N number of location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts in some other location file, and so on). The tools ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same.  Applying Rule 4 above to our DOP of 8, we could divide the workload into160 files that were approximately 10 GB in size.  For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file.) As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB size. The end to end load time was virtually the same. However when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian, everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop them up into the right number of files roughly the same size, derived from the DOP that you expect to use for loading. Next Steps So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story. 
They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods.  This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article
