Search Results

Search found 4174 results on 167 pages for 'tasks'.

Page 31/167

  • Custom array sort in perl

    - by ABach
    I have a Perl array of to-do tasks that looks like this: @todos = ( "1 (A) Complete online final @evm4700 t:2010-06-02", "3 Write thank-you t:2010-06-10", "4 (B) Clean t:2010-05-30", "5 Donate to LSF t:2010-06-02", "6 (A) t:2010-05-30 Pick up dry cleaning", "2 (C) Call Chris Johnson t:2010-06-01" ); That first number is the task's ID. If a task has ([A-Z]) next to it, that defines the task's priority. What I want to do is sort the tasks array in a way that places the prioritized items first (and in order): @todos = ( "1 (A) Complete online final @evm4700 t:2010-06-02", "6 (A) t:2010-05-30 Pick up dry cleaning", "4 (B) Clean t:2010-05-30", "2 (C) Call Chris Johnson t:2010-06-01", "3 Write thank-you t:2010-06-10", "5 Donate to LSF t:2010-06-02", ); I cannot use a regular sort() because of those IDs next to the tasks, so I'm assuming that some sort of customized sorting subroutine is needed. However, my knowledge of how to do this efficiently in Perl is minimal. Thanks, all.

    Read the article

  • 'foreach' failing when using Parallel Task Library

    - by Chris Arnold
    The following code creates the correct number of files, but every file contains the contents of the first list. Can anyone spot what I've done wrong please? private IList<List<string>> GetLists() { // Code omitted for brevity... } private void DoSomethingInParallel() { var lists = GetLists(); var tasks = new List<Task>(); var factory = new TaskFactory(); foreach (var list in lists) { tasks.Add(factory.StartNew(() => { WriteListToLogFile(list); })); } Task.WaitAll(tasks.ToArray()); }
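    A likely explanation (this is the classic closure-over-the-loop-variable behaviour of C# compilers prior to C# 5, where the foreach variable is a single variable reused across iterations): every lambda captures the same list variable, which holds the last list by the time the tasks actually run. A minimal sketch of the usual fix is to copy the loop variable into a local before capturing it (capturedList is just an illustrative name):

        foreach (var list in lists)
        {
            var capturedList = list; // each iteration gets its own variable for the closure
            tasks.Add(factory.StartNew(() => WriteListToLogFile(capturedList)));
        }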

    Read the article

  • SQL Server 2008 Table Maintenance - Rebuild, Reorganize, Update Stats, Check Integrity etc HELP!

    - by Albert
    I'm migrating a ~15GB database from SQL Server 2005 to a new server running SQL Server 2008, and along with that I need to create all the new Maintenance Plans. I can take care of all the backup stuff, but the table maintenance baffles me some. Does anyone have any input on how often I should perform (or how often you do them would suffice too) the following tasks? Check Database Integrity, Rebuild Indexes, Reorganize Indexes, Update Statistics, Shrink Database? Am I missing anything? Again, if you can share how often you run these tasks that would be great, and/or any general information about your approach to table maintenance would be helpful. Lastly, does it matter what order I run these tasks in (when setting up a job)?

    Read the article

  • Custom validation works in development but not in unit test

    - by Geolev
    I want to validate that at least one of two columns have a value in my model. I found somewhere on the web that I could create a custom validator as follows: # Check for the presence of one or another field: # :validates_presence_of_at_least_one_field :last_name, :company_name - would require either last_name or company_name to be filled in # also works with arrays # :validates_presence_of_at_least_one_field :email, [:name, :address, :city, :state] - would require email or a mailing type address module ActiveRecord module Validations module ClassMethods def validates_presence_of_at_least_one_field(*attr_names) msg = attr_names.collect {|a| a.is_a?(Array) ? " ( #{a.join(", ")} ) " : a.to_s}.join(", ") + "can't all be blank. At least one field must be filled in." configuration = { :on => :save, :message => msg } configuration.update(attr_names.extract_options!) send(validation_method(configuration[:on]), configuration) do |record| found = false attr_names.each do |a| a = [a] unless a.is_a?(Array) found = true a.each do |attr| value = record.respond_to?(attr.to_s) ? record.send(attr.to_s) : record[attr.to_s] found = !value.blank? end break if found end record.errors.add_to_base(configuration[:message]) unless found end end end end end I put this in a file called lib/acs_validator.rb in my project and added "require 'acs_validator'" to my environment.rb. This does exactly what I want. It works perfectly when I manually test it in the development environment but when I write a unit test it breaks my test environment. This is my unit test: require 'test_helper' class CustomerTest < ActiveSupport::TestCase # Replace this with your real tests. test "the truth" do assert true end test "customer not valid" do puts "customer not valid" customer = Customer.new assert !customer.valid? assert customer.errors.invalid?(:subdomain) assert_equal "Company Name and Last Name can't both be blank.", customer.errors.on(:contact_lname) end end This is my model: class Customer < ActiveRecord::Base validates_presence_of :subdomain validates_presence_of_at_least_one_field :customer_company_name, :contact_lname, :message => "Company Name and Last Name can't both be blank." has_one :service_plan end When I run the unit test, I get the following error: DEPRECATION WARNING: Rake tasks in vendor/plugins/admin_data/tasks, vendor/plugins/admin_data/tasks, and vendor/plugins/admin_data/tasks are deprecated. Use lib/tasks instead. (called from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/tasks/rails.rb:10) Couldn't drop acs_test : #<ActiveRecord::StatementInvalid: PGError: ERROR: database "acs_test" is being accessed by other users DETAIL: There are 1 other session(s) using the database. 
: DROP DATABASE IF EXISTS "acs_test"> acs_test already exists NOTICE: CREATE TABLE will create implicit sequence "customers_id_seq" for serial column "customers.id" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "customers_pkey" for table "customers" NOTICE: CREATE TABLE will create implicit sequence "service_plans_id_seq" for serial column "service_plans.id" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "service_plans_pkey" for table "service_plans" /usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/customer_test.rb" "test/unit/service_plan_test.rb" "test/unit/helpers/dashboard_helper_test.rb" "test/unit/helpers/customers_helper_test.rb" "test/unit/helpers/service_plans_helper_test.rb" /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1994:in `method_missing_without_paginate': undefined method `validates_presence_of_at_least_one_field' for #<Class:0xb7076bd0> (NoMethodError) from /usr/lib/ruby/gems/1.8/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:170:in `method_missing' from /home/george/projects/advancedcomfortcs/app/models/customer.rb:3 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:158:in `require' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:265:in `require_or_load' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:224:in `depend_on' from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:136:in `require_dependency' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:414:in `load_application_classes' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `each' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `load_application_classes' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `each' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `load_application_classes' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:197:in `process' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `send' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `run' from /home/george/projects/advancedcomfortcs/config/environment.rb:9 from ./test/test_helper.rb:2:in `require' from ./test/test_helper.rb:2 from ./test/unit/customer_test.rb:1:in `require' from ./test/unit/customer_test.rb:1 from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `load' from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5 from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `each' from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5 rake aborted! Command failed with status (1): [/usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ru...] (See full trace by running task with --trace) It seems to have stepped on will_paginate somehow. Does anyone have any suggestions? Is there another way to do the validation I'm attempting to do? Thanks, George

    Read the article

  • Deleting SQLite rows automatically/periodically

    - by anselmophil
    Hi guys! I have an application that displays tasks, and I created an option in the Settings menu that lets you choose after how many days (Never, 10 days, 20 days or 30 days) these tasks should be automatically deleted. So when I open up the app, a method will be called that checks whether there are any tasks to be deleted. I did this to challenge myself, but I'm hitting my head against the wall and can't come up with anything. So here I am in need of help! A little more information: I was trying to do something with Java/Android code, but from past searches I saw people dealing with this using SQL functions. Is that better? Thanks in advance, guys!

    Read the article

  • Rails: restful authentication setup help

    - by SuperString
    Hi, I downloaded the plugin from http://github.com/techweenie/restful-authentication.git and then ran rails generate plugin authenticated user session. This is the result I got: create vendor/plugins/authenticated create vendor/plugins/authenticated/MIT-LICENSE create vendor/plugins/authenticated/README create vendor/plugins/authenticated/Rakefile create vendor/plugins/authenticated/init.rb create vendor/plugins/authenticated/install.rb create vendor/plugins/authenticated/uninstall.rb create vendor/plugins/authenticated/lib create vendor/plugins/authenticated/lib/authenticated.rb invoke test_unit inside vendor/plugins/authenticated create test create test/authenticated_test.rb create test/test_helper.rb Then I tried to run rake db:migrate, but I got an error that says rake tasks in restful-authentication/tasks/auth.rake are deprecated. Use lib/tasks instead. I am new to Rails and tried looking online, but things seem to be outdated. Please help!

    Read the article

  • Why is sql server giving a conversion error when submitting date.today to a datetime column?

    - by kpierce8
    I am getting a conversion error every time I try to submit a date value to sql server. The column in sql server is a datetime and in vb I'm using Date.today to pass to my parameterized query. I keep getting a sql exception Conversion failed when converting datetime from character string. Here's the code Public Sub ResetOrder(ByVal connectionString As String) Dim strSQL As String Dim cn As New SqlConnection(connectionString) cn.Open() strSQL = "DELETE Tasks WHERE ProjID = @ProjectID" Dim cmd As New SqlCommand(strSQL, cn) cmd.Parameters.AddWithValue("ProjectID", 5) cmd.ExecuteNonQuery() strSQL = "INSERT INTO Tasks (ProjID, DueDate, TaskName) VALUES " & _ " (@ProjID, @TaskName, @DueDate)" Dim cmd2 As New SqlCommand(strSQL, cn) cmd2.CommandText = strSQL cmd2.Parameters.AddWithValue("ProjID", 5) cmd2.Parameters.AddWithValue("DueDate", Date.Today) cmd2.Parameters.AddWithValue("TaskName", "bob") cmd2.ExecuteNonQuery() cn.Close() DataGridView1.DataSource = ds.Projects DataGridView2.DataSource = ds.Tasks End Sub Any thoughts would be greatly appreciated.
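    The exception is consistent with the column list and the VALUES placeholders in the INSERT being in different orders: the columns are (ProjID, DueDate, TaskName) while the parameters are listed as (@ProjID, @TaskName, @DueDate), so the string "bob" ends up bound to the datetime DueDate column. A minimal C# sketch of the aligned statement (table and column names taken from the question; the same fix applies in the VB code):

        const string insertSql =
            "INSERT INTO Tasks (ProjID, DueDate, TaskName) VALUES (@ProjID, @DueDate, @TaskName)";

        using (var cmd2 = new SqlCommand(insertSql, cn))
        {
            cmd2.Parameters.AddWithValue("@ProjID", 5);
            cmd2.Parameters.AddWithValue("@DueDate", DateTime.Today); // binds cleanly to a datetime column
            cmd2.Parameters.AddWithValue("@TaskName", "bob");
            cmd2.ExecuteNonQuery();
        }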

    Read the article

  • Nested ng repeat not working

    - by Rodrigo Fonseca
    OK, I have two ng-repeats here. The first (board in boards) works fine, no problem, but in the second (task in tasks), when I try to display "{{task.title}}" I get nothing, even though all of its styling renders... Here is my model: $scope.tasks = [{boardIndex: 0, title: "test"}, {boardIndex: 1, title: "test2"}]; Here is my code (it's in Jade): section(data-ng-repeat="board in boards", ng-cloak) .board_header div(data-ng-controller="AddTaskBtnController") i.add_task_btn.fa.fa-plus-square-o.fa-2x(ng-click='setSelected(board.id)', ng-class="{icon_add_hover: isSelected(board.id)}") h2(data-ng-bind="board.title") .content_board .task(data-ng-repeat="task in tasks", data-ng-if="task.boardIndex == board.id", data-ng-controller='TaskController', data-ng-hide='modeTask', data-ng-init='setTaskId()') .user_icon_task i.fa.fa-user.fa-3x.icon-user-not-selected .quest_task .puzzle_task(data-ng-hide='modeTask') i.fa.fa-check-circle-o.fa-lg h2 {{task.title}} ul.icons_details_task_wrapper li i.fa.fa-check-circle-o span.icon_counter li.pull_left i.fa.fa-puzzle-piece span.icon_counter ul.task_details_wrapper li.task_priority(data-ng-show='goal.selectedDrawAttention', data-ng-click='toggleSelected()', data-ng-class='{draw_attention_selected: goal.selectedDrawAttention }', style='cursor: inherit;') i.fa.fa-eye li.task_priority i.fa .task_time_ago span(am-time-ago='message.time')

    Read the article

  • Is Work Stealing always the most appropriate user-level thread scheduling algorithm?

    - by Il-Bhima
    I've been investigating different scheduling algorithms for a thread pool I am implementing. Due to the nature of the problem I am solving I can assume that the tasks being run in parallel are independent and do not spawn any new tasks. The tasks can be of varying sizes. I went immediately for the most popular scheduling algorithm "work stealing" using lock-free deques for the local job queues, and I am relatively happy with this approach. However I'm wondering whether there are any common cases where work-stealing is not the best approach. For this particular problem I have a good estimate of the size of each individual task. Work-stealing does not make use of this information and I'm wondering if there is any scheduler which will give better load-balancing than work-stealing with this information (obviously with the same efficiency). NB. This question ties up with a previous question.

    Read the article

  • How to implement cancellable worker thread

    - by Arnold Zokas
    Hi, I'm trying to implement a cancellable worker thread using the new threading constructs in the System.Threading.Tasks namespace. So far I have come up with this implementation: public sealed class Scheduler { private CancellationTokenSource _cancellationTokenSource; public System.Threading.Tasks.Task Worker { get; private set; } public void Start() { _cancellationTokenSource = new CancellationTokenSource(); Worker = System.Threading.Tasks.Task.Factory.StartNew( () => RunTasks(_cancellationTokenSource.Token), _cancellationTokenSource.Token ); } private static void RunTasks(CancellationToken cancellationToken) { while (!cancellationToken.IsCancellationRequested) { Thread.Sleep(1000); // simulate work } } public void Stop() { try { _cancellationTokenSource.Cancel(); Worker.Wait(_cancellationTokenSource.Token); } catch (OperationCanceledException) { // OperationCanceledException is expected when a Task is cancelled. } } } When Stop() returns I expect Worker.Status to be TaskStatus.Canceled. My unit tests have shown that under certain conditions Worker.Status remains set to TaskStatus.Running. Is this a correct way to implement a cancellable worker thread?
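    One observation about TPL behaviour (offered as a hedge, not a definitive diagnosis): Worker.Wait(_cancellationTokenSource.Token) waits on a token that has just been cancelled, so it can throw and let Stop() return before the task has actually observed the cancellation, at which point Worker.Status can still read TaskStatus.Running. In addition, a task only ends in TaskStatus.Canceled if it throws an OperationCanceledException carrying that token; simply falling out of the loop leaves it RanToCompletion. A minimal sketch of both adjustments:

        public void Stop()
        {
            _cancellationTokenSource.Cancel();
            try
            {
                // Wait on the task itself, not on the already-cancelled token,
                // so Stop() only returns once the task has really finished.
                Worker.Wait();
            }
            catch (AggregateException ex)
            {
                // The cancellation surfaces here as a TaskCanceledException.
                ex.Handle(e => e is OperationCanceledException);
            }
        }

        private static void RunTasks(CancellationToken cancellationToken)
        {
            while (true)
            {
                // Throwing (rather than returning) lets the task transition to Canceled.
                cancellationToken.ThrowIfCancellationRequested();
                Thread.Sleep(1000); // simulate work
            }
        }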

    Read the article

  • change `dest` option on the fly

    - by Rikard
    I have some grunt tasks to compile files and would like to "recycle" them inside different tasks. I am trying to modify the destination directory without success... My idea is something like: grunt.registerTask('bower', ['compile:index', 'compile:core'], function(){ this.options({dest: 'dist/*.js'}); }); The compile:index task runs fine by itself (i.e. when called alone) and has dest: 'index.js'; other tasks have other filenames. I would like to change these inside the bower task, adding a new directory but keeping the filename defined in the original task. Is this possible?

    Read the article

  • Scrum for Team System 2010 - How to use story points instead of Hours

    - by dretzlaff17
    I have installed TFS 2010 using Scrum for Team System v3. The work item templates want you to enter a Product Backlog Item that includes story points, and then you need to add linked tasks as children of the PBI. It is at the task level that you can assign individual team members, update estimated hours left, etc. What is the importance of the story points on the PBI if individual tasks are tracked in hours? Has anyone customized this template so that the child work item tasks use story-point burndowns instead of hours? Also, it would be nice to have the total number of story points from each individual task roll up into the PBI as a read-only field for total story points. Thanks for your time.

    Read the article

  • What is the "task" in twitter Storm parallelism

    - by John Wang
    I'm trying to learn Twitter Storm by following the great article "Understanding the parallelism of a Storm topology". However, I'm a bit confused by the concept of a "task". Is a task a running instance of a component (spout or bolt)? Does an executor having multiple tasks mean the same component is executed multiple times by that executor? Moreover, in a general parallelism sense, Storm will spawn a dedicated thread (executor) for a spout or bolt, but what does an executor (thread) with multiple tasks contribute to parallelism? I think that, since a thread executes sequentially, having multiple tasks in a thread only makes the thread a kind of "cached" resource, which avoids spawning a new thread for the next task run. Am I correct? I may clear up this confusion myself after taking more time to investigate, but you know, we both love stackoverflow ;-) Thanks in advance.

    Read the article

  • How do you keep track of what you have released in production?

    - by systempuntoout
    Typically a production deploy does not involve just a source code update (build) but requires a lot of other important tasks, for example: DB scripts (tables, queries...), configuration files (different between test and production), batches to schedule, executables to move to the correct path, etc. In our company we just send an email to a "release email address" describing the tasks in order: which changesets need to be published (TFS), which SPs need to be updated, DB scripts and so on. I believe there's no magic tool that does these tasks automagically and in order, rollback included; but there's probably something better than email that helps keep track of releases in production. Do you have any tools to suggest or practices to share?

    Read the article

  • Change password in Task Scheduler in script

    - by titanium
    I'm changing the password every month for all the scheduled tasks I created in Task Scheduler, because our security policy expires passwords every month. Due to the increasing number of scheduled tasks I'm creating, changing the password within Task Scheduler eats up a lot of time. My question is: is there a way to change the password in one run from a script, specifying the tasks, DOMAIN\username, and password? I know there's a security risk in putting the password in a script; the password will be removed from the script after running it.
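    One possible approach (a sketch only; it assumes the built-in schtasks.exe utility, whose /Change switch updates the credentials stored with a task, and the task names shown are hypothetical) is to keep the task names in a list and update them all in one run, for example from a small C# console program:

        using System;
        using System.Diagnostics;

        class UpdateTaskPasswords
        {
            static void Main()
            {
                // Hypothetical task names and account; replace with your own.
                string[] taskNames = { "NightlyBackup", "LogCleanup" };
                string account = @"DOMAIN\username";
                string password = "newP@ssw0rd"; // remove from the script after running, as noted above

                foreach (var task in taskNames)
                {
                    // Equivalent to: schtasks /Change /TN "<task>" /RU <account> /RP <password>
                    var psi = new ProcessStartInfo("schtasks.exe",
                        "/Change /TN \"" + task + "\" /RU " + account + " /RP " + password)
                    {
                        UseShellExecute = false
                    };
                    using (var process = Process.Start(psi))
                    {
                        process.WaitForExit();
                        Console.WriteLine(task + ": exit code " + process.ExitCode);
                    }
                }
            }
        }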

    Read the article

  • Sprint to the finish: how to keep all team-members busy in the final days of a Scrum sprint?

    - by sdg
    Given that the tasks in a specific sprint will not divide perfectly across the team and all finish on the same date, what do you do to keep everyone working as the sprint moves into its final stages? Inevitably it seems like there will be one or two people freed up. If all the other tasks are done-done, and the remaining tasks are already underway, then what? Do those team members pick up items from the top of the product backlog to get a head start, since those items are likely to be needed in the next sprint anyway? What do you or your teams do?

    Read the article

  • Calling software modules (Java, Perl, etc.) from R

    - by harshsinghal
    I've recently started using R for Natural Language Processing tasks and find that a lot of applications are available in Java and Perl (for my purposes). For example: A few perl modules are available to find distance measures between words by querying Wordnet. I am aware of the R Wordnet package, but it does not perform the tasks that these CPAN modules do. Many Java packages for NLP are out there, which I'd like to use from within R. I know of rJava, RSPerl, the simple system command amongst others, but I'd like more examples of how I could make calls to Java and Perl applications from R. Recently I tried capturing console output from a perl script. cat( 'print "Hello World\n";',file="hello.pl" ) system(command="c:\Perl64\bin\perl hello.pl") system(command=paste(Sys.getenv("COMSPEC"),"/c","C:\Perl64\bin\perl hello.pl")) None of the above system commands showed 'Hello World' on the R console. I've used "system" before to run perl scripts to perform tasks without wanting to capture console output. Any hints and redirection to other more extensive sources of information would be highly appreciated. Thank you

    Read the article

  • C# - LINQ Statements with OR clauses

    - by user70192
    Hello, I am trying to use LINQ to return a list of tasks that are in one of three states. These states are: 10 - Completed, 11 - Incomplete, 12 - Skipped. The state is available through a property called "TaskStateID". I can do this in LINQ with just one state as shown here: var filteredTasks = from task in tasks select task; // Do stuff with filtered tasks string selectedComboBoxValue = GetFilterComboBoxValue(); if (selectedComboBoxValue == 3) { filteredTasks = filteredTasks.Where(p => p.TaskStateID == 10); // How do I use an 'OR' here to say p.TaskStateID == 10 OR p.TaskStateID == 11 OR p.TaskStateID == 12 } As shown in the comment above, how do I use an 'OR' in a LINQ statement to say p.TaskStateID == 10 OR p.TaskStateID == 11 OR p.TaskStateID == 12? Thank you
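    One way to express this (a minimal sketch using the names from the question; the Contains variant assumes a using System.Linq directive is in scope) is either to chain the comparisons with || inside the predicate, or to keep the states in a collection and test membership:

        // Chain the comparisons in a single predicate...
        filteredTasks = filteredTasks.Where(p => p.TaskStateID == 10
                                              || p.TaskStateID == 11
                                              || p.TaskStateID == 12);

        // ...or, equivalently, test membership in the set of wanted states.
        var wantedStates = new[] { 10, 11, 12 };
        filteredTasks = filteredTasks.Where(p => wantedStates.Contains(p.TaskStateID));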

    Read the article

  • why don't more programming languages have builtin interfaces to the window manager?

    - by Naveen
    Programming is, at its heart, about automating tasks on a computer. Presumably those tasks would normally be done manually by a human. Humans use the computer through the keyboard, mouse, and interaction with the console or the window manager, yet very few languages have built-in functions that provide an interface to these basic computing objects. A notable exception is AutoHotkey, an open-source language on Windows, which provides built-in functions for simple tasks such as: getting pixel information, getting the mouse position, keyboard macros, simulating keystrokes, simulating mouse clicks, and window management. See examples on Rosetta Code. There have been various attempts on Linux, many of which were stopped without explanation. One is the inactive Tcl library "android". Search Google Code for android, lang:tcl

    Read the article

  • What's the best way to return a subset of a list

    - by Pikrass
    I have a list of tasks. A task is defined by a name, a due date and a duration. My TaskManager class handles a std::list<Task> sorted by due date. It has to provide a way to get the tasks due on a specific date. How would you implement that? I think a good way (from an API point of view) would be to provide a std::list<Task>::iterator pair, so I would have a TaskManager::begin(date) method. Do you think this method should get the iterator by iterating from the start of the list until it finds the first task due on that date, or by getting it from a std::map<date, std::list<Task>::iterator> (but then we have to keep it up to date when adding or removing tasks)? And then, how could I implement the TaskManager::end(date) method?

    Read the article

  • Is there a good web-based project management app with scheduling?

    - by Andykiteman
    Ideally something as intuitive as Basecamp, with good usability and accessibility. The best I've seen is huddle.net, but it's still weak in several areas. Must have: Projects - ability to add people & tasks and schedule tasks to people; Calendar - showing when people are busy or available; Role-based access - admins and non-admins; History - ability to look back at all history. Anyone seen a product that's worth a look? Clarification: Must be hosted, i.e. not require my own hardware or IT staff. I'm looking for an app to schedule people with specific tasks at specific times and monitor the outcomes. I'm already using Mingle (for stories), Basecamp (to run the business) and Exceptional (to track bugs). I'm not looking for a bug-tracking system or a story management application (I already looked at VersionOne, but chose Mingle due to its nicer UI). My response to the answer being auto-selected: I still don't feel the answer (chosen for me) is the correct one. It's a useful list but little more, and doesn't provide the solution I was seeking.

    Read the article

  • Nested form child only updates if parent changes.

    - by chap
    In this video (10 sec) you can see that the nested attribute is only updated if its parent model is changed. Using Rails 3.0.0.beta; the full project is on GitHub. Summary of models and form: class Project < ActiveRecord::Base has_many :tasks accepts_nested_attributes_for :tasks end class Task < ActiveRecord::Base belongs_to :project has_many :assignments accepts_nested_attributes_for :assignments end class Assignment < ActiveRecord::Base belongs_to :task end form_for(@project) do |f| Project: f.text_field :name f.fields_for :tasks do |task_form| Task: task_form.text_field :name task_form.fields_for :assignments do |assignment_form| Assignment: assignment_form.text_field :name end end f.submit end

    Read the article

  • SQL Server Editions and Integration Services

    The SQL Server 2005 and SQL Server 2008 product family has quite a few editions now, so what does this mean for SQL Server Integration Services? Starting from the bottom we have the free edition known as Express, and the entry-level Workgroup edition, as well as the new Web edition. None of these three include the full SSIS product, but they do all include the SQL Server Import and Export Wizard, with access to basic data sources but nothing more, so for simple loading and extraction of data this should suffice. You will not be able to build packages though; this is just a one-shot deal aimed at using the wizard on an ad-hoc basis. To get the full power of Integration Services you need to start with Standard edition. This includes the BI Development Studio, for building your own packages, and a fully functional IDE integrated into Visual Studio. (You get the full VS 2005/2008 IDE with the product.) All core functions will be available but with a restricted set of transformations and tasks. The SQL Server 2005 Features Comparison or Features Supported by the Editions of SQL Server 2008 describes Standard edition as having basic transforms, compared to Enterprise which includes the advanced transforms. I think basic is a little harsh considering the power you get with Standard, but the advanced covers the truly ground-breaking capabilities of data mining, text mining and cleansing or fuzzy transforms. The power of performing these operations within your ETL pipeline should not be underestimated, but not all processes will require these capabilities, so it seems like a reasonable delineation. Thankfully there are no feature limitations or artificial governors within Standard compared to Enterprise. The same control flow and data flow engines underpin both editions, with the same configuration and deployment options allowing you to work seamlessly between environments and editions if using the common components. In fact there are no governors at all in SSIS, so whilst the SQL Database engine is limited to 4 CPUs in Standard edition, SSIS is only limited by the base operating system. The advanced transforms only available with Enterprise edition: Data Mining Training Destination, Data Mining Query Component, Fuzzy Grouping, Fuzzy Lookup, Term Extraction, Term Lookup, Dimension Processing Destination, Partition Processing Destination. The advanced tasks only available with Enterprise edition: Data Mining Query Task. So in summary, if you want SQL Server Integration Services, you need SQL Server Standard edition, and for the more advanced tasks and transforms you need SQL Server Enterprise edition. To recap, the answer to the often-asked question is no, SQL Server Integration Services is not available in SQL Server Express or Workgroup editions.

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily. Recursive algorithms, especially ones based on a form of divide and conquer, are often very good candidates for parallelization. This is apparent from a common sense standpoint.  Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data. Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed: Parallel.Invoke( () => { Console.WriteLine("Action 1 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 2 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 3 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); } ); Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively. For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
For example, let’s look at this simple, sequential implementation of quicksort: public static void QuickSort<T>(T[] array) where T : IComparable<T> { QuickSortInternal(array, 0, array.Length - 1); } private static void QuickSortInternal<T>(T[] array, int left, int right) where T : IComparable<T> { if (left >= right) { return; } SwapElements(array, left, (left + right) / 2); int last = left; for (int current = left + 1; current <= right; ++current) { if (array[current].CompareTo(array[left]) < 0) { ++last; SwapElements(array, last, current); } } SwapElements(array, left, last); QuickSortInternal(array, left, last - 1); QuickSortInternal(array, last + 1, right); } static void SwapElements<T>(T[] array, int i, int j) { T temp = array[i]; array[i] = array[j]; array[j] = temp; } Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average. Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to: // Code above is unchanged... SwapElements(array, left, last); Parallel.Invoke( () => QuickSortInternal(array, left, last - 1), () => QuickSortInternal(array, last + 1, right) ); } This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time. This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation: When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system. This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right-left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke. 
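    As a rough sketch of that first approach (the threshold value below is an arbitrary assumption, not a figure from the article), the recursive step can fall back to plain serial calls whenever the remaining partition is small:

        // Code above is unchanged...
        SwapElements(array, left, last);

        const int threshold = 4096; // assumed cut-off; tune for your data and hardware
        if (right - left < threshold)
        {
            // The partition is small: recurse serially and avoid task overhead.
            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }
        else
        {
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1),
                () => QuickSortInternal(array, last + 1, right));
        }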
    The second approach is to track how “deep” in the “tree” we are currently at, and if we are below some number of levels, stop parallelizing.  This is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly. This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to: // Code above is unchanged... SwapElements(array, left, last); if (maxDepth < 1) { QuickSortInternal(array, left, last - 1, maxDepth); QuickSortInternal(array, last + 1, right, maxDepth); } else { --maxDepth; Parallel.Invoke( () => QuickSortInternal(array, left, last - 1, maxDepth), () => QuickSortInternal(array, last + 1, right, maxDepth)); } We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core. With this final change, my timings are much better.  On average, I get the following timings:
    Framework via Array.Sort: 7.3 seconds
    Serial Quicksort Implementation: 9.3 seconds
    Naive Parallel Implementation: 14 seconds
    Parallel Implementation Restricting Depth: 4.7 seconds
    Finally, we are now faster than the framework’s Array.Sort implementation.

    Read the article
