Search Results

Search found 9296 results on 372 pages for 'scheduled task'.


  • rake migration aborted: could not find table 'roles'

    - by user464180
    I just inherited code that I'm attempting to run the migrations for, but I keep getting a rake aborted error. I've come across others that have what appear to be similar issues, but most involved Heroku, and I'm trying to run this locally (to start). I've tried troubleshooting using both PostgreSQL and SQLite, and both produce the same issue. The "roles" table referenced is created by the second migration called, so I'm having a hard time figuring out what is causing it to not get built. Any and all assistance is greatly appreciated. Thanks in advance.

    Here's the roles migration:

        class CreateRoles < ActiveRecord::Migration
          def change
            create_table :roles do |t|
              t.string :name
              t.timestamps
            end
          end
        end

    Here is the trace for SQLite:

        ** Invoke db:migrate (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        rake aborted!
        Could not find table 'roles'
        /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active_record/connection_adapters/sqlite_adapter.rb:470:in `table_structure'
        /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active_record/connection_adapters/sqlite_adapter.rb:351:in `columns'
        /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active_record/connection_adapters/schema_cache.rb:12:in `block in initialize'
        /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active_record/model_schema.rb:228:in `yield'
        /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active_record/model_schema.rb:228:in `default'
        /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active_record/model_schema.rb:228:in `columns'
        /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active_record/model_schema.rb:248:in `column_names'
        /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active_record/model_schema.rb:261:in `column_methods_hash'
        /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active_record/dynamic_matchers.rb:69:in `all_attributes_exists?'
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/dynamic_matchers.rb:27:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/initializ ers/constants.rb:1:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `block in load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:588:in `block (2 levels) in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `block in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `instance_exec' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:55:in `block in run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:136:in `initialize!' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/rail tie/configurable.rb:30:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/environme nt.rb:5:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `block in require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:103:in `require_environment!' 
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:292:in `block (2 levels) in initialize_tasks' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `call' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `block in execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :158:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :176:in `block in invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :157:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :144:in `invoke' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:116:in `invoke_task' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block (2 levels) in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:88:in `top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:66:in `block in run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:63:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/bin/rake:33:in ` <top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `<main>' Tasks: TOP => db:migrate => environment Here is the trace for PostgreSQL: ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment rake aborted! 
PG::Error: ERROR: relation "roles" does not exist LINE 4: WHERE a.attrelid = '"roles"'::regclass ^ : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a .attnotnull FROM pg_attribute a LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum WHERE a.attrelid = '"roles"'::regclass AND a.attnum > 0 AND NOT a.attisdropped ORDER BY a.attnum /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:1106:in `async_exec' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:1106:in `exec_no_cache' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:650:in `block in exec_query' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/abstract_adapter.rb:280:in `block in log' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/notifications/instrumenter.rb:20:in `instrument' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/abstract_adapter.rb:275:in `log' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:649:in `exec_query' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:1231:in `column_definitions' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:845:in `columns' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/schema_cache.rb:12:in `block in initialize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `yield' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `default' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `columns' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:248:in `column_names' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:261:in `column_methods_hash' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/dynamic_matchers.rb:69:in `all_attributes_exists?' 
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/dynamic_matchers.rb:27:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/initializ ers/constants.rb:1:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `block in load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:588:in `block (2 levels) in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `block in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `instance_exec' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:55:in `block in run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:136:in `initialize!' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/rail tie/configurable.rb:30:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/environme nt.rb:5:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `block in require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:103:in `require_environment!' 
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:292:in `block (2 levels) in initialize_tasks' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `call' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `block in execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :158:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :176:in `block in invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :157:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :144:in `invoke' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:116:in `invoke_task' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block (2 levels) in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:88:in `top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:66:in `block in run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:63:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/bin/rake:33:in ` <top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `<main>' Tasks: TOP => db:migrate => environment
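    Both traces point at config/initializers/constants.rb:1 calling a dynamic finder (via dynamic_matchers.rb) while the Rails environment boots, which forces ActiveRecord to load the columns of the roles table before rake db:migrate has had a chance to create it. As a rough illustration of the mechanism, here is a minimal sketch of a guarded initializer; the Role model, the DEFAULT_ROLE constant and the find_by_name lookup are assumptions, since the actual contents of constants.rb are not shown in the question.

        # config/initializers/constants.rb -- hypothetical sketch, not the poster's code.
        # Guard the model lookup so the app can boot (e.g. during rake db:migrate on a
        # fresh database) before the roles table exists.
        DEFAULT_ROLE =
          if ActiveRecord::Base.connection.table_exists?(:roles)
            Role.find_by_name("default")   # dynamic finder, as hinted by dynamic_matchers.rb in the trace
          else
            nil                            # table not created yet; migrations can now run
          end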

    Read the article

  • Ant MXMLC task with arbitrary list of source/lib paths?

    - by sascha
    Does anyone know of a way to use the mxmlc task of the Flex Ant tasks with a user-definable list of source path or library paths? The idea is that the user can define an arbitrary list of source paths and/or library (swc) paths into an Ant properties file and the build file takes these values and evaluates them for use in the mxmlc task. Just wondering if there are any tricks (maybe utilizing filtering/string replacing) to get this working?

    Read the article

  • Non Document Centric SharePoint Workflow

    - by Dan Revell
    SharePoint workflows are document centric in that the base thing the workflow runs on has to be a thing, be it a document or just a list item. The workflow itself is task based, i.e. stuff a user has to do. I can put any sort of code I want into these tasks and even put complex InfoPath forms in for the user to perform the task. This has been fine on all my previous workflows. But what if I want the tasks to be actual official forms themselves? The item that the workflow runs on is just some abstract concept, like an event. An example could be that an accident has happened. There isn't a single accident form, but a whole set of forms that need to be completed by different people. Task forms aren't really a nice way to go, because they lock all the forms into the task list. You can only access the forms by not deleting the tasks when they are complete and going to the workflow summary and following the task links to the InfoPath forms, or by going straight to the tasks list and filtering on particular "accidents". These are official documents, so ideally there would be a library for each type of document and the workflow would orchestrate the completion of the right forms. It would mean each task would have to create a new blank form and then link the user to that form. The user would go complete the form but then have to go back to the task form and click "yes, I've completed it" before the workflow could progress. That is, short of the workflow monitoring the forms library for some completion trigger. But then the user experience gets messy: clicking the link in the task email, opening the InfoPath task form, clicking the link in the subsequent InfoPath library form, and then returning back through these forms on completion. It just gets messy trying to retrofit this non document centric sort of workflow into SharePoint. I would really appreciate any input on what might be the best way to do this:

    1. Store the forms as task forms.
    2. Store the forms as library forms and create/link them from the task forms.
    3. Store the forms as different InfoPath views and use a forms library; the workflow would set variables that progress the view the InfoPath form shows.
    4. Use the same form template for both task forms and a forms library, and when a task form is complete, copy the XML into the forms library to have an official record outside of the workflow.

    Thanks

    Read the article

  • Troubles with list "dropdowns" and which list item gets the dropdown

    - by Andrew
    I'm working on a project for an MMO "guild" that gives members of the guild randomly generated tasks for the game. They can "block" three tasks from being assigned. The lists will look something like this:

        <ul>
          <li class="blocked">Task that is blocked</li>
          <li class="blocked-open">Click to block a task</li>
          <li class="blocked-open">Click to block a task</li>
        </ul>

    The blocked-open class means they haven't chosen a task to block yet. The blocked class means they've already blocked a task. When they click the list item, I want this to appear:

        <ul class="tasks-dropdown no-display">
          <li><h1>Click a Task to Block</h1></li>
          <ul class="task-dropdown-inner">
            <?php
            // output all tasks
            foreach ($tasks as $task) {
                echo '<li class="blocked-option"><span id="'.$task.'">'.$task.'</span></li>';
            }
            ?>
            <br class="clear" />
          </ul>
        </ul>

    I don't quite know how to show that dropdown under only the .blocked-open line item the user clicked. My jQuery looked like this before I became confused:

        $("li.blocked-open").click(function() {
          $("ul.no-display").slideToggle("900");
        });

        $(".blocked-option span").click(function() {
          var task = $(this).attr('id');
          alert("You have blocked: " + task);
          location.reload(true);
        });

    I tested it by putting the dropdown under a single line item in the code, and it worked fine, but when I have more than one dropdown in the code, clicking on one line item toggles all the dropdowns. I'm not sure what to do. :-p
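    The reason every dropdown toggles is that $("ul.no-display") matches every such list on the page, not just the one belonging to the clicked item. Below is a minimal sketch of the usual fix; it assumes the dropdown <ul> is rendered inside the clicked <li> (use .next() instead of .find() if it is emitted as a sibling), and .on() assumes jQuery 1.7+ (.delegate() is the older equivalent).

        // Scope the toggle to the dropdown belonging to the clicked list item.
        $("li.blocked-open").click(function () {
          $(this).find("ul.no-display").slideToggle(900);
        });

        // Delegate the inner click so it still works for dropdown content that is
        // shown and hidden dynamically, and stop it from re-toggling the menu.
        $("li.blocked-open").on("click", ".blocked-option span", function (e) {
          e.stopPropagation();
          var task = $(this).attr("id");
          alert("You have blocked: " + task);
          location.reload(true);
        });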

    Read the article

  • How to assign Application Icon that will display in Task bar?

    - by viky
    I am working on a WPF desktop application. Whenever I run it, it shows a window and an associated tab in the taskbar (normal Windows behaviour). My problem is that the tab uses Windows' icon for an unknown file type. I tried the Icon property of Window, and the icon does get assigned, but the taskbar tab still initially shows the unknown-file-type icon and only changes to the assigned icon once the window has finished loading. I want the icon there from the beginning. Any help?

    Read the article

  • asp.net Web server control with child controls, event not firing

    - by bleeeah
    I have a simple web control (TaskList) that can have children (Task) which inherit from LinkButton and that can be added declaratively or programmatically. This works OK, but I can't get the OnClick event of a Task to fire in my code-behind. The code:

        [ToolboxData("<{0}:TaskList runat=\"server\"> </{0}:TaskList>")]
        [ParseChildren(true)]
        [PersistChildren(false)]
        public class TaskList : System.Web.UI.Control
        {
            //[DefaultProperty("Text")]
            public TaskList() { }

            private List<Task> _taskList = new List<Task>();
            private string _taskHeading = "";

            public string Heading
            {
                get { return this._taskHeading; }
                set { this._taskHeading = value; }
            }

            [NotifyParentProperty(true)]
            [PersistenceMode(PersistenceMode.InnerProperty)]
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
            public List<Task> Tasks
            {
                get { return this._taskList; }
                set { this._taskList = value; }
            }

            protected override void CreateChildControls()
            {
                foreach (Task task in this._taskList)
                    this.Controls.Add(task);
                base.CreateChildControls();
            }

            protected override void Render(HtmlTextWriter writer)
            {
                writer.Write("<h2>" + this._taskHeading + "</h2>");
                writer.Write("<div class='tasks_container'>");
                writer.Write("<div class='tasks_list'>");
                writer.Write("<ul>");
                foreach (Task task in this._taskList)
                {
                    writer.Write("<li>");
                    task.RenderControl(writer);
                    writer.Write("</li>");
                }
                writer.Write("</ul>");
                writer.Write("</div>");
                writer.Write("</div>");
            }
        }

        public class Task : LinkButton
        {
            private string _key = "";

            public string Key
            {
                get { return this._key; }
                set { this._key = value; }
            }
        }

    Markup:

        <rf:TaskList runat="server" ID="tskList" Heading="Tasks">
            <Tasks>
                <rf:Task Key="ba" ID="L1" Text="Helllo" OnClick="task1_Click" runat="server" />
            </Tasks>
        </rf:TaskList>

    The OnClick event handler task1_Click never fires when the task is clicked (although a postback occurs).
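    For composite controls like this, the postback event usually fails to reach the LinkButton because the children are not reliably part of the control tree (with stable IDs) at the point in the page life cycle where ASP.NET raises postback events. Below is a rough sketch of the commonly suggested shape; it is an assumption that this is what is biting here, it reuses the Task class from the question, and the rendering is trimmed to the essentials.

        using System;
        using System.Collections.Generic;
        using System.Web.UI;

        // Sketch only: implement INamingContainer so each child Task gets a unique
        // UniqueID, and make sure the children are created before postback events
        // are processed by calling EnsureChildControls early in the life cycle.
        public class TaskList : Control, INamingContainer
        {
            private readonly List<Task> _taskList = new List<Task>();

            [PersistenceMode(PersistenceMode.InnerProperty)]
            public List<Task> Tasks { get { return _taskList; } }

            protected override void CreateChildControls()
            {
                Controls.Clear();
                foreach (Task task in _taskList)
                    Controls.Add(task);          // children must live in the control tree
                base.CreateChildControls();
            }

            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
                EnsureChildControls();           // build the tree before events are raised
            }

            protected override void Render(HtmlTextWriter writer)
            {
                EnsureChildControls();
                writer.Write("<ul>");
                foreach (Task task in _taskList)
                {
                    writer.Write("<li>");
                    task.RenderControl(writer);  // emits the LinkButton's own postback link
                    writer.Write("</li>");
                }
                writer.Write("</ul>");
            }
        }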

    Read the article

  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project. Product Backlog and the Product Owner Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog. How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person and that single person has absolute authority over the Product Backlog. Sprints and the Sprint Backlog So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog — the shopping cart — and the team is making good progress. Unfortunately, however, half-way through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their developmental efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths such as one month Sprints, two-week Sprints, and one week Sprints. 
For high-stress, time critical projects, teams typically choose shorter sprints such as one week sprints. For more mature projects, longer one month sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them just as long as the team is consistent. You should pick a Sprint length and stick with it. Daily Scrum During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or, both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time. In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions: 1. What have you done since yesterday? 2. What do you plan to do today? 3. Any impediments in your way? During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short — typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team. Stories and Tasks Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks: · Enable users to add and remote items from shopping cart · Persist the shopping cart to database between visits · Redirect user to checkout page when Checkout button is clicked During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do tomorrow then the developer should be referring to a task. Stories are owned by the Product Owner and a story is all about business value. In contrast, the tasks are owned by the developer team and a task is all about implementation details. A story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks. 
Neglecting to break stories into tasks can lead to “Never Ending Stories” If you don’t break a story into tasks, then you can’t know how much of a story has actually been completed because you don’t have a clear idea about the implementation steps required to complete the story. Scrumboard During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing. Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard. Estimating Stories and Tasks Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that “the project will be done when it is done” because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they can have an estimate of how long it will take to complete the project. Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don’t know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like Finding a Cure for Cancer than Building a Brick Wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don’t know how to cure cancer, I need to do a lot of research here, so I cannot even begin to estimate how long it will take. So developers hate to provide estimates, but the Product Owner and other product stakeholders, have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use Coffee Cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series. 
These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team. When a Sprint starts, the developer team devotes more time to thinking about the Stories in a Sprint and the developer team breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don’t create a task until you are just about ready to start working on a task. A task is something that you should be able to create within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story. Burndown Charts In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project and you use Sprint Burndown charts to represent the overall remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts. You can either represent the remaining work in a Sprint with Story Points or with task hours (the following image, taken from Wikipedia, uses hours). When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts typically do not always slope down over time. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing a hill then you know that you are in trouble. Team Velocity Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic. Rarely do developers acknowledge the physical limitations of reality. So Project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by. In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints. 
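    As a small illustration of that calculation (my sketch, not from the article): Team Velocity is simply an average of the Story Points completed in recent Sprints, and it can be used as a sanity check before committing to the next Sprint Backlog. The numbers below are made up.

        using System;
        using System.Linq;

        class VelocityCheck
        {
            static void Main()
            {
                int[] completedPointsPerSprint = { 21, 18, 24 };          // hypothetical history
                double velocity = completedPointsPerSprint.Average();     // ~21 points per Sprint

                int proposedCommitment = 30;                              // points the team wants to take on
                if (proposedCommitment > velocity)
                    Console.WriteLine("Warning: committing to more points than the team usually completes.");
            }
        }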
Knowing the Team Velocity is important during the Sprint Planning meeting when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity then you can avoid committing to do more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint. Scrum Master There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I’v e already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the stories. I’ve also described the role of the developer team. The members of the developer team do the work of implementing the stories by breaking the stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master. The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner. Using SonicAgile SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories. You can estimate the size of the Stories using different Story Point units such as Shirt Sizes and Coffee Cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint. In other words, it warns you when it thinks you are overcommitting in a Sprint. SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com. Summary In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of tasks. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the daily scrum to coordinate their work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts. You learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project. 
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

    Read the article

  • The curious case of SOA Human tasks' automatic completion

    - by Kavitha Srinivasan
    A large south-Asian insurance industry customer using Oracle BPM and SOA ran into this. I have survived this ordeal previously myself but didn't think to blog it then. However, it seems like a good idea to share this knowledge with this reader community, so here goes.

    Symptom: A human task (in a SOA/BPEL/BPM process) completes automatically when it should have been assigned to a proper user. There are no stack traces and no related exceptions in the logs.

    Why: The product is designed to treat a human task that has no assignees as one that is eligible for completion, and hence no warning/error messages are recorded in the logs.

    Usecase variant: A variant of this usecase, where an assignee does not exist in the repository, is treated as a recoverable error. One can find these among the 'pending recovery' instances in EM and reactivate the task by changing the assignees in the BPM workspace as a process owner/administrator. But back to the usecase where tasks get completed automatically...

    When: This happens when the users/groups assigned to a task are 'empty' or null. This has been seen only on tasks whose assignees are derived from an assignment expression, i.e. at runtime an XPath is used to determine who to assign the task to. (This should not happen if task assignees are populated via swim-lane roles.)

    How to detect this in EM: For instances that are auto-completed this way, one will notice in the Audit Trail that the 'outcome' of the task is empty. The 'acquired by' element will also show as empty/null. Enabling the oracle.soa.services.workflow.* logger in EM should print more verbose messages about this.

    How to fix this: The application code needs two fixes.

    1. Input to the human task: the XSLT/XPath used to set the task assignee, and the process itself, should be enhanced to handle nulls better. For example: if no data is found, set the assignees to an alternate value, force default assignees, etc.
    2. Output from the human task: additionally, in the application code, check that the 'outcome' of the human task is not null. If it is null, route the task to be performed again after setting the assignee correctly. Beginning with PS4FP, one should be able to use 'grab' to route back to the task so it fires again.

    Hope this helps.

    Read the article

  • MSSQL: Copying data from one database to another

    - by DigiMortal
    I have a database with data imported from another server using the import and export wizard of SQL Server Management Studio. There is also an empty database with the same tables, but it also has primary keys, foreign keys and indexes. How do you get the data from the first database to the other? Here is the description of my crusade, and believe me, it is not a nice one.

    Bugs in the import and export wizard

    There are some awful bugs in the import and export wizard that make data imports and exports possible only in a very limited manner: the wizard is not able to analyze foreign keys, and the wizard always wants to create tables, whatever you say in the settings. The result is a faulty and useless package. Now let's go step by step and make things work in our scenario.

    Database

    There are two databases. Let's name them like this:

    PLAIN – contains data imported from the remote server (no indexes, no keys, no nothing, just plain dumb data)
    CORRECT – empty database with the same structure as the remote database (indexes, keys and everything else, but no data)

    Our goal is to get the data from PLAIN to CORRECT.

    1. Create the import and export package

    At this point we will create the faulty SSIS package using SQL Server Management Studio. Run the import and export wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let's say, CORRECT-2. Make sure you enable identity insert. Make sure there are no views selected. Make sure you don't let the package create tables (you can miss this step because it wants to create tables anyway). Save the package to SSIS.

    2. Modify the import and export package

    Now let's clean up the package and remove all the faulty crap. Connect SQL Server Management Studio to the SSIS instance. Select the package you just saved and export it to your hard disc. Run Business Intelligence Studio. Create a new SSIS project (DON'T MISS THIS STEP). Add the package from disc as an existing item to the project and open it. Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks, or modify all preparation SQL tasks so the existence of a table is checked before the table is created (yes, you have to do it manually).

    Add a new Execute SQL task as the first task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
        GO

        EXEC sp_MSForEachTable 'DELETE FROM ?'
        GO

    Save the task. Add a new Execute SQL task as the last task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
        GO

    Save the task. Now connect the first Execute SQL task with the first Data Flow task, and the last Data Flow task with the second Execute SQL task. Then move to the Package Explorer tab and change the connections under the Connection Managers folder: make the source connection use database PLAIN and the destination connection use database CORRECT. Save the package and rebuild the project. Update the package using SQL Server Management Studio. Some hints: make sure you take the package from the solution folder, because it is saved there now; don't overwrite the existing package; use a numeric suffix and let Management Studio create a new version of the package. Now you are done with your package. Run it to test it and clean out all the errors you find.

    TRUNCATE vs DELETE

    You can see that I used DELETE FROM instead of TRUNCATE. Why?
    Because TRUNCATE has some nasty limits (taken from MSDN): "You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view." As I am not sure what tables you have and how they are used, I provided the solution that should work for all scenarios. If you need better performance, then in some cases you can use TRUNCATE TABLE instead of DELETE.

    Conclusion

    My conclusion is bitter this time, although I am a very positive guy. It is A.D. 2010 and we still have to write stupid hacks for simple things. Simple tools that existed before are long gone and we have to live with mysterious bloatware that is our only choice when using the default tools. If you take a look at the length of this posting and the number of steps I had to do for one easy thing, you should treat it as a signal that something has gone wrong in recent years. Although I got my job done, I would still be happier if the out-of-the-box tools became more intelligent one day.

    References

    T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli)
    TRUNCATE TABLE (MSDN Library)
    Error Handling in SQL 2000 – a Background (Erland Sommarskog)
    Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher)
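    One side note on the final constraint step above (my addition, not part of the original post): re-enabling constraints with CHECK CONSTRAINT ALL does not re-validate the rows that were copied in while the constraints were disabled, and SQL Server leaves those constraints marked as not trusted. If you also want the copied data validated, the usual variation is:

        -- Re-enable and re-check existing rows so the constraints end up trusted.
        -- Plain "CHECK CONSTRAINT ALL" only re-enables them without validating data.
        EXEC sp_MSForEachTable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'
        GO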

    Read the article

  • Running Teamsite User Admin tool IWUSERADM.exe from ASP.NET

    - by Narendra Tiwari
    It has really been a head-scratching task for me. I've tried many options but nothing worked. Finally I found a workaround on Google to achieve this with TaskScheduler.

    PROBLEM: When we run the Teamsite user administration command line tool IWUSERADM.exe through ASP.NET, it gives the following error:

        Application popup: cmd.exe - Application Error : The application failed to initialize properly (0xc0000142). Click on OK to terminate the application.

    CAUSE: No specific cause; it seems to be a bug, supposed to be resolved with this Microsoft patch: http://support.microsoft.com/kb/960266. There is nothing related to a permission issue; my web application is impersonated with an administrator account. Of course, running a bat file from an admin account is a potential security threat, but for this scenario let's confine our discussion to running the command line tool.

    RESOLUTION: I have not tried the patch, as I am not permitted to run it on the server. Below are the steps to achieve the requirement.

    1. Create a batch file which runs IWUSERADM.exe:

        echo - Add Teamsite User
        CD E:\Appli\GN00\iw-home\bin
        iwuseradm add-user %1

    2. Temporarily create a scheduled task and run the .bat file via that scheduled task from ASP.NET code, using the TaskScheduler library: http://www.codeproject.com/KB/cs/tsnewlib.aspx.

    3. Here is the function:

        private int AddTeamsiteUser(string strBatchFilePath, string strUser)
        {
            // Get a ScheduledTasks object for the local computer.
            ScheduledTasks st = new ScheduledTasks();

            // Create a task
            Task t;
            try
            {
                t = st.CreateTask("~AddTeamsiteUser");
            }
            catch
            {
                throw new Exception("Schedule Task ~AddTeamsiteUser already exists.");
            }

            t.ApplicationName = strBatchFilePath;
            t.Parameters = strUser;
            t.Comment = "Adding user to Teamsite Application";

            // Set the account under which the task should run.
            t.SetAccountInformation(yourLogin, yourPassword);

            t.Save();
            t.Run();
            Thread.Sleep(2000); // for sync issue

            // Remove the scheduled task
            st.DeleteTask("~AddTeamsiteUser");
            return t.ExitCode;
        }

    Below are a few resources related to the above scenario:

    - Task Scheduler Class Library for .NET: http://www.codeproject.com/KB/cs/tsnewlib.aspx
    - Run a .BAT file from ASP.NET: http://codebetter.com/blogs/brendan.tompkins/archive/2004/05/13/13484.aspx
    - TaskScheduler Class: http://msdn.microsoft.com/en-us/library/system.threading.tasks.taskscheduler.aspx
    - Application hangs while running iwuseradm.exe through ASP.NET: http://bytes.com/topic/asp-net/answers/733098-system-diagnostics-process-hangs

    Read the article

  • How to do end task similar to that in Windows?

    - by Rohit Bansal
    Sometimes Ubuntu freezes and I have no option other than to power off the system directly. Is there some remedy, or a way like 'Ctrl + Alt + Del' in Windows? That would be very helpful. I need a way out, other than directly powering off my system, in times of gross failure. Shutting down this way always creates a fear of crashing my system or losing some data, which is unacceptable.

    Read the article

  • In this context with views in a tree, which class should perform the task?

    - by Jhonny 8
    Imagine that I have this context: a main view containing a table, which contains some cells, each of them with their own controller and view files. In the main view, I have a "Person" object with 3 different IDs. Depending on certain conditions (let's say, the time of day), I have to choose one of them and display it in the cell. My question is: should the main view pass the whole object to the table, and the table to the cell, with the cell calculating the ID that will be shown? Or should the main view calculate this parameter and send the result to the table, which passes it on to the cell? This is a question focused on OO design: which of these approaches is more suitable in an OO design, and why?

    Read the article

  • Why is purchasing Microsoft licences such a daunting task? [closed]

    - by John Nevermore
    I've spent 2 frustrating days jumping through hoops and browsing through different local e-shops for VS (Visual Studio) 2010 Pro and WHS (Windows Home Server) 2011 FPP licenses. I found jack; or, to be more precise, the closest I found in my country were WHS 2011 OEM licenses, after multiple emails sent to individuals found on the Microsoft partners page. The question being: why is it so difficult to get your hands on Microsoft licenses as an individual? Sure, you can get the latest end-user operating systems from most shops, but when it comes to development tools or server software you are left dry. And companies that do sell licenses most of the time don't even put up pricing or a self-service environment for buying them; you need a hawk's eye for that shiny little Microsoft partner logo and have to spam through a bunch of emails, not knowing whether you can count on them to get the license or not. Sure, I could whip out my credit card and buy the VS 2010 license in the online Microsoft Shop. Well whippideegoddamndoo, they sell that, but they don't sell WHS 2011 licenses. Why does a company make it so hard to buy its products? Let's not even talk about the licensing itself being a pain.

    Read the article

  • The answer to the unfathomable question: what is the meaning of error value 2147943645?

    - by Jim Lahman
    I scheduled a task to perform a Windows backup of a single disk on my server. When I tested it, the task ran successfully: no problems, no errors, just as I expected. However, when the task ran as scheduled, it failed with error value 2147943645. I wondered: was this the answer to life, the universe and everything in it? No. That is 42. After doing some research and reviewing the task configuration, I realized that the task will only run if the user is logged on. So this was the answer! I have to configure the task to run whether the user is logged on or not. Otherwise, I'll get that nasty error value.
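    For what it is worth (this decoding is my addition, not part of the original post), the value 2147943645 is the HRESULT 0x800704DD, which wraps Win32 error 1245, documented as ERROR_NOT_LOGGED_ON. That lines up with the "run only when user is logged on" setting described above. A tiny sketch of the conversion:

        using System;

        class DecodeHResult
        {
            static void Main()
            {
                uint hr = 2147943645;             // value reported by Task Scheduler
                uint win32Code = hr & 0xFFFF;     // the low 16 bits carry the Win32 error code
                Console.WriteLine("0x{0:X8} -> Win32 error {1}", hr, win32Code);  // 0x800704DD -> 1245
            }
        }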

    Read the article

  • How to deal with a fellow programmer who likes to delegate tasks when there is no support from the boss [closed]

    - by Rudy
    I have a problem with a fellow programmer. We are currently working together on a small project that needs to be shipped every 2 weeks. She has a tendency to ask for help with every issue she is facing, whether it's a compile error, an algorithm problem or even a sync/merge issue caused by herself. She does not even bother to check Google or try to find out by herself. I can be asked to help her 5-10 times a day. Every day her husband keeps calling (4-6 times a day), and most of the code delivered by her is actually incorrect. Today she framed me for sending the wrong delivery product. She went home after lunch on the delivery day without telling the PM and the other team members, and the code she committed does not work at all. It's not even tested. I had no choice but to roll back her code and clean it up just to be able to run the product. I have warned her about her defective code for almost 3 iterations. She said that when she was not around, I should be able to test her module for her. I snapped and yelled that I am not her slave, and reported it directly to my boss. However, my boss is not a person who manages or cares about software quality. The most important thing to my boss is delivery of the product, whether it is tested or not. He can even ask us to deliver something to the client that has not been tested by QA, on the next day. Most of our suggestions are not followed by him. He even asked me to apologize to her because I snapped. I am tired of the whole situation. This kind of thing keeps repeating. I have savings to be able to survive for 6 months, and the idea of resigning keeps haunting me. There is nothing else that can be learned in my current job, and I have been in better environments than this. What should I do with the situation?

    Read the article

  • what is the task of a coach in acm programming contests?

    - by Layla
    At the university where I am working, they have decided to participate in the ACM regionals for the first time, and they would like to appoint me as a coach. I have never been in that situation before and have not found much information about it, so what is the real work of a coach in these contests? Sometimes I have found experienced programmers as coaches, but others are just people without particularly good programming skills; so what is it all about?

    Read the article

  • Is this a violation of the Liskov Substitution Principle?

    - by Paul T Davies
    Say we have a list of Task entities, and a ProjectTask subtype. Tasks can be closed at any time, except ProjectTasks, which cannot be closed once they have a status of Started. The UI should ensure the option to close a started ProjectTask is never available, but some safeguards are present in the domain:

        public class Task
        {
            public Status Status { get; set; }

            public virtual void Close()
            {
                Status = Status.Closed;
            }
        }

        public class ProjectTask : Task
        {
            public override void Close()
            {
                if (Status == Status.Started)
                    throw new Exception("Cannot close a started Project Task");
                base.Close();
            }
        }

    Now when calling Close() on a Task, there is a chance the call will fail if it is a ProjectTask with the Started status, when it wouldn't if it was a base Task. But this is the business requirement: it should fail. Can this be regarded as a violation?
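    One refactoring that is often suggested for this kind of strengthened precondition, sketched here as an assumption rather than as the answer to the question, is to surface the precondition itself so callers of the base type can ask before acting instead of being surprised by an exception:

        // Sketch: expose CanClose so code written against Task can honour the
        // subtype's constraint without knowing about ProjectTask specifically.
        public class Task
        {
            public Status Status { get; set; }

            public virtual bool CanClose
            {
                get { return true; }
            }

            public virtual void Close()
            {
                Status = Status.Closed;
            }
        }

        public class ProjectTask : Task
        {
            public override bool CanClose
            {
                get { return Status != Status.Started; }
            }

            public override void Close()
            {
                if (!CanClose)
                    throw new InvalidOperationException("Cannot close a started Project Task");
                base.Close();
            }
        }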

    Read the article

  • Quartz Thread Execution Parallel or Sequential?

    - by vikas
    We have a Quartz-based scheduler application which runs about 1000 jobs per minute, evenly distributed across the seconds of each minute, i.e. about 16-17 jobs per second. Ideally, these 16-17 jobs should fire at the same time; however, the first statement of the job's execute method, which simply logs the time of execution, is being called very late. For example, let us assume we have 1000 jobs scheduled per minute from 05:00 to 05:04. Ideally, the job which is scheduled at 05:03:50 should have logged the first statement of the execute method at 05:03:50; however, it is doing so at about 05:06:38. I have tracked down the time taken by the scheduled job, which comes to around 15-20 milliseconds. The scheduled job is fast enough because we just send a message on an ActiveMQ queue. We have specified the number of Quartz threads to be 100 and have even tried increasing it to 200 and more, but no gain. One more thing we noticed is that the logs from the scheduler come out sequentially after the first minute, i.e.:

        [Quartz_Worker_28] <Some log statement> .. ..
        [Quartz_Worker_29] <Some log statement> .. ..
        [Quartz_Worker_30] <Some log statement> .. ..

    This suggests that after some time Quartz is running its threads almost sequentially. Maybe this is happening due to the time taken to notify job completion to the persistence store (which is a separate Postgres database in this case) and/or context switching. What can be the reason behind this strange behavior?

    EDIT: More detailed log:

        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] fired job [<job_name>] scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute begin--------- ScheduledLocateJob with key: <job_name> started at Fri Jul 06 10:08:37 EDT 2012
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement>
        [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute end--------- ScheduledLocateJob with key: <job_name> ended at Fri Jul 06 10:08:37 EDT 2012
        [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] completed firing job [<job_name>] with resulting trigger instruction code: DO NOTHING. Next scheduled at: 06-07-2012 10:34:53.000

    I am doubtful about this section of the above log:

        scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000

    because this job was scheduled for 10:04:53, but it fired at 10:08:33 and Quartz still didn't consider it a misfire. Shouldn't it be a misfire?
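    For reference (an illustration on my part, not configuration taken from the question), the Quartz settings that usually govern this behaviour are the thread pool size and the misfire threshold in quartz.properties; the values below are only examples:

        # Illustrative quartz.properties fragment (example values, not the poster's config)
        org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
        org.quartz.threadPool.threadCount = 100

        # A trigger firing later than this many milliseconds past its scheduled time
        # is treated as a misfire; 60000 (one minute) is the commonly cited default.
        org.quartz.jobStore.misfireThreshold = 60000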

    Read the article

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to ssh to a server, I'm able to do it, as my id_rsa.pub key is added to the authorized keys on the server. Now when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I'm unable to understand what the issue could be if I'm able to ssh to the server but unable to deploy to it.

    $ cap deploy:setup
    "no seed data"
      triggering start callbacks for `deploy:setup'
    * 13:42:18 == Currently executing `multistage:ensure'
    *** Defaulting to `development'
    * 13:42:18 == Currently executing `development'
    * 13:42:18 == Currently executing `deploy:setup'
      triggering before callbacks for `deploy:setup'
    * 13:42:18 == Currently executing `db:configure_mongoid'
    * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config"
      servers: ["dev1.noob.com", "176.9.24.217"]
    Password:

    Cap script:

    # gem install capistrano capistrano-ext capistrano_colors
    begin; require 'capistrano_colors'; rescue LoadError; end
    require "bundler/capistrano"

    # RVM bootstrap
    # $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
    require 'rvm/capistrano'
    set :rvm_ruby_string, 'ruby-1.9.2-p290'
    set :rvm_type, :user # or :user

    # Application setup
    default_run_options[:pty] = true   # allow pseudo-terminals
    ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from git repository)
    ssh_options[:port] = 22

    set :ip, "dev1.noob.com"
    set :application, "flyingbird"
    set :repository, "repo-path"
    set :scm, :git
    set :branch, fetch(:branch, "master")
    set :deploy_via, :remote_cache
    set :rails_env, "production"
    set :use_sudo, false
    set :scm_username, "user"
    set :user, "user1"

    set(:database_username) { application }
    set(:production_database) { application + "_production" }
    set(:staging_database) { application + "_staging" }
    set(:development_database) { application + "_development" }

    role :web, ip                   # Your HTTP server, Apache/etc
    role :app, ip                   # This may be the same as your `Web` server
    role :db,  ip, :primary => true # This is where Rails migrations will run

    # Use multi-staging
    require "capistrano/ext/multistage"
    set :stages, ["development", "staging", "production"]
    set :default_stage, rails_env

    before "deploy:setup", "db:configure_mongoid"

    # Uncomment if you use any of these databases
    after "deploy:update_code", "db:symlink_mongoid"
    after "deploy:update_code", "uploads:configure_shared"
    after "uploads:configure_shared", "uploads:symlink"
    after 'deploy:update_code', 'bundler:symlink_bundled_gems'
    after 'deploy:update_code', 'bundler:install'
    after "deploy:update_code", "rvm:trust_rvmrc"

    # Use this to update crontab if you use 'whenever' gem
    # after "deploy:symlink", "deploy:update_crontab"

    if ARGV.include?("seed_data")
      after "deploy", "db:seed"
    else
      p "no seed data"
    end

    # Custom tasks to handle resque and redis restart
    before "deploy", "deploy:stop_workers"
    after "deploy", "deploy:restart_redis"
    after "deploy", "deploy:start_workers"
    after "deploy", "deploy:cleanup"

    # Create symlink for public uploads
    namespace :uploads do
      task :symlink do
        run <<-CMD
          rm -rf #{release_path}/public/uploads &&
          mkdir -p #{release_path}/public &&
          ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads
        CMD
      end

      task :configure_shared do
        run "mkdir -p #{shared_path}/public"
        run "mkdir -p #{shared_path}/public/uploads"
      end
    end

    namespace :rvm do
      desc 'Trust rvmrc file'
      task :trust_rvmrc do
        run "rvm rvmrc trust #{current_release}"
      end
    end

    namespace :db do
      desc "Create mongoid.yml in shared path"
      task :configure_mongoid do
        db_config = <<-EOF
          defaults: &defaults
            host: localhost

          production:
            <<: *defaults
            database: #{production_database}

          staging:
            <<: *defaults
            database: #{staging_database}
        EOF
        run "mkdir -p #{shared_path}/config"
        put db_config, "#{shared_path}/config/mongoid.yml"
      end

      desc "Make symlink for mongoid.yml"
      task :symlink_mongoid do
        run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml"
      end

      desc "Fill the database with seed data"
      task :seed do
        run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed"
      end
    end

    namespace :bundler do
      desc "Symlink bundled gems on each release"
      task :symlink_bundled_gems, :roles => :app do
        run "mkdir -p #{shared_path}/bundled_gems"
        run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle"
      end

      desc "Install bundled gems"
      task :install, :roles => :app do
        run "cd #{release_path} && bundle install --deployment"
      end
    end

    namespace :deploy do
      task :start, :roles => :app do
        run "touch #{current_path}/tmp/restart.txt"
      end

      desc "Restart the app"
      task :restart, :roles => :app do
        run "touch #{current_path}/tmp/restart.txt"
      end

      desc "Stop the workers"
      task :stop_workers do
        run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers"
      end

      desc "Restart Redis server"
      task :restart_redis do
        run "/etc/init.d/redis-server restart"
      end

      desc "Start the workers"
      task :start_workers do
        run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers"
      end
    end

    Read the article

  • Is it possible to use some form of code for example in PUTTY to execute a task which is done Remote

    - by xnxmx
    Basically, every morning at 6:00 AM I have to log in to a remote desktop, open a program, and click on a few things to make reservations before anyone else does. I want to know whether there is any other way this can be done, by turning it into some form of code and executing it instead of doing it manually. Of course, time is precious here and the task needs to be done at the same pace, if not faster. Thanks!!!
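
    If the reservation can be triggered by a command on the remote machine rather than by GUI clicks, one option is a small scheduled program that shells out to PuTTY's command-line client, plink. Below is a minimal Java sketch of that idea; user@remote-host, ./reserve.sh and the class name MorningReservation are placeholders, not names from the question.

    import java.time.Duration;
    import java.time.LocalDateTime;
    import java.time.LocalTime;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class MorningReservation {
        public static void main(String[] args) {
            // Delay from now until the next 06:00 local time.
            LocalDateTime now = LocalDateTime.now();
            LocalDateTime next = now.toLocalDate().atTime(LocalTime.of(6, 0));
            if (!next.isAfter(now)) {
                next = next.plusDays(1);
            }
            long initialDelaySeconds = Duration.between(now, next).getSeconds();

            // Run the reservation command every 24 hours, starting at the next 06:00.
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(MorningReservation::makeReservation,
                    initialDelaySeconds, TimeUnit.DAYS.toSeconds(1), TimeUnit.SECONDS);
        }

        static void makeReservation() {
            try {
                // plink is PuTTY's command-line client; "-batch" disables interactive
                // prompts. ./reserve.sh stands in for whatever command actually makes
                // the reservation on the remote host.
                new ProcessBuilder("plink", "-batch", "user@remote-host", "./reserve.sh")
                        .inheritIO()
                        .start()
                        .waitFor();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    The operating system's own scheduler (cron, or Windows Task Scheduler) invoking plink directly achieves the same thing without keeping a JVM running; if the remote program genuinely requires clicking through a GUI, a UI-automation tool would be needed instead.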

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >