Search Results

Search found 5152 results on 207 pages for 'scheduled tasks'.

Page 28 of 207

  • How close can I get C# to the performance of C++ for small intensive tasks?

    - by SLC
    I was thinking about the speed difference of C++ to C# being mostly about C# compiling to byte-code that is taken in by the JIT compiler (is that correct?) and all the checks C# does. I notice that it is possible to turn a lot of these functions off, both in the compile options and possibly through using the unsafe keyword, as unsafe code is not verifiable by the common language runtime.

    Therefore, if you were to write a simple console application in both languages that flipped an imaginary coin an infinite number of times and displayed the results to the screen every 10,000 or so iterations, how much speed difference would there be? I chose this because it's a very simple program. I'd like to test this but I don't know C++ or have the tools to compile it. This is my C# version though:

        static void Main(string[] args)
        {
            unsafe
            {
                Random rnd = new Random();
                int heads = 0, tails = 0;
                while (true)
                {
                    if (rnd.NextDouble() > 0.5)
                        heads++;
                    else
                        tails++;
                    if ((heads + tails) % 1000000 == 0)
                        Console.WriteLine("Heads: {0} Tails: {1}", heads, tails);
                }
            }
        }

    Is the difference enough to warrant deliberately compiling sections of code "unsafe" or into DLLs that do not have some of the compile options like overflow checking enabled? Or does it go the other way, where it would be beneficial to compile sections in C++? I'm sure interop speed comes into play too then.

    To avoid subjectivity, I reiterate the specific parts of this question as:

    - Does C# get a performance boost from using unsafe code?
    - Do the compile options such as disabling overflow checking boost performance, and do they affect unsafe code?
    - Would the program above be faster in C++, or negligibly different?
    - Is it worth compiling long intensive number-crunching tasks in a language such as C++, or using /unsafe for a bonus? Less subjectively, could I complete an intensive operation faster by doing this?
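    For reference, the overflow checking asked about above can be toggled per block in C# with the checked and unchecked keywords (the /checked compiler switch sets the project-wide default), which makes its cost easy to measure directly. A minimal sketch of the two variants, not from the original question:

        static int SumChecked(int[] values)
        {
            int total = 0;
            checked   // force overflow checking, as with /checked+
            {
                foreach (int v in values)
                    total += v;   // throws OverflowException if it overflows
            }
            return total;
        }

        static int SumUnchecked(int[] values)
        {
            int total = 0;
            unchecked // suppress overflow checking, as with /checked-
            {
                foreach (int v in values)
                    total += v;   // wraps around silently on overflow
            }
            return total;
        }

    Timing both over the same large array is one way to see how much the check actually costs in a given workload.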

    Read the article

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a dataflow task, I can slip a rowcount into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the rowcount was 0. This works well for me, but I have no corresponding strategy for SQL tasks expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect of that: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a count(*) aggregate, test whether that returns a number > 0, and if so run the query again without the count. That's ugly because I have the same query in two places that I need to keep in sync. Is there a better way?

    Read the article

  • Can AutoIt scripts run as a scheduled task while not logged in?

    - by muddywater
    I am using Ruby/WATIR/AutoIt to automate a task via Task Scheduler, which runs fine as long as I am logged in, but as soon as my account is locked (mandated 10 minute screen lock) or I log out, the script stops functioning. When I log back in, it is sitting at the part where AutoIt is supposed to handle a file download dialogue by clicking Save, entering the filename, and clicking to save again. The following is the code that works while I am logged in. Is AutoIt supposed to work when I am not logged in, and is there some other way to accomplish this? Thanks!! (Searching has not turned up anything; the terms are too generic.)

        prompt_message = "Do you want to save this file, or find a program online to open it?"
        window_title = "File Download"
        save_dialog = WIN32OLE.new("AutoItX3.Control")
        sleep 1
        save_dialog_obtained = save_dialog.WinWaitActive(window_title, prompt_message, 25)
        save_dialog.ControlFocus(window_title, prompt_message, "&Save")
        sleep 1
        save_dialog.Send("S")
        save_dialog.ControlClick(window_title, prompt_message, "&Save")
        save_dialog.WinSetTitle(window_title, prompt_message, "This is ForTesting")
        saveas_dialog_obtained = save_dialog.WinWait("Save As", "Save&in", 5)
        sleep 1
        path = fileName
        puts " Edit the file path"
        save_dialog.ControlSend("Save As", "", "Edit1", path)
        sleep 4
        puts " Save the file"
        save_dialog.ControlClick("Save As", "Save &in", "&Save")
        save_fileAlreadyExists = save_dialog.Send("Y")

    Read the article

  • Why does this VBS scheduled task (to call a URL) not work in Windows Server 2008?

    - by user303644
    This same script worked in older server OS environments, and even on my desktop, and allows me to kick off a nightly process on my website's URL. It simply will not execute the URL in my Windows Server 2008 environment.

    - It does not generate any errors, claiming task completion
    - I can pull the same URL up just fine in the server's web browser
    - I have the script running with "highest privileges"
    - I even tried to create a batch file which executes it, so I can explicitly "Run as Administrator", and it still will not execute the URL (but will not generate any errors either)

    I'm baffled as to why the task claims to have completed successfully, yet the script never reaches the URL.

        Call LogEntry()

        Sub LogEntry()
            'Force the script to finish on an error.
            On Error Resume Next

            'Declare variables
            Dim objRequest
            Dim URL

            Set objRequest = CreateObject("MSXML2.ServerXMLHTTP")

            'Put together the URL link appending the Variables.
            URL = "http://myURL/AutorunNightlyTasks.aspx"

            'Open the HTTP request and pass the URL to the objRequest object
            objRequest.open "GET", URL, False

            'Send the HTML Request
            objRequest.send()

            'Set the object to nothing
            Set objRequest = Nothing
        End Sub

    Read the article

  • How do you assign resources and keep begin, end and duration of a task intact?

    - by Random
    I have problems with assigning more than one resource to a group of tasks. The idea is simple: my tasks are in one group and are manually scheduled to particular begin and end dates. I want to assign more than one resource, keep the task duration and dates (fixed duration), and increase work. For top-level tasks it works fine, but when tasks are grouped, the duration of each is extended to reach the group end date while the work remains. For the problematic tasks, the Gantt chart looks like this:

    One resource attached (good):

        ( Task 1.1 )
                     ( Task 1.2 )
                                  (Task 1.3)

    More than one resource attached (wrong):

        ( Task 1.1 ).......................
                     ( Task 1.2 )..........
                                  (Task 1.3)

    So for tasks like that, I want to have a fixed schedule and just increase work by adding resources that work at the same time, but sometimes MS Project does leveling that makes the resources work sequentially.

    Read the article

  • Is it possible to dynamically change the delay on a Scheduled Poller in Camel via JMX?

    - by sebbrousse
    I would like to set/change the delay of a File consumer at runtime through JMX. I am able to change the value of the property, but it doesn't seem to be taken into account until I restart the consumer. Example with the camel-archetype-java and its basic file example:

    - Run it
    - Change the delay of the File consumer by calling the setDelay operation with the JConsole
    - The delay property of the consumer is changed, but logs show it continues to poll at the default 500 ms
    - Stop/start the consumer
    - The new value of the delay is used by the consumer

    Do I need other steps, or do I have to activate some configuration, to make this work at runtime?

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. 21 CFR Part 11 or Sarbanes–Oxley). From something as simple as relating Tasks to Check-ins, or seeing the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information and more is available in TFS.

    Although all of this traceability is available within TFS, you do need to understand that it is not free. Well... I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this, relating Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and can be augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other.

    Requirements – The root of all knowledge

    Requirements are the things that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements: it might not be enough to say which Requirements were completed in a given release; you may also need to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "ASOF" clause at the end to query historical data in the operational store for TFS.

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
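    For example, a hypothetical query in that format, listing Requirements as they stood on a past date, might look like the following (the field names are standard TFS reference names, but the filter and date are made up for illustration):

        select [System.Id], [System.Title], [System.State]
        from WorkItems
        where [System.WorkItemType] = 'Requirement'
        order by [System.Id]
        asof '2010-01-01'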
    Figure: Saving Queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be Children of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between this time and its creation. A better method can be to explicitly link the Build output to the Build.

    Builds – Let's tie some more of this together

    Builds are the glue that enables the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, and the Build itself also records the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single changed line of code all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states.

    The second scenario is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance where you want to track a test, and the test results, of a Unit Test designed to prove the existence, and then the non-recurrence, of a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; however, there are many Integration and Regression tests that may be automated, and it would make sense to track them in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process Template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, similar to Bugs, if the Change Request is big enough that it needs to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    - Business Analysts manage Requirements and Change Requests
    - Programmers manage Tasks and check in against Change Requests and Bugs
    - Testers manage Bugs and Test Cases
    - Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn.

    Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • How to manage a developer who has poor communication skills

    - by djcredo
    I manage a small team of developers on an application which is in the mid-point of its lifecycle, within a big firm. This unfortunately means there is commonly a 30/70 split of programming tasks to "other technical work". This work includes:

    - Working with DBA / Unix / Network / Loadbalancer teams on various tasks
    - Placing and managing orders for hardware or infrastructure in different regions
    - Running tests that have not yet been migrated to CI
    - Analysis
    - Support / Investigation

    It's fair to say that the developers would all prefer to be coding rather than doing these more mundane tasks, so I try to hand out the fun programming jobs evenly amongst the team. Most of the team was hired because, though they may not have the elite programming skills to write their own compiler / game engine / high-frequency trading system etc., they are good communicators who "can get stuff done", work with other teams, and can somewhat navigate the complex bureaucracy here. They are good developers, but they are also good all-round technical staff.

    However, one member of the team probably has above-average coding skills, but below-average communication skills. Traditionally, the previous Development Manager tended to give him the programming tasks and not the more mundane tasks listed above. However, I don't feel that this is fair to the rest of the team, who have shown an aptitude for developing the well-rounded skillset that is commonly required in a big-business IT department.

    What should I do in this situation? If I continue to give him more programming work, I know that it will be done faster (and conversely, I would expect him to complete the other work more slowly). But it goes against my principles, and promotes the idea that you can carve out a "comfortable niche" for yourself simply by being bad at the tasks you don't like.

    Read the article

  • Dojo and Separate JavaScript File

    - by Bunch
    For a project I needed to use the ArcGIS API for some mapping. To use this you need to use Dojo, but in this case all it really comes down to is adding some require lines and an addOnLoad on your web page. At first everything was working great: the maps rendered and the various layers populated as needed. Once it was working I started moving the various JavaScript functions into their own files to keep everything nice and neat. Then the problems started; mainly, the map would not show up any more. So that was a pretty big problem. Luckily the fix was pretty simple: just move the dojo.addOnLoad line into its own script tag. If I had the dojo.addOnLoad in the same script block as the various require lines, it would not work as expected.

    Works:

        <script type="text/javascript" language="javascript" src="javascript/test.js" />
        <script type="text/javascript">
            dojo.require("esri.map");
            dojo.require("esri.tasks.locator");
            dojo.require("esri.tasks.query");
            dojo.require("esri.tasks.geometry");
        </script>
        <script type="text/javascript">
            dojo.addOnLoad(init);
        </script>

    Does not work:

        <script type="text/javascript" language="javascript" src="javascript/test.js" />
        <script type="text/javascript">
            dojo.require("esri.map");
            dojo.require("esri.tasks.locator");
            dojo.require("esri.tasks.query");
            dojo.require("esri.tasks.geometry");
            dojo.addOnLoad(init);
        </script>

    Technorati Tags: JavaScript, Dojo

    Read the article

  • Is Akka a good solution for a concurrent pipeline/workflow problem?

    - by herpylderp
    Disclaimer: I am brand new to Akka and the concept of Actors/Event-Driven Architectures in general. I have to implement a fairly complex problem where users can configure a "concurrent pipeline":

    - Pipeline: consists of 1+ Stages; all Stages execute sequentially
    - Stage: consists of 1+ Tasks; all Tasks execute in parallel
    - Task: essentially a Java Runnable

    As you can see above, a Task is a Runnable that does some unit of work. Tasks are organized into Stages, which execute their Tasks in parallel. Stages are organized into the Pipeline, which executes its Stages sequentially. Hence if a user specifies the following Pipeline:

        CrossTheRoadSafelyPipeline
            Stage 1: Look left
                Task 1: Turn your head to the left and look for cars
                Task 2: Listen for cars
            Stage 2: Look right
                Task 1: Turn your head to the right and look for cars
                Task 2: Listen for cars

    Then Stage 1 will execute, and then Stage 2 will execute. However, while each Stage is executing, its individual Tasks are executing in parallel/at the same time. In reality Pipelines will become very complicated, with hundreds of Stages and dozens of Tasks per Stage (again, executing at the same time).

    To implement this Pipeline I can only think of several solutions:

    - ESB/Apache Camel
    - Guava Event Bus
    - Java 5 Concurrency
    - Actors/Akka

    Camel doesn't seem right because its core competency is integration, not synchrony and orchestration across worker threads. Guava is great, but this doesn't really feel like a subscriber/publisher type of problem. And Java 5 Concurrency (ExecutorService, etc.) just feels too low-level and painful. So I ask: is Akka a strong candidate for this type of problem? If so, how? If not, then why, and what is a good candidate?
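    To pin down the execution model being described (sequential stages, a parallel barrier per stage), here is a minimal sketch using .NET tasks; the types are hypothetical and this only illustrates the semantics, it is not one of the candidate solutions:

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        class Pipeline
        {
            // Each stage is a list of units of work; stages run one after another.
            public List<List<Action>> Stages = new List<List<Action>>();

            public void Run()
            {
                foreach (var stage in Stages)              // stages: sequential
                {
                    var running = new List<Task>();
                    foreach (var work in stage)            // tasks within a stage: parallel
                        running.Add(Task.Factory.StartNew(work));
                    Task.WaitAll(running.ToArray());       // barrier: stage ends when all its tasks end
                }
            }
        }

    Whatever framework is chosen ultimately has to reproduce this fork/join-per-stage behaviour.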

    Read the article

  • SQL Server "User-Schema Separation" and Entity Framework issues

    - by Ryan
    I have been fooling around with EF with a database that has implemented user-schema separation with a twist: there are multiple tables with the same name that are separated via the schema. So, like:

        admin.tasks
        staff.tasks
        contractor.tasks

    When I created my EF model I noticed that there were 3 tasks tables:

        tasks
        tasks1
        tasks2

    Is this by design? Also, is there a way to tell EF to add the schema to the name of the entity, or am I SOL and doing it myself?

    Read the article

  • Task-It Webinar - Source Code

    Last week I presented a webinar called "Building a real-world application with RadControls for Silverlight 4". For those that didn't get to see the webinar, you can view it here: Building a real-world application with RadControls for Silverlight 4.

    Since the webinar I've received several requests asking if I could post the source code for the simple application I showed demonstrating some of the techniques used in the development of Task-It, such as MVVM, Commands and Internationalization. This source code is now available for download here. After downloading the source:

    1. Extract it to the location of your choice on your hard-drive
    2. Open the solution
    3. Right-click ModuleProject.Web and select 'Set as StartUp Project'
    4. Right-click ProjectTestPage.aspx and select 'Set as Start Page'
    5. Create a database in SQL Server called WebinarProject
    6. Navigate to the Database folder under the WebinarProject directory and run the .sql script against your WebinarProject database

    The last two steps are necessary only for the Tasks page to work properly (using WCF RIA Services). Now some notes about each page:

    Code-behind

    This is not the way I recommend coding a line-of-business application in Silverlight, but I simply wanted to show how the code-behind approach would look.

    Command

    This page introduces MVVM and Commands. You'll notice in the XAML that the Command property of the RadMenuItem and the Button are both bound to a SaveCommand. That comes from the view model. If you look in the code-behind of the user control you'll see that an instance of a CommandViewModel is instantiated and set as the DataContext of the UserControl. There is also a listener for the view model's SaveCompleted event. When this is fired, it tells the view (UserControl) to display the MessageBox.

    Internationalization

    This sample is similar to the previous one, but instead of using hard-coded strings in the UI, the strings are obtained via binding to view model properties. The view model gets the strings from the .resx files (Strings.resx or Strings.de.resx) under Assets/Resources. If you uncomment the call to ShowGerman() in App.xaml.cs's Application_Startup method and re-run the application, you will see the UI in German. Note that this code, which sets the CurrentCulture and CurrentUICulture on the current thread to "de" (German), is for testing purposes only.

    RadWindow

    Once again, very similar to the previous example. The difference is that we are now using a RadWindow to display the 'Saved' message instead of a MessageBox. The advantage here is that we do not have to hold on to a reference to the view model in our code-behind so that we can get the 'Saved' message from it. The RadWindow's DataContext is now also bound to the view model, so within its XAML we can bind directly to properties in the view model. Much nicer, and cleaner.

    One other thing I introduced in this example is the use of spacer Rectangles. Rather than setting a width and/or height on the rectangles for spacing, I am now referencing a style in my ResourceDictionary called StandardSpacerStyle. I like doing this better than using margins or padding because now I have a reusable way to create space between elements, the Rectangle does not show (because I have not set its Fill color), and I can change my spacing throughout the user interface in one place if I'd like.

    Tasks

    This page is quite a bit different from the other four. It is a very simple, stripped-down version of the Tasks page in the Task-It application.

    The Tasks.xaml UserControl has a ContentControl, and the Content of that control is set based on whether we are looking at the list of tasks or editing a task. So it displays one of two child UserControls, which are called List and Details. List has the RadGridView, Details has the form. In the code-behind of the Tasks UserControl I am once again setting its DataContext to a view model class. The nice thing is, whichever child UserControl is being displayed (List or Details) inherits its DataContext from its parent control (Tasks), so I do not have to explicitly set it.

    The List UserControl simply displays a RadGridView whose ItemsSource is bound to a property in the view model called Tasks, and its SelectedItem property is bound to a property in the view model called SelectedItem. The SelectedItem binding must be TwoWay so that the view is notified when the SelectedItem changes in the view model, and the view model is notified when something changes in the view (like when a user changes the Name and/or DueDate in the form). You'll also notice that the form's TextBox and RadDatePicker are also TwoWay bound to the SelectedItem property in the view model. You can experiment with the binding by removing TwoWay and seeing how changes in the form do not show up in the RadGridView. So here we have an example of two different views (List and Details) that are both bound to the same view model... and actually, so is the Tasks UserControl, so it is really three views.

    WCF RIA Services

    By the way, I am using WCF RIA Services to retrieve data for the RadGridView and save the data when the user clicks the Save button in the form. I created a really simple ADO.NET Entity Data Model in WebinarProject.Web called DataModel.edmx. I also created a simple Domain Data Service called DataService that has methods for retrieving data, inserting, updating and deleting. However, I am only using the retrieval and update methods in this sample. Note that I do not currently have any validation in place on the form, as I wanted to keep the sample as simple as possible.

    Wrap up

    Technically, I should move the calls to WCF RIA Services out of the view model and put them into a separate layer, but this works for now, and that is a topic for another day!
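    Returning to the Command page described earlier: as a reference for readers (this is not the actual Task-It source, which is in the download), a typical minimal ICommand implementation that a view model can expose as its SaveCommand looks something like this:

        using System;
        using System.Windows.Input;

        // A minimal ICommand of the kind MVVM view models expose,
        // e.g. the SaveCommand bound to the Button and RadMenuItem above.
        public class DelegateCommand : ICommand
        {
            private readonly Action _execute;
            private readonly Func<bool> _canExecute;

            public DelegateCommand(Action execute, Func<bool> canExecute = null)
            {
                _execute = execute;
                _canExecute = canExecute;
            }

            public event EventHandler CanExecuteChanged;

            public bool CanExecute(object parameter)
            {
                return _canExecute == null || _canExecute();
            }

            public void Execute(object parameter)
            {
                _execute();
            }
        }

    The view model would then expose a property such as public ICommand SaveCommand { get; private set; } and initialize it with SaveCommand = new DelegateCommand(Save);.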

    Read the article

  • Concurrency Utilities for Java EE Early Draft (JSR 236)

    - by arungupta
    Concurrency Utilities for Java EE is being worked on as JSR 236 and has released an Early Draft. It provides concurrency capabilities to Java EE application components without compromising container integrity. Simple (common) and advanced concurrency patterns are easily supported without sacrificing usability. Using Java SE concurrency utilities such as the java.util.concurrent API, java.lang.Thread and java.util.Timer in a Java EE application component such as an EJB or Servlet is problematic, since the container and server have no knowledge of these resources. JSR 236 enables concurrency largely by extending the Concurrency Utilities API developed under JSR 166. This also allows consistency between the Java SE and Java EE concurrency programming models.

    There are four main programming interfaces available:

    - ManagedExecutorService
    - ManagedScheduledExecutorService
    - ContextService
    - ManagedThreadFactory

    ManagedExecutorService is a managed version of java.util.concurrent.ExecutorService. The implementations of this interface are provided by the container and accessible using a JNDI reference:

        <resource-env-ref>
          <resource-env-ref-name>
            concurrent/BatchExecutor
          </resource-env-ref-name>
          <resource-env-ref-type>
            javax.enterprise.concurrent.ManagedExecutorService
          </resource-env-ref-type>
        </resource-env-ref>

    and available as:

        @Resource(name="concurrent/BatchExecutor")
        ManagedExecutorService executor;

    It's recommended to bind the JNDI references in the java:comp/env/concurrent subcontext. The asynchronous tasks that need to be executed must implement the java.lang.Runnable or java.util.concurrent.Callable interface, as in:

        public class MyTask implements Runnable {
            public void run() {
                // business logic goes here
            }
        }

    or:

        public class MyTask2 implements Callable<Date> {
            public Date call() {
                // business logic goes here
            }
        }

    The task is then submitted to the executor using one of the submit methods, which return a Future instance. The Future represents the result of the task and can also be used to check if the task is complete or wait for its completion.

        Future<String> future = executor.submit(new MyTask(), "done");
        . . .
        String result = future.get();

    Another example of submitting tasks is:

        class MyTask implements Callable<Long> { . . . }
        class MyTask2 implements Callable<Date> { . . . }

        ArrayList<Callable> tasks = new ArrayList<Callable>();
        tasks.add(new MyTask());
        tasks.add(new MyTask2());
        List<Future<Object>> result = executor.invokeAll(tasks);

    The ManagedExecutorService may be configured with different properties, such as:

    - Hung Task Threshold: time in milliseconds that a task can execute before it is considered hung
    - Pool Info:
      - Core Size: number of threads to keep alive
      - Maximum Size: maximum number of threads allowed in the pool
      - Keep Alive: time to allow threads to remain idle when the number of threads is greater than Core Size
      - Work Queue Capacity: number of tasks that can be stored in the inbound buffer
    - Thread Use: whether the application intends to run short or long-running tasks; accordingly, pooled or daemon threads are picked

    ManagedScheduledExecutorService adds delay and periodic task-running capabilities to ManagedExecutorService. The implementations of this interface are likewise provided by the container and accessible using a JNDI reference:

        <resource-env-ref>
          <resource-env-ref-name>
            concurrent/timedExecutor
          </resource-env-ref-name>
          <resource-env-ref-type>
            javax.enterprise.concurrent.ManagedScheduledExecutorService
          </resource-env-ref-type>
        </resource-env-ref>

    and available as:

        @Resource(name="concurrent/timedExecutor")
        ManagedScheduledExecutorService executor;

    The tasks are then submitted using the submit, invokeXXX or scheduleXXX methods.

        ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS);

    This will create and execute a one-shot action that becomes enabled after a 5-second delay. More control is possible using one of the newly added methods:

        class MyTaskListener implements ManagedTaskListener {
            public void taskStarting(...) { . . . }
            public void taskSubmitted(...) { . . . }
            public void taskDone(...) { . . . }
            public void taskAborted(...) { . . . }
        }

        ScheduledFuture<?> future = executor.schedule(new MyTask(), 5, TimeUnit.SECONDS, new MyTaskListener());

    Here, ManagedTaskListener is used to monitor the state of a task's Future.

    ManagedThreadFactory provides a method for creating threads for execution in a managed environment. A simple usage is:

        @Resource(name="concurrent/myThreadFactory")
        ManagedThreadFactory factory;
        . . .
        Thread thread = factory.newThread(new Runnable() { . . . });

    concurrent/myThreadFactory is a JNDI resource.

    There is a lot of interesting content in the Early Draft; download it and read it yourself. The implementation will be made available soon and will also be integrated into GlassFish 4. Some references for further exploring:

    - Javadoc
    - Early Draft Specification
    - concurrency-ee-spec.java.net
    - [email protected]

    Read the article

  • Dynamic State Machine in Ruby? Do State Machines Have to be Classes?

    - by viatropos
    Question is: are state machines always defined statically (on classes)? Or is there a way for me to have it so each instance of the class has its own set of states?

    I'm checking out Stonepath for implementing a task engine. I don't really see the distinction between "states" and "tasks" in there, so I'm thinking I could just map a Task directly to a state. This would allow me to define task-lists (or workflows) dynamically, without having to do things like:

        aasm_event :evaluate do
          transitions :to => :in_evaluation, :from => :pending
        end

        aasm_event :accept do
          transitions :to => :accepted, :from => :pending
        end

        aasm_event :reject do
          transitions :to => :rejected, :from => :pending
        end

    Instead, a WorkItem (the main workflow/task manager model) would just have many tasks. Then the tasks would work like states, so I could do something like this:

        aasm_initial_state :initial

        tasks.each do |task|
          aasm_state task.name.to_sym
        end

        previous_state = nil
        tasks.each do |task|
          aasm_event task.name.to_sym do
            transitions :to => "#{task.name}_phase".to_sym,
                        :from => previous_state ? "#{task.name}_phase" : "initial"
          end
          previous_state = state
        end

    However, I can't do that with the aasm gem because those methods (aasm_state and aasm_event) are class methods, so every instance of the class with that state machine has the same states. I want it so a "WorkItem" or "TaskList" dynamically creates a sequence of states and transitions based on the tasks it has. This would allow me to dynamically define workflows and just have states map to tasks. Are state machines ever used like this?

    Read the article

  • creating objects in list

    - by prince23
    List<Users> myList = new List<Users> {
        new Users{ Name="Kumar", Gender="M", Age=75, Parent="All"},
        new Users{ Name="Suresh", Gender="M", Age=50, Parent="Kumar"},
        new Users{ Name="Bennette", Gender="F", Age=45, Parent="Kumar"},
        new Users{ Name="kian", Gender="M", Age=20, Parent="Suresh"},
        new Users{ Name="Nathani", Gender="M", Age=15, Parent="Suresh"},
        new Users{ Name="Peter", Gender="M", Age=90, Parent="All"},
        new Users{ Name="Mica", Age=56, Gender="M", Parent="Peter"},
        new Users{ Name="Linderman", Gender="M", Age=51, Parent="Peter"},
        new Users{ Name="john", Age=25, Gender="M", Parent="Mica"},
        new Users{ Name="tom", Gender="M", Age=21, Parent="Mica"},
        new Users{ Name="Ando", Age=64, Gender="M", Parent="All"},
        new Users{ Name="Maya", Age=24, Gender="M", Parent="Ando"},
        new Users{ Name="Niki Sanders", Gender="F", Age=2, Parent="Maya"},
        new Users{ Name="Angela Patrelli", Gender="F", Age=3, Parent="Maya"},
    };

    Now I need to format the data: check each item's Parent and, based on the Parent, nest new objects under the parent object. For example, { Name="Kumar", Gender="M", Age=75, Parent="All" } is a top-level item, because its Parent property is "All". Under the parent object Kumar, I then need to create new objects to hold the users whose Parent is "Kumar":

        new Users{ Name="Suresh", Gender="M", Age=50, Parent="Kumar"},
        new Users{ Name="Bennette", Gender="F", Age=45, Parent="Kumar"},

    So I need to keep checking the Parent and creating further objects under it. This is what I need to achieve; I'm looking for the syntax to do it. Below is a sample where I do this kind of grouping with static data:

        public class SampleProjectData
        {
            public static ObservableCollection<Product> GetSampleData()
            {
                DateTime dtS = DateTime.Now;
                ObservableCollection<Product> teams = new ObservableCollection<Product>();

                teams.Add(new Product() { PDName = "Product1", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) });
                Project emp = new Project() { PName = "Project1", OverallStartTime = dtS + TimeSpan.FromDays(1), OverallEndTime = dtS + TimeSpan.FromDays(6) };
                emp.Tasks.Add(new Task() { StartTime = dtS, EndTime = dtS + TimeSpan.FromDays(2), TaskName = "John's Task 3" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(3), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "John's Task 2" });
                teams[0].Projects.Add(emp);

                emp = new Project() { PName = "Project2", OverallStartTime = dtS + TimeSpan.FromDays(1.5), OverallEndTime = dtS + TimeSpan.FromDays(5.5) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Victor's Task" });
                teams[0].Projects.Add(emp);

                emp = new Project() { PName = "Project3", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(5) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Jason's Task 1" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(7), EndTime = dtS + TimeSpan.FromDays(9), TaskName = "Jason's Task 2" });
                teams[0].Projects.Add(emp);

                teams.Add(new Product() { PDName = "Product2", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) });
                emp = new Project() { PName = "Project4", OverallStartTime = dtS + TimeSpan.FromDays(0.5), OverallEndTime = dtS + TimeSpan.FromDays(3.5) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.5), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Vicky's Task" });
                teams[1].Projects.Add(emp);

                emp = new Project() { PName = "Project5", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(6) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(2.2), EndTime = dtS + TimeSpan.FromDays(3.8), TaskName = "Oleg's Task 1" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(5), EndTime = dtS + TimeSpan.FromDays(6), TaskName = "Oleg's Task 2" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(8), EndTime = dtS + TimeSpan.FromDays(9.6), TaskName = "Oleg's Task 3" });
                teams[1].Projects.Add(emp);

                emp = new Project() { PName = "Project6", OverallStartTime = dtS + TimeSpan.FromDays(2.5), OverallEndTime = dtS + TimeSpan.FromDays(4.5) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(0.8), EndTime = dtS + TimeSpan.FromDays(2), TaskName = "Kim's Task" });
                teams[1].Projects.Add(emp);

                teams.Add(new Product() { PDName = "Product3", OverallStartTime = dtS, OverallEndTime = dtS + TimeSpan.FromDays(3) });
                emp = new Project() { PName = "Project7", OverallStartTime = dtS + TimeSpan.FromDays(5), OverallEndTime = dtS + TimeSpan.FromDays(7.5) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.5), EndTime = dtS + TimeSpan.FromDays(4), TaskName = "Balaji's Task 1" });
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(5), EndTime = dtS + TimeSpan.FromDays(8), TaskName = "Balaji's Task 2" });
                teams[2].Projects.Add(emp);

                emp = new Project() { PName = "Project8", OverallStartTime = dtS + TimeSpan.FromDays(3), OverallEndTime = dtS + TimeSpan.FromDays(6.3) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(1.75), EndTime = dtS + TimeSpan.FromDays(2.25), TaskName = "Li's Task" });
                teams[2].Projects.Add(emp);

                emp = new Project() { PName = "Project9", OverallStartTime = dtS + TimeSpan.FromDays(2), OverallEndTime = dtS + TimeSpan.FromDays(6) };
                emp.Tasks.Add(new Task() { StartTime = dtS + TimeSpan.FromDays(2), EndTime = dtS + TimeSpan.FromDays(3), TaskName = "Stacy's Task" });
                teams[2].Projects.Add(emp);

                return teams;
            }
        }

    Above is a sample where the grouping is done with static data. In the same way, I need to do it for the data coming from the DB and store it in lists. The data comes from three tables (Product, Project, Task), each via a different web service, and I store each result in its own list:

        List<Project> objpro = new List<Project>();
        List<Product> objproduct = new List<Product>();
        List<Task> objTask = new List<Task>();

    Now I need to do the mapping between these tables. As you can see above, under a Product object I add Project objects, and under a Project object I add Task objects. From the data stored in the lists I need to do the same mapping between the classes below and group the data.

        public class Product : INotifyPropertyChanged
        {
            public Product()
            {
                this.Projects = new ObservableCollection<Project>();
            }

            public string PDName { get; set; }
            public ObservableCollection<Project> Projects { get; set; }

            private DateTime _st;
            public DateTime OverallStartTime
            {
                get { return _st; }
                set
                {
                    if (this._st != value)
                    {
                        TimeSpan dur = this._et - this._st;
                        this._st = value;
                        this.OnPropertyChanged("OverallStartTime");
                        this.OverallEndTime = value + dur;
                    }
                }
            }

            private DateTime _et;
            public DateTime OverallEndTime
            {
                get { return _et; }
                set
                {
                    if (this._et != value)
                    {
                        this._et = value;
                        this.OnPropertyChanged("OverallEndTime");
                    }
                }
            }

            #region INotifyPropertyChanged Members
            protected void OnPropertyChanged(string name)
            {
                if (this.PropertyChanged != null)
                    this.PropertyChanged(this, new PropertyChangedEventArgs(name));
            }
            public event PropertyChangedEventHandler PropertyChanged;
            #endregion
        }

        public class Project : INotifyPropertyChanged
        {
            public Project()
            {
                this.Tasks = new ObservableCollection<Task>();
            }

            public string PName { get; set; }
            public ObservableCollection<Task> Tasks { get; set; }

            private DateTime _st;
            public DateTime OverallStartTime
            {
                get { return _st; }
                set
                {
                    if (this._st != value)
                    {
                        TimeSpan dur = this._et - this._st;
                        this._st = value;
                        this.OnPropertyChanged("OverallStartTime");
                        this.OverallEndTime = value + dur;
                    }
                }
            }

            private DateTime _et;
            public DateTime OverallEndTime
            {
                get { return _et; }
                set
                {
                    if (this._et != value)
                    {
                        this._et = value;
                        this.OnPropertyChanged("OverallEndTime");
                    }
                }
            }

            #region INotifyPropertyChanged Members
            protected void OnPropertyChanged(string name)
            {
                if (this.PropertyChanged != null)
                    this.PropertyChanged(this, new PropertyChangedEventArgs(name));
            }
            public event PropertyChangedEventHandler PropertyChanged;
            #endregion
        }

        public class Task : INotifyPropertyChanged
        {
            public string TaskName { get; set; }

            private DateTime _st;
            public DateTime StartTime
            {
                get { return _st; }
                set
                {
                    if (this._st != value)
                    {
                        TimeSpan dur = this._et - this._st;
                        this._st = value;
                        this.OnPropertyChanged("StartTime");
                        this.EndTime = value + dur;
                    }
                }
            }

            private DateTime _et;
            public DateTime EndTime
            {
                get { return _et; }
                set
                {
                    if (this._et != value)
                    {
                        this._et = value;
                        this.OnPropertyChanged("EndTime");
                    }
                }
            }

            #region INotifyPropertyChanged Members
            protected void OnPropertyChanged(string name)
            {
                if (this.PropertyChanged != null)
                    this.PropertyChanged(this, new PropertyChangedEventArgs(name));
            }
            public event PropertyChangedEventHandler PropertyChanged;
            #endregion
        }

    Hope my question is clear.
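    As a sketch of the parent/child grouping the first half of the question asks about: the flat Users list can be indexed by Parent and the tree built recursively. The Node wrapper type here is made up for illustration and is not part of the question's classes:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical wrapper type for the tree of Users.
        public class Node
        {
            public Users Value;
            public List<Node> Children = new List<Node>();
        }

        public static class UserTreeBuilder
        {
            public static List<Node> Build(List<Users> flat)
            {
                // Index every user by the Name of its parent.
                var byParent = flat.ToLookup(u => u.Parent);

                // Recursively attach children; users with Parent == "All" are roots.
                Func<Users, Node> build = null;
                build = u => new Node
                {
                    Value = u,
                    Children = byParent[u.Name].Select(build).ToList()
                };

                return byParent["All"].Select(build).ToList();
            }
        }

    The same lookup-then-recurse shape would apply to stitching the Product/Project/Task lists together, keyed on whatever foreign-key fields the services return.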

    Read the article

  • cpusets not working - threads aren't running in the cpuset I specified?

    - by lori
    I have used cpuset to shield some CPUs for exclusive use by some realtime threads. Displaying the cpuset config with the test app RealtimeTest1 running and its tasks moved into the cpusets:

        $ cset set --list -r
        cset:
                 Name         CPUs-X   MEMs-X Tasks Subs Path
        ------------- ------------ - ------ - ----- ---- ----------
                 root         0-23 y    0-1 y   279    2 /
               system 0,2,4,6,8,10 n      0 n   202    0 /system
               shield 1,3,5,7,9,11 n      1 n     0    2 /shield
        RealtimeTest1      1,3,5,7 n      1 n     0    4 /shield/RealtimeTest1
              thread1            3 n      1 n     1    0 /shield/RealtimeTest1/thread1
              thread2            5 n      1 n     1    0 /shield/RealtimeTest1/thread2
                 main            1 n      1 n     1    0 /shield/RealtimeTest1/main

    I can interrogate the cpuset filesystem to show that my tasks are supposedly pinned to the CPUs I requested:

        /cpusets/shield/RealtimeTest1 $ for i in `find -name tasks`; do echo $i; cat $i; echo "------------"; done
        ./thread1/tasks
        17651
        ------------
        ./main/tasks
        17649
        ------------
        ./thread2/tasks
        17654
        ------------

    Further, if I use sched_getaffinity, it reports what cpuset does: that thread1 is on CPU 3 and thread2 is on CPU 5. However, if I run top -p 17649 -H with f,j to bring up the last-used CPU, it shows that thread1 is running on thread2's CPU, and the main thread is running on a CPU in the system cpuset. (Note that thread 17654 is running FIFO, hence thread 17651 is blocked.)

          PID USER PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  P COMMAND
        17654 root -2   0 54080  35m 7064 R  100  0.4 5:00.77  3 RealtimeTest
        17649 root 20   0 54080  35m 7064 S    0  0.4 0:00.05  2 RealtimeTest
        17651 root 20   0 54080  35m 7064 R    0  0.4 0:00.00  3 RealtimeTest

    Also, looking at /proc/17649/task to find the last CPU each of its tasks ran on:

        /proc/17649/task $ for i in `ls -1`; do cat $i/stat | awk '{print $1 " is on " $(NF - 5)}'; done
        17649 is on 2
        17651 is on 3
        17654 is on 3

    So cpuset and sched_getaffinity report one thing, but reality is another. I would say that cpuset is not working? My machine configuration is:

        $ cat /etc/SuSE-release
        SUSE Linux Enterprise Server 11 (x86_64)
        VERSION = 11
        PATCHLEVEL = 1

        $ uname -a
        Linux foobar 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 x86_64 x86_64 x86_64 GNU/Linux

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not.

    For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules to these numbers (involving regex matching, pattern matching against data from many tables in the database, and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc.

    This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.

    - The Data Layer would access the database
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer
    - The GUI Layer would be a means of communication between the Logic Layer and the user

    Now, thinking of this design, I can see that most of the business rules may be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:

    Pros:
    - The business logic runs close to the data, meaning less network traffic
    - All data can be processed at once, possibly utilizing parallelism and an optimal execution plan

    Cons:
    - Scattering of the business logic: some part is here, some part is there
    - A questionable design solution; it may encounter unknown problems
    - Difficult to implement a progress indicator for the processing task

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
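    For readers who haven't seen one: a CLR stored procedure is just a static .NET method decorated for SQL Server and deployed into it. A minimal sketch (the procedure name and message are made up for illustration):

        using Microsoft.SqlServer.Server;
        using System.Data.SqlTypes;

        public class StoredProcedures
        {
            // Deployed with CREATE ASSEMBLY and then
            // CREATE PROCEDURE ... AS EXTERNAL NAME MyAssembly.StoredProcedures.ProcessLine;
            // it runs inside the SQL Server process, close to the data.
            [SqlProcedure]
            public static void ProcessLine(SqlString line)
            {
                // The regex matching and other business rules would go here.
                SqlContext.Pipe.Send("Processed: " + line.Value);
            }
        }

    The design question in the post is then whether logic like this belongs in the database tier or in the WCF Logic Layer.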

    Read the article

  • How do I run a ruby script, that I put in my /lib/tasks/ directory in my Rails app, once?

    - by marcamillion
    Eventually I would like to get to setting it up as a rake task and doing a cron job, but for right now... all I want to do is take my Ruby script that used to work as a standalone script and have it work within my Rails app. I renamed the file to be .rake instead of .rb and tried running rake my_script at the command line, but that gave me this error message:

        rake aborted!
        Don't know how to build task 'my_script'
        (See full trace by running task with --trace)

    How do I run this script within my Rails environment? This is the first time I am doing something like this, so any assistance would be greatly appreciated. Thanks.

    Read the article

  • How to know if all the Thread Pool's threads are already done with their tasks?

    - by mcxiand
    I have this application that will recurse through all folders in a given directory and look for PDFs. If a PDF file is found, the application will count its pages using iTextSharp. I did this by using a thread to recursively scan all the folders for PDFs; then, if a PDF is found, it is queued to the thread pool. The code looks like this:

        // spawn a thread to handle the processing of pdf in each folder.
        var th = new Thread(() =>
        {
            pdfDirectories = Directory.GetDirectories(pdfPath);
            processDir(pdfDirectories);
        });
        th.Start();

        private void processDir(string[] dirs)
        {
            foreach (var dir in dirs)
            {
                pdfFiles = Directory.GetFiles(dir, "*.pdf");
                processFiles(pdfFiles);
                string[] newdir = Directory.GetDirectories(dir);
                processDir(newdir);
            }
        }

        private void processFiles(string[] files)
        {
            foreach (var pdf in files)
            {
                ThreadPoolHelper.QueueUserWorkItem(
                    new { path = pdf },
                    (data) => { processPDF(data.path); }
                );
            }
        }

    My problem is: how do I know that the thread pool's threads have finished processing all the queued items, so I can tell the user that the application is done with its intended task?
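    One common pattern for this (a sketch, independent of the asker's ThreadPoolHelper): keep a counter of outstanding work items and signal an event when it drops to zero, then wait on that event once queuing has finished. On .NET 4, System.Threading.CountdownEvent packages the same idea.

        using System.Threading;

        class PendingWorkTracker
        {
            private int _pending = 1; // start at 1 to cover the queuing phase itself
            private readonly ManualResetEvent _allDone = new ManualResetEvent(false);

            // Call immediately before queuing each work item.
            public void ItemQueued() { Interlocked.Increment(ref _pending); }

            // Call at the end of each work item.
            public void ItemFinished()
            {
                if (Interlocked.Decrement(ref _pending) == 0)
                    _allDone.Set();
            }

            // Call once, after the last item has been queued.
            public void WaitForAll()
            {
                ItemFinished();     // release the initial count held by the queuing phase
                _allDone.WaitOne(); // blocks until every queued item has finished
            }
        }

    Here processFiles would call ItemQueued() before each QueueUserWorkItem, each work item would call ItemFinished() after processPDF, and the scanning thread would call WaitForAll() after processDir returns, then notify the user.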

    Read the article

  • How much do politics/office intrigue interfere with your day to day tasks at work?

    - by Michael Dorgan
    I'm currently blessed to be employed at a location where politics are pretty much non-existent and management overhead is nearly nil. As I've only worked at this one location for my entire, lengthy career, I have very little frame of reference outside of an occasional Dilbert comic or offhand comment from others about just how badly office politics and management interference get in the way of getting your code done elsewhere. While I'm not actively looking for a new job, this one point has made me quite reluctant to even look seriously elsewhere. My question is: just how much are politics a way of life in larger companies, in or out of the game industry, and how much do they affect your day-to-day satisfaction?

    Read the article

  • NSOperations or NSThread for bursts of smaller tasks that continuously cancel each other?

    - by RickiG
    Hi, I would like to see if I can make a "search as you type" implementation, against a web service, that is optimized enough for it to run on an iPhone.

    The idea is that the user starts typing a word, e.g. "Foo"; after each new letter I wait XXX ms to see if they type another letter. If they don't, I call the web service using the word as a parameter. The web service call and the subsequent parsing of the result I would like to move to a different thread.

    I have written a simple SearchWebService class; it has only one public method:

        - (void) searchFor:(NSString*) str;

    This method tests if a search is already in progress (the user has had a XXX ms delay in their typing) and subsequently stops that search and starts a new one. When a result is ready a delegate method is called:

        - (NSArray*) resultsReady;

    I can't figure out how to get this functionality threaded. If I keep spawning new threads each time a user has a XXX ms delay in their typing, I end up in a bad spot with many threads, especially because I don't need any search but the last one. Instead of spawning threads continuously, I have tried keeping one thread running in the background all the time by:

        - (void) keepRunning {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            SearchWebService *searchObj = [[SearchWebService alloc] init];
            [[NSRunLoop currentRunLoop] run]; // keeps it alive
            [searchObj release];
            [pool release];
        }

    But I can't figure out how to access the "searchFor" method in the "searchObj" object. The above code works and keeps running; I just can't message the searchObj or retrieve the resultsReady objects. Hope someone could point me in the right direction; threading is giving me grief :) Thank you.

    Read the article

  • What tasks aren't easy for PHP, ColdFusion and ASP?

    - by boost
    PHP, ColdFusion, and ASP (among many others) are usually sold on their strengths. What are their weaknesses? If one were to develop a niche product to handle the things that these products aren't so good at, what should it focus on?

    EDIT: I'm trying to figure out what things PHP etc. are bad at. They're all good at doing the nuts-and-bolts stuff, if one is looking with a bottom-to-top mindset. I'm thinking a little more globally, more top-to-bottom: what's difficult to achieve in PHP/ASP/CF without thousands of lines of code and twenty minutes of server time?

    EDIT: Suppose company A comes up to you and says, "We want you to do x in PHP." What values of x will cause you to say, "Forget it, buddy, no one in their right mind would use PHP for that"? (Swap PHP in the above quote for your favourite tool.)

    EDIT: Have we got to the point where everyone's needs can be met with PHP frameworks, Rails and... er... Java?

    Read the article
