Search Results

Search found 5152 results on 207 pages for 'scheduled tasks'.

Page 53 of 207

  • How to maintain a persistent state of an app in Android?

    - by androidbase Praveen
    Hi all, I am working on my app. When I press the Home button on the device, my app goes into the background. If I then long-press the Home button, the task switcher shows my app in its persistent state, i.e. exactly where I was and what I had done in the app. But if I tap my app's icon in the launcher's application list, it restarts the app. What I want is: if my app is already running in the background, wake it up; otherwise start it fresh. How can I achieve that? Any idea?
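
    A minimal Java sketch of one commonly suggested workaround, assuming the restart happens because the launcher creates a second instance of the root activity: in the launcher activity's onCreate, check Activity.isTaskRoot() and finish the duplicate so the existing background task keeps its state. MainActivity and the layout name are placeholders, and whether this covers every launch path on a given device should be verified.

        import android.app.Activity;
        import android.os.Bundle;

        public class MainActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // If this activity is not the root of its task, another instance of the
                // app is already alive in the background; drop this duplicate instead of
                // rebuilding the UI from scratch.
                if (!isTaskRoot()) {
                    finish();
                    return;
                }
                setContentView(R.layout.main); // normal UI setup (layout name is hypothetical)
            }
        }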

    Read the article

  • How to calculate an RTOS task's time

    - by Adnan
    Hello all, I have written code in C for an ARM7 target using an RTOS. There are multiple tasks whose priority is set to the same level, so the tasks execute round-robin. There is one exception: a default task is set to a lower priority than the other tasks, so that it runs whenever no other task is ready. Now I want to calculate the exact total time (duration) for which that default task runs. Can anyone give me some idea of what to do and how to do it in code? Regards, Dani

    Read the article

  • best practices - multiple functions vs single function with switch case

    - by Amit
    I have a situation where I need to perform several small (but similar) tasks, and I can think of two ways to achieve this. First approach: separate functions doTask1(), doTask2(), doTask3(), doTask4(). Second approach: a single function doTask(TASK), where TASK1 ... TASK4 are constants, containing switch(TASK) { case TASK1: /* do task 1 */ break; case TASK2: /* do task 2 */ break; case TASK3: /* do task 3 */ break; case TASK4: /* do task 4 */ break; }. A few more tasks may be added in the future (the chances are slim, but it cannot be ruled out). Please suggest which of the two approaches (or any other) is best practice in such a situation.
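
    For illustration only, a third pattern that avoids both a growing switch and a pile of near-identical functions is a dispatch table that maps each task constant to its handler; a minimal Java sketch follows (the Task enum and the empty handler bodies are hypothetical stand-ins):

        import java.util.EnumMap;
        import java.util.Map;

        public class TaskDispatcher {
            enum Task { TASK1, TASK2, TASK3, TASK4 }

            // One handler per task; adding a new task means adding one entry here.
            private final Map<Task, Runnable> handlers = new EnumMap<>(Task.class);

            public TaskDispatcher() {
                handlers.put(Task.TASK1, () -> { /* do task 1 */ });
                handlers.put(Task.TASK2, () -> { /* do task 2 */ });
                handlers.put(Task.TASK3, () -> { /* do task 3 */ });
                handlers.put(Task.TASK4, () -> { /* do task 4 */ });
            }

            public void doTask(Task task) {
                Runnable handler = handlers.get(task);
                if (handler == null) {
                    throw new IllegalArgumentException("No handler registered for " + task);
                }
                handler.run();
            }
        }

    Whether this beats a plain switch depends on how much the individual tasks share; for four trivial cases the switch is arguably simpler, but the table grows more gracefully.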

    Read the article

  • Spring - scheduling and pooling runnables of different state (each Runnable instance has different state)

    - by lisak
    Hi, I can't figure out what to use for scheduling and pooling runnables of different state (each Runnable instance has different state). I could use ScheduledExecutorFactoryBean together with MethodInvokingRunnable to supply arguments, but take a look at the key ScheduledExecutorFactoryBean method; it is designed so that all tasks start as soon as they are registered: protected void registerTasks(ScheduledExecutorTask[] tasks, ScheduledExecutorService executor) { for (ScheduledExecutorTask task : tasks) { Runnable runnable = getRunnableToSchedule(task); if (task.isOneTimeTask()) { executor.schedule(runnable, task.getDelay(), task.getTimeUnit()); } else { if (task.isFixedRate()) { executor.scheduleAtFixedRate(runnable, task.getDelay(), task.getPeriod(), task.getTimeUnit()); } else { executor.scheduleWithFixedDelay(runnable, task.getDelay(), task.getPeriod(), task.getTimeUnit()); } } } } I can't think of how to set this scenario up with ThreadPoolTaskScheduler. Please help me out here. Thank you. EDIT: the shortened version is: how do I set up a task scheduler that runs hundreds of different Runnable instances (each with different state) at a 2-second interval?
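
    For illustration, a minimal sketch of one way to do this with ThreadPoolTaskScheduler (assuming Spring 3.x; StatefulWorker is a hypothetical stand-in for the poster's stateful Runnables), scheduling each instance individually at a 2-second fixed delay:

        import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

        public class StatefulScheduling {
            public static void main(String[] args) {
                ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
                scheduler.setPoolSize(10); // one pool shared by all the stateful runnables
                scheduler.initialize();

                // Each Runnable instance carries its own state; schedule them one by one.
                for (int i = 0; i < 100; i++) {
                    scheduler.scheduleWithFixedDelay(new StatefulWorker(i), 2000L); // every 2 seconds
                }
            }

            // Hypothetical stateful worker, standing in for the poster's Runnables.
            static class StatefulWorker implements Runnable {
                private final int id;
                private int runCount;

                StatefulWorker(int id) { this.id = id; }

                @Override
                public void run() {
                    runCount++;
                    System.out.println("worker " + id + " run " + runCount);
                }
            }
        }

    Each scheduleWithFixedDelay call returns a ScheduledFuture, which can be kept around to cancel or reschedule an individual instance later.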

    Read the article

  • Tools for website development

    - by sharkin
    Hi! I can't find any other post which addresses this topic on a purely general level. I'm very inexperienced in developing full-scale websites; I get by using HTML and PHP for simple tasks. Now I'm looking at developing a more elaborate site and want to get hold of a good tool which doesn't eat my wallet to the bone. I guess I should add more intelligent text here about what I expect it to be capable of, but I really don't know much yet. The code/design/preview style in Dreamweaver appealed to me, and I'm definitely going to use scripting like PHP or ASP.NET, some database work, etc.; I guess pretty much the standard (?) website development tasks. So, are there good tools out there I should be looking at?

    Read the article

  • Windows - VBScript - Determine IP address of computer on network

    - by tward
    I have written some VBScripts to automate tasks that I perform on computers over the network. These work great for most tasks; however, within our network we have problems with the IP address in DNS not always being correct. This mainly occurs with laptops, where we have different IP ranges for machines on the wireless and wired networks. For example, a machine may boot up wired in the morning and get an IP address of 10.10.10.1. When it switches to wireless it will obtain an address in a different subnet: 10.11.10.1. When you try to connect to that machine, DNS still returns the old IP address (10.10.10.1) even though the computer now has a new one. I have found that I can still connect to that computer's C$ share via \\computername\c$ even though the machine does not ping. Obviously there is some other kind of name resolution going on; my question is, how do I harness this to let my VBScripts connect to WMI? Thanks!

    Read the article

  • How do I create SEO-Friendly urls in ASP.Net-MVC

    - by blesh
    I'm getting myself acquainted with ASP.NET MVC, and I was trying to accomplish some common tasks I've accomplished in the past with WebForms and other functionality. One of the most common tasks I need to do is create SEO-friendly URLs, which in the past has meant doing some URL rewriting to build the query string into the directory path, for example: www.somesite.com/productid/1234/widget rather than: www.somesite.com?productid=1234&name=widget What method do I use to accomplish this in ASP.NET MVC? I've searched around, and all I've found is this, which either I'm not understanding properly or doesn't really answer my question: http://stackoverflow.com/questions/734583/seo-urls-with-asp-net-mvc

    Read the article

  • Sunspot_Rails Rake Aborted When Running Reindex

    - by Walter Chen
    I'm getting the following output after running rake sunspot:reindex. I'm a novice at Rails. I want to add sunspot_rails to my project to add search functionality (with the hope of deploying the project with Heroku). I'm using Rails 3. I followed the instructions here: https://github.com/outoftime/sunspot/blob/master/sunspot_rails/README.rdoc. My various other attempts to diagnose the problem included: installing sunspot in addition to sunspot_rails (I ended up with sunspot_rails v. 1.2.0 and 1.2.1, so I uninstalled 1.2.1 because I have sunspot_rails 1.2.0); installing the nokogiri gem, which I understand is a dependency of sunspot_rails; and installing libxml2 separately, following the instructions here to install nokogiri: http://www.engineyard.com/blog/2010/getting-started-with-nokogiri/ ** Invoke sunspot:reindex (first_time) ** Invoke environment (first_time) ** Execute environment ** Execute sunspot:reindex rake aborted! undefined local variable or method `counter' for #<Class:0x10359aef8> /Library/Ruby/Gems/1.8/gems/activerecord-3.0.3/lib/active_record/base.rb:1008:in `method_missing' /Library/Ruby/Gems/1.8/gems/sunspot_rails-1.2.0/lib/sunspot/rails/searchable.rb:235:in `solr_index' /Library/Ruby/Gems/1.8/gems/activerecord-3.0.3/lib/active_record/relation/batches.rb:71:in `find_in_batches' /Library/Ruby/Gems/1.8/gems/activerecord-3.0.3/lib/active_record/base.rb:440:in `__send__' /Library/Ruby/Gems/1.8/gems/activerecord-3.0.3/lib/active_record/base.rb:440:in `find_in_batches' /Library/Ruby/Gems/1.8/gems/sunspot_rails-1.2.0/lib/sunspot/rails/searchable.rb:234:in `solr_index' /Library/Ruby/Gems/1.8/gems/sunspot_rails-1.2.0/lib/sunspot/rails/searchable.rb:184:in `solr_reindex' /Library/Ruby/Gems/1.8/gems/sunspot_rails-1.2.0/lib/sunspot/rails/tasks.rb:57 /Library/Ruby/Gems/1.8/gems/sunspot_rails-1.2.0/lib/sunspot/rails/tasks.rb:56:in `each' /Library/Ruby/Gems/1.8/gems/sunspot_rails-1.2.0/lib/sunspot/rails/tasks.rb:56 /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' /Library/Ruby/Gems/1.8/gems/rake-0.8.7/bin/rake:31 /usr/bin/rake:19:in `load' /usr/bin/rake:19 This is what I have in the class I'd like to search: searchable do text :fname text :mname text :lname, :default_boost => 2 end Any help would be greatly appreciated!

    Read the article

  • Using Parallel Extensions In Web Applications

    - by Greg
    I'd like to hear some opinions as to what role, if any, parallel computing approaches, including the potential use of the Parallel Extensions (the June CTP, for example), have in web applications. What scenarios does this approach fit and/or not fit? My understanding of how exactly IIS and web browsers thread tasks is fairly limited, and I would appreciate some insight on that if someone out there has a good understanding. I'm more curious to know whether the way IIS and web browsers work limits the ROI of creating threaded and/or asynchronous tasks in web applications in general. Thanks in advance.

    Read the article

  • source control when starting up a new project

    - by jaras
    When starting a project and using source control, I find it hard to separate the things people are working on, so they either write duplicate code or disagree about what something should be named, and so on. This problem diminishes over time because the general foundation falls into place and it becomes easier to separate tasks so they don't overlap as much. How do you manage working with source control in this beginning phase? EDIT: I can see that this doesn't really have anything to do with source control as such, but it becomes more apparent when you have source control too. So the question becomes more along the lines of: how do you manage to separate the tasks so they don't overlap too much? I think it's really hard, and I haven't really seen much about how to do it.

    Read the article

  • Where would you document standardized complex data that is passed between many objects and methods?

    - by Eli
    Hi All, I often find myself with fairly complex data that represents something that my objects will be working on. For example, in a task-list app, several objects might work with an array of tasks, each of which has attributes, temporal expressions, sub tasks and sub sub tasks, etc. One object will collect data from web forms, standardize it into a format consumable by the class that will save them to the database, another object will pull them from the database, put them in the standard format and pass them to the display object, or the update object, etc. The data itself can become a fairly complex series of arrays and sub arrays, representing a 'task' or list of tasks. For example, the below might be one entry in a task list, in the format that is consumable by the various objects that will work on it. Normally, I just document this in a file somewhere with an example. However, I am thinking about the best way to add it to something like PHPDoc, or another standard doc system. Where would you document your consumable data formats that are for many or all of the objects / methods in your app? Array ( [Meta] => Array ( //etc. ) [Sched] => Array ( [SchedID] => 32 [OwnerID] => 2 [StatusID] => 1 [DateFirstTask] => 2011-02-28 [DateLastTask] => [MarginMonths] => 3 ) [TemporalExpressions] => Array ( [0] => Array ( [type] => dw [TemporalExpID] => 3 [ord] => 2 [day] => 6 [month] => 4 ) [1] => Array ( [type] => dm [TemporalExpID] => 32 [day] => 28 [month] => 2 ) ) [Task] => Array ( [SchedTaskID] => 32 [SchedID] => 32 [OwnerID] => 2 [UserID] => 5 [ClientID] => 9 [Title] => Close Prior Year [Body] => [DueTime] => ) [SubTasks] => Array ( [101] => Array ( [SchedSubTaskID] => 101 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Profit and Loss by Class [Body] => [DueDiff] => 0 ) [102] => Array ( [SchedSubTaskID] => 102 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Balance Sheet [Body] => [DueDiff] => 0 ) [103] => Array ( [SchedSubTaskID] => 103 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Current Year for Prior Year Expenses to Accrue [Body] => Look at Journal Entries that are templates as well. [DueDiff] => 0 ) [104] => Array ( [SchedSubTaskID] => 104 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Review Prior Year Membership from 11/1 - 12/31 to Accrue to Current Year [Body] => [DueDiff] => 0 ) [105] => Array ( [SchedSubTaskID] => 105 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Enter Vacation Accrual [Body] => [DueDiff] => 0 ) [106] => Array ( [SchedSubTaskID] => 106 [ParentST] => 105 [RootT] => 32 [UserID] => 2 [Title] => Email Peter requesting Vacation Status of Employees at Year End [Body] => We need Employee Name, Rate and Days of Vacation left to use. We also need to know if the employee used any of the prior year's vacation. [DueDiff] => 43 ) [107] => Array ( [SchedSubTaskID] => 107 [ParentST] => [RootT] => 32 [UserID] => 2 [Title] => Grants Receivable at Year End [Body] => [DueDiff] => 0 ) [108] => Array ( [SchedSubTaskID] => 108 [ParentST] => 107 [RootT] => 32 [UserID] => 2 [Title] => Email Peter Requesting if there were and Grants Receivable at year end [Body] => [DueDiff] => 43 ) ) )

    Read the article

  • HDFS: some datanodes of the cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer for running Hadoop (ver. 0.21). Some datanodes of the cluster are suddenly disconnected while I am running MapReduce code on 10 GB of data. After all mappers finished and around 80% of the reducers had been processed, one or more datanodes randomly disconnected from the network, and then the other datanodes started to disappear from the network even after I killed the MapReduce job on noticing that some datanode was disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turning off the firewalls on all computing nodes, disabling SELinux and increasing the open-file limit to 20000, but none of it worked. Does anyone have an idea how to solve this problem? The following is the error log from MapReduce: 12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427) and the following are logs from a datanode: 2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010 2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010 2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257) at java.lang.Thread.run(Thread.java:722) 2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453 2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. 
ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010] at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246) at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164) at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284) at java.lang.Thread.run(Thread.java:722) hdfs-site.xml <configuration> <property> <name>dfs.name.dir</name> <value>/home/hadoop/data/name</value> </property> <property> <name>dfs.data.dir</name> <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value> </property> <property> <name>dfs.replication</name> <value>3</value> </property> <property> <name>dfs.datanode.max.xcievers</name> <value>4096</value> </property> <property> <name>dfs.http.address</name> <value>0.0.0.0:20070</value> <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:20075</value> <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.secondary.http.address</name> <value>0.0.0.0:20090</value> <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:20010</value> <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port. </description> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:20020</value> <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port. 
</description> </property> <property> <name>dfs.datanode.https.address</name> <value>0.0.0.0:20475</value> </property> <property> <name>dfs.https.address</name> <value>0.0.0.0:20470</value> </property> </configuration> mapred-site.xml <configuration> <property> <name>mapred.job.tracker</name> <value>masternode:29001</value> </property> <property> <name>mapred.system.dir</name> <value>/home/hadoop/data/mapreduce/system</value> </property> <property> <name>mapred.local.dir</name> <value>/home/hadoop/data/mapreduce/local</value> </property> <property> <name>mapred.map.tasks</name> <value>32</value> <description> default number of map tasks per job.</description> </property> <property> <name>mapred.tasktracker.map.tasks.maximum</name> <value>4</value> </property> <property> <name>mapred.reduce.tasks</name> <value>8</value> <description> default number of reduce tasks per job.</description> </property> <property> <name>mapred.map.child.java.opts</name> <value>-Xmx2048M</value> </property> <property> <name>io.sort.mb</name> <value>500</value> </property> <property> <name>mapred.task.timeout</name> <value>1800000</value> <!-- 30 minutes --> </property> <property> <name>mapred.job.tracker.http.address</name> <value>0.0.0.0:20030</value> <description> 50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>mapred.task.tracker.http.address</name> <value>0.0.0.0:20060</value> <description> 50060 </property> </configuration>

    Read the article

  • Learning SQL White hat Hacking

    - by user301751
    Well, here goes a slightly awkward question. I have changed job roles from network admin to SQL Server DBA and thus have to learn SQL Server 2005. I am quite self-motivated and have learned the basics of Transact-SQL and a little about Reporting Services. The only thing is, I need to set up scenarios for myself, as there's not much coming in at work in the way of SQL tasks. I have always kept up my interest in networking by setting myself little "hacking tasks"; I have had a look at some crackmes but can find nothing to play with. I understand that SQL injection is some sort of SQL hack, but I have found little on the subject. I know my way of learning might be a bit different from others, but it is all white hat and keeps my interest. Thanks

    Read the article

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that's grown both in terms of user base and functionality to the point where it's becoming evident that some of the admin tasks should be separate from the public website, and I was wondering what the best way to do this would be. For example, the site has a large social component to it, and a public sales interface. But at the same time, there are back-office tasks, bulk upload processing, dashboards (with long-running queries), and customer relations tools in the admin section that I would like not to be affected by spikes in public traffic (or to affect the public-facing response time). The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications? Some strategies that I'm evaluating: 1) Create a slave database of the public-facing database on another machine. Extract all of the model and library code so that it can be shared between the applications. Create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure that it's supposed to be used this way (most of the time I've seen it, it's been for scaling out the read capabilities of the same application rather than having multiple different ones). I'm also worried about the potential for latency issues if the slave is not on the same network. 2) Create new, more task/department-specific applications and use message-oriented middleware to integrate them. I read Enterprise Integration Patterns a while back and it seemed to advocate this for distributed systems. (Alternatively, in some cases the basic Rails-style RESTful API functionality might suffice.) But I have nightmares about data synchronization issues and the massive re-architecting this would entail. 3) Some mixture of the two. For example, the only public information necessary for some of the back-office tasks is a read-only completion time or status. Would it make sense to have that on a completely separate system and send the data to the public site? Meanwhile, the user/group admin functionality would run on a separate system sharing the database? The downside is that this seems to keep many of the concerns I have with the first two, especially the re-architecting. I'm sure the answers are going to be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.

    Read the article

  • Return database messages on successful SQL execution when using ADO

    - by peacedog
    I'm working on a legacy VB6 app here at work, and it has been a long time since I've looked at VB6 or ADO. One thing the app does is execute SQL tasks and then output the success/failure into an XML file. If there is an error, it inserts the error text into the task node. What I have been asked to do is try to do the same with the other mundane messages that result from successfully executed tasks, like "(323 row(s) affected)". There is no Command object being used; it's just an ADODB.Connection object. Here is the gist of the code: Dim sqlStatement As String Set sqlStatement = /* sql for task */ Dim sqlConn As ADODB.Connection Set sqlConn = /* connection magic */ sqlConn.Execute sqlStatement, , adExecuteNoRecords What is the best way for me to capture the non-error messages so I can output them? Or is it even possible?

    Read the article

  • Are there C# controls that can be used to create a hierarchical list of prioritised items?

    - by Mendokusai
    I need to be able to display and edit a hierarchical list of tasks in a C# app. It can either be a Windows form app, or ASP.NET. Basically, I want similar behaviour to the way Microsoft Project handles tasks. The control would need to: 1) Maintain a list of items made up of several fields 2) Each item can have a number of children (at least 3 levels of nesting) 3) It needs to be very easy to change the parents/children of an item 4) It needs to be very easy to edit the fields (as fast as changing cells in Excel) 5) It needs to be very easy to reorder the items by dragging and dropping or cut and paste 6) If I can easily connect the control to a database, even better Before I go and create something manually, I'm wondering if there is something available already?

    Read the article

  • Keep a reference to an NSThread around and message its objects?

    - by RickiG
    Hi, I am a bit uncertain about how to do this: I start a "worker thread" that runs for the duration of my app's life. [NSThread detachNewThreadSelector:@selector(updateModel) toTarget:self withObject:nil]; then - (void) updateModel { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; BackgroundUpdate *update = [[BackgroundUpdate alloc] initWithTimerInterval:5]; [[NSRunLoop currentRunLoop] run]; //keeps it going 'forever' [update release]; [pool release]; } Now the thread "wakes up" every 5 seconds (initWithTimerInterval) to see if there are any tasks it can do. All the tasks in the BackgroundUpdate class are only time-dependent for now. I would like to have a few that are "event-dependent", e.g. I would like to call the background object from my main thread and tell it to "speedUp", "slowDown", "reset" or any method on the object. To do this I guess I need something like performSelector:onThread:, but how do I get a reference to the NSThread and the background object?

    Read the article

  • How can I make Ruby rake display the full backtrace on uncaught exception

    - by Martinos
    As you may know, rake swallows the full backtrace on an uncaught exception. If I want a full backtrace, I need to add the --trace option. I find this very annoying because some of my tasks take a long time to run (up to 6 hours); when one crashes I don't have any debugging info, and I need to run it again with --trace. On top of that, the system may not be in the same state as when the error occurred, so the error may not show up afterward. So I always have to add --trace to every task, which displays stuff that I don't want to see when the task executes normally. Is there a way to change this default behaviour (which I don't think is useful at all)?

    Read the article

  • What is the difference between these task definition syntaxes in gradle?

    - by bergyman
    A) task build << { description = "Build task." ant.echo('build') } B) task build { description = "Build task." ant.echo('build') } I notice that with type B, the code within the task seems to be executed when typing gradle -t: ant echoes out 'build' even when just listing all the available tasks, and the description is actually displayed. With type A, however, no code is executed when listing the available tasks, and the description is not displayed when executing gradle -t. The docs don't seem to go into the difference between these two syntaxes (that I've found), only that you can define a task either way.

    Read the article

  • An Actor "queue" ?

    - by synic
    In Java, to write a library that makes requests to a server, I usually implement some sort of dispatcher (not unlike the one found here in the Twitter4J library: http://github.com/yusuke/twitter4j/blob/master/twitter4j-core/src/main/java/twitter4j/internal/async/DispatcherImpl.java) to limit the number of connections, to perform asynchronous tasks, etc. The idea is that N number of threads are created. A "Task" is queued and all threads are notified, and one of the threads, when it's ready, will pop an item from the queue, do the work, and then return to a waiting state. If all the threads are busy working on a Task, then the Task is just queued, and the next available thread will take it. This keeps the max number of connections to N, and allows at most N Tasks to be operating at the same time. I'm wondering what kind of system I can create with Actors that will accomplish the same thing? Is there a way to have N number of Actors, and when a new message is ready, pass it off to an Actor to handle it - and if all Actors are busy, just queue the message?
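
    For reference, the dispatcher behaviour described above (N workers, tasks queued whenever all workers are busy) is exactly what a fixed-size thread pool gives you out of the box in plain java.util.concurrent; a minimal Java sketch follows (the task body and counts are hypothetical), and this is the behaviour an actor-based version would need to reproduce:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class Dispatcher {
            public static void main(String[] args) {
                int n = 4; // at most N tasks run concurrently; the rest wait in the queue
                ExecutorService pool = Executors.newFixedThreadPool(n);

                for (int i = 0; i < 20; i++) {
                    final int taskId = i;
                    // submit() enqueues the task; the next free worker thread picks it up
                    pool.submit(() -> System.out.println("handling request " + taskId
                            + " on " + Thread.currentThread().getName()));
                }

                pool.shutdown(); // stop accepting new work and let the queued tasks drain
            }
        }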

    Read the article

  • How to update assembly version info with new build and revision during build?

    - by hrushikesh
    I have to update the build number in the assembly version in the AssemblyInfo.cs file. I have written a custom task which updates all the AssemblyInfo.cs files under a solution before starting the build. But when I change these files and try to build, some of my DLLs that reference other DLLs fail to compile because they can't find the specific assembly version. I also have some files that use strong-named assemblies, and I'm not sure how to update their version. I have tried setting Specific Version to false, but the same error still occurs. Can anybody tell me a good way to update AssemblyInfo.cs with an incremental build number? P.S. I am using NAnt tasks to automate my builds.

    Read the article

  • Exception while trying to deserialize JSON into EntityFramework using JavaScriptSerializer

    - by Barak
    I'm trying to deserialize JSON which I'm getting from an external source into an Entity Framework entity class using the following code: var serializer = new JavaScriptSerializer(); IList<Feature> obj = serializer.Deserialize<IList<Feature>>(json); The following exception is thrown: Object of type 'System.Collections.Generic.List`1[JustTime.Task]' cannot be converted to type 'System.Data.Objects.DataClasses.EntityCollection`1[JustTime.Task]'. My model is simple: the Feature class has a one-to-many relation to the Tasks class. The problem appears to be that the deserializer is trying to create a generic List to hold the collection of tasks instead of an EntityCollection. I've tried implementing a JavaScriptConverter which would handle System.Collections.Generic.List, but it didn't get called by the deserializer.

    Read the article
