Search Results

Search found 21998 results on 880 pages for 'custom msbuild task'.

  • Anatomy of a .NET Assembly - Signature encodings

    - by Simon Cooper
    If you've just joined this series, I highly recommend you read the previous posts in this series, starting here, or at least these posts, covering the CLR metadata tables. Before we look at custom attribute encoding, we first need to have a brief look at how signatures are encoded in an assembly in general.

    Signature types

    There are several types of signatures in an assembly, all of which share a common base representation, and all of which are stored as binary blobs in the #Blob heap, referenced by an offset from various metadata tables. The types of signatures are:

    - Method definition and method reference signatures.
    - Field signatures.
    - Property signatures.
    - Method local variables. These are referenced from the StandAloneSig table, which is then referenced by method body headers.
    - Generic type specifications. These represent a particular instantiation of a generic type.
    - Generic method specifications. Similarly, these represent a particular instantiation of a generic method.

    All these signatures share the same underlying mechanism to represent a type.

    Representing a type

    All metadata signatures are based around the ELEMENT_TYPE structure. This assigns a number to each 'built-in' type in the framework; for example, Uint16 is 0x07, String is 0x0e, and Object is 0x1c. Byte codes are also used to indicate SzArrays, multi-dimensional arrays, custom types, and generic type and method variables. However, these require some further information.

    Firstly, custom types (i.e. not one of the built-in types): these require you to specify the 4-byte TypeDefOrRef coded token after the CLASS (0x12) or VALUETYPE (0x11) element type. This 4-byte value is stored in a compressed format before being written out to disk (for more excruciating details, you can refer to the CLI specification). SzArrays simply have the array item type after the SZARRAY byte (0x1d). Multidimensional arrays follow the ARRAY element type with a series of compressed integers indicating the number of dimensions, and the size and lower bound of each dimension. Generic variables are simply followed by the index of the generic variable they refer to. There are other additions as well; for example, a specific byte value indicates a method parameter passed by reference (BYREF), and other values indicate custom modifiers.

    Some examples...

    To demonstrate, here are a few examples and what the resulting blobs in the #Blob heap will look like. Each name in capitals corresponds to a particular byte value in the ELEMENT_TYPE or CALLCONV structure, and coded tokens to custom types are represented by the type name in curly brackets.

    A simple field:

        int intField;
        FIELD I4

    A field of an array of a generic type parameter (assuming T is the first generic parameter of the containing type):

        T[] genArrayField
        FIELD SZARRAY VAR 0

    An instance method signature (note how the number of parameters does not include the return type):

        instance string MyMethod(MyType, int&, bool[][]);
        HASTHIS DEFAULT 3 STRING CLASS {MyType} BYREF I4 SZARRAY SZARRAY BOOLEAN

    A generic type instantiation:

        MyGenericType<MyType, MyStruct>
        GENERICINST CLASS {MyGenericType} 2 CLASS {MyType} VALUETYPE {MyStruct}

    For a more complicated example, in the following C# type declaration:

        GenericType<T> : GenericBaseType<object[], T, GenericType<T>> { ... }

    the Extends field of the TypeDef for GenericType will point to a TypeSpec with the following blob:

        GENERICINST CLASS {GenericBaseType} 3 SZARRAY OBJECT VAR 0 GENERICINST CLASS {GenericType} 1 VAR 0

    And a static generic method signature (generic parameters on types are referenced using VAR, generic parameters on methods using MVAR):

        TResult[] GenericMethod<TInput, TResult>(TInput, System.Converter<TInput, TResult>);
        GENERIC 2 2 SZARRAY MVAR 1 MVAR 0 GENERICINST CLASS {System.Converter} 2 MVAR 0 MVAR 1

    As you can see, complicated signatures are recursively built up out of quite simple building blocks to represent all the possible variations in a .NET assembly. Now that we've looked at the basics of normal method signatures, in my next post I'll look at custom attribute application signatures, and how they differ from normal signatures.
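    To make the compression step concrete, here is a minimal sketch of the unsigned-integer compression scheme the CLI specification (ECMA-335, partition II) describes; the class and method names are illustrative, not from the original post:

        using System;

        static class BlobWriter
        {
            // Compressed unsigned integers, as used for coded tokens and
            // array dimension counts in signature blobs:
            //   <= 0x7F        -> 1 byte:  0bbbbbbb
            //   <= 0x3FFF      -> 2 bytes: 10bbbbbb bbbbbbbb
            //   <= 0x1FFFFFFF  -> 4 bytes: 110bbbbb bbbbbbbb bbbbbbbb bbbbbbbb
            public static byte[] CompressUInt(uint value)
            {
                if (value <= 0x7F)
                    return new[] { (byte)value };
                if (value <= 0x3FFF)
                    return new[] { (byte)(0x80 | (value >> 8)), (byte)value };
                if (value <= 0x1FFFFFFF)
                    return new[]
                    {
                        (byte)(0xC0 | (value >> 24)), (byte)(value >> 16),
                        (byte)(value >> 8), (byte)value
                    };
                throw new ArgumentOutOfRangeException("value");
            }
        }

    The same compression is applied to the TypeDefOrRef coded token before it is written after a CLASS or VALUETYPE byte.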

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage:

    - Frequent local commits. This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks. It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points. A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally. There should be several of these in any given day. 2 hours is a good indicator that you might not be leveraging the power of frequent local commits. Once you have verified a set of changes works, save them away; otherwise you run the risk of introducing bugs into it when working on the next task.

    - The notion of a task. By task I mean a related set of changes that can be completed in a few hours or less. In the same token, don't make your tasks so small that critically related changes aren't grouped together. Use your intuition and the rest of these principles and I think you will find what is comfortable for you.

    - Partial commits. Sometimes one task explodes or unknowingly encompasses other tasks; at this point, try to get to a stopping point on part of the work you are doing and commit it so you can get that out of the way to focus on the remainder. This will often entail committing part of the work and continuing on the rest.

    - Outstanding changes as a guide. If you don't commit often it might mean you are not leveraging your version control history to help guide your work. It's a great way to see what has changed and might be causing problems. The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed. This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two-way diff with SmartGit) of the current selected file and a commit message all in one window that I keep maximized on one monitor at all times.

    - Throw away / stash commits. There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand. If you do not commit often you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors. I find myself doing this about once a week, especially when doing exploratory re-factoring. It's much easier if I can just revert all outstanding changes.

    - Sync with the central repository daily. The rest of us depend on your changes. Don't let them sit on your computer longer than they have to. Waiting increases the chances of a merge conflict, which just decreases productivity. It also prohibits us from doing deploys when people say they are done but have not merged centrally. This should be done daily! Find a way to partition the work you are doing so that you can sync at least once daily.

    Things I discourage:

    - Lots of partial commits right at the end of a series of changes. If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit. Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever-growing list of changes.

    - Committing single files. Committing single files means you waited too long and no longer understand all the changes involved. It may mean there were overlapping changes in single files that cannot be isolated. In either case, go back to the suggestions above to avoid this. Committing frequently does not mean committing frequently right at the end of a day's work. It should be spaced out over the course of several tasks, not all at the end in a 5-minute window.

  • Algorithm for tracking progress of controller method running in background

    - by SilentAssassin
    I am using the Codeigniter framework for PHP on the Windows platform. My problem is that I am trying to track the progress of a controller method running in the background. The controller extracts data from the database (MySQL), then does some processing, and then stores the results again in the database. The complete aforesaid process can be considered a single task. A new task can be assigned while another task is running; the newly assigned task will be added to a queue. So if I can track the progress of the controller, I can show a status for each of these tasks: "Pending" for tasks in the queue, "In Progress" for tasks running, and "Done" for tasks that are completed.

    Main issue: first I need to find an algorithm to track how much of the controller method has completed execution. For instance, this PHP script tracks the progress of an array being counted. There, the current state and the state after total execution are both known, so it is possible to track progress. But I am not able to devise anything analogous to it in my case. Maybe what I am trying to achieve is programmatically not possible. If it's not possible, then suggest me a workaround or a completely new approach. If some details are pending you can mention them. Sorry for my ignorance; this is my first post here. I welcome you to point out my mistakes.

    EDIT: Database outline: The URL(s) and keyword(s) are first entered by the user and stored in database tables called link_master and keyword_master respectively. Then keywords are extracted from all the links present in this table, compared with the keywords entered by the user, and their frequency is calculated, which is the final result. The results are stored in another table called link_result. Now sub-links are extracted from the domain links and stored in a table called sub_link_master. Again the keywords are extracted from these sub-links and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand, as the number of links on any web page can differ. Only the cardinality of the link_result table can be known, which will be equal to the number of keyword(s) multiplied by the number of URL(s). I insert multiple records at a time using this resource.

    Controller outline: The controller extracts keywords from a web page and also extracts keywords from all the links present on that page. There is a method called crawlLink. I used Rolling Curl to extract keywords and web page content; it has a callback function which I used for extracting keywords along with generating results and extracting valid sub-links. There is an insertResult method which stores results for links and sub-links in the respective tables. Yes, the processing depends on the number of records; the more records, the more time it takes to execute. Consider this scenario:

    - Number of domain links = 1
    - Number of keywords = 3
    - Number of domain link results generated = 3 (3 x 1 as described in the question)
    - Number of sub-links generated = 41
    - Number of sub-link results = 117 (41 x 3 = 123, but some links are not valid or searchable)
    - Approximate time taken for the above process to complete = 55 seconds

    The above result is for a single link. I want to track the progress of the above results getting stored in the database. When all results are stored, the task is complete; while results are getting stored, the task is in progress. I am not clear how I can track this progress.

  • Drupal vs ExpressionEngine for any kind of project from simple commercial site to complex ecommerce

    - by artmania
    Hi friends... So far I've been using a custom CMS; lately I developed my own CMS with CodeIgniter, and I'm actually happy with it. But recently I've been taking on more design and front-end development work than deep development projects, and I actually prefer it that way. I have many things to do with a custom CMS, including some security issues, etc. I'm kind of tired of doing everything custom, and I also want to give more time to my family... So I'm seriously considering going for a ready-made CMS, and developing custom plugins when a project needs something specific. This CMS should be very flexible, able to implement any layout, and also secure (since I had some hacking problems with my custom CMS!). I googled a lot about this. As a result, two options:

    - Drupal
    - ExpressionEngine

    Whether it's open source or licensed is not an issue for me at all. I just want to go for a CMS that I can use for any kind of project, from simple 4-5 page company sites to complicated projects like hotel directories, ecommerce portals, etc. As I found out, EE is more user-friendly and doesn't make implementing a custom layout as much of a hassle as Drupal does. Also, EE uses CodeIgniter, which I'm familiar with. On the other hand, I found out that Drupal is 10000% flexible -- we can do anything with it (it requires good PHP knowledge) -- extremely powerful, with many plugins... So I can't decide!! I want to go for a CMS that I will use for many years from now on, with no problems implementing any kind of project. So which one do you recommend? Appreciate your help! Thanks a lot...

    Edited: http://expressionengine.com/ee2_sneak_preview/#cost -- is this Commercial License of $299.95 for one setup? So I need to purchase a new licence for each project? Is there nothing like: pay once, and use the CMS for as many projects as I want?

  • Two Linked CSS Files Not working

    - by Oscar Godson
        <head>
        <title>Water Bureau</title>
        <link rel="shortcut icon" href="/favicon.ico" />
        <link rel="stylesheet" href="/css/main.css" type="text/css" media="screen" title="Main CSS">
        <link rel="stylesheet" href="custom.css" type="text/css" media="screen" title="Custom Styles">
        <script language="JavaScript" src="/shared/js/createWin.js" type="text/javascript"></script>
        </head>

    That's the code, but "custom.css" isn't working. It doesn't work at all. If we add an @import into main.css, or into the header instead of a link element, it works fine though. Any ideas? We also got rid of the media="screen" on both as well. The CSS inside of it is fine; it's just that when we stack those two, the first one parsed is the only one read by the browser. So if we swap main below custom, custom then works but NONE of main works. And custom.css just has a snippet of CSS and doesn't override all the CSS in main, just one element. I can't figure it out! We have other linked stylesheets in the head (which we took out while trying to fix this) and they worked fine...

  • WCF Message Debugging on WebHttpBehavior

    - by Programming Hero
    I've created a custom binding in WCF for a custom MessageEncoder to allow messages to be written as XML using a wider range of encodings than WCF supports out of the box. The encoder appears to be working and I am able to send and receive messages, but I want to verify that the XML message being written is exactly as required by the service I am trying to consume. I've turned on message logging for WCF using the diagnostic trace listeners to output the messages sent and received over the wire to a log file. Unfortunately, for calls using my encoder, the message body is displayed as ... stream ...

    EDIT: I don't think it's anything to do with my custom encoding. I have experimented with my custom binding a little, switching to using the built-in text encoding and http transport. I still don't get a message body logged in the message trace.

    EDIT2: Having done further investigation, the issue looks to be related not to the custom binding, but to the custom behaviour. I'm applying the <webHttp/> behaviour. Once this is specified (along with manual addressing), the tracing behaviour shows up. Is this a known issue with WebHttpBehavior? Am I still barking up the wrong tree?
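    One way to capture the full body regardless of how the transport logs it is a message inspector that buffers each message before reading it; an unbuffered (streamed) message prints only "... stream ..." from ToString(), which is likely what the trace listener is hitting. A minimal client-side sketch follows; attaching it still requires an IEndpointBehavior, omitted here, and the logging target is illustrative:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;

        public class LoggingInspector : IClientMessageInspector
        {
            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                // Buffer the message so reading it does not consume the body stream.
                MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
                Console.WriteLine(buffer.CreateMessage().ToString());
                request = buffer.CreateMessage();   // hand WCF a fresh, unread copy
                return null;
            }

            public void AfterReceiveReply(ref Message reply, object correlationState)
            {
                MessageBuffer buffer = reply.CreateBufferedCopy(int.MaxValue);
                Console.WriteLine(buffer.CreateMessage().ToString());
                reply = buffer.CreateMessage();
            }
        }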

  • Android - need UI help/advice

    - by Donal Rafferty
    I have been working on Android for the past couple of months, getting to know how the various components work. One area where I am completely lacking in knowledge is any sort of user interface or graphical interface creation. As an exercise I have been asked to break down the HTC call screen into the components it contains and rebuild it as closely as possible. Here is a picture of the HTC call screen:

    [screenshot of the HTC call screen -- not included here]

    From my understanding, the above UI has a custom title bar where "Meteor" and the call time appear. Then there is the main image in the middle block, along with a text view showing the called party, in this case "Voice Mail", and the number. The bottom is then a custom view, maybe with three custom buttons used within it. Would I be correct in my above assumptions? So the parts I should look into programming first are a custom title bar and a custom view with three custom buttons to place at the bottom? What layout would be recommended? I hope this question is seen as relevant to Stack Overflow; if it is not then I will delete it. Thanks in advance

  • Downloading jQuery UI: Ok, so what part of this mess do I copy to the server?

    - by Martha
    From the "should be simple, but..." files: Trying to get started with jQuery UI. Went to the site, used their custom builder thingy to assemble the parts I need, made myself a custom theme using the Theme Roller, downloaded the zip file thus produced, unzipped it on my local drive. Ok, so I have 37 folders, 311 files, and a total of 2.4 MB. Ain't no way in hell all this is going on the server. What parts do I need to put there? 'css' 'custom-theme': jquery-ui-1.8.custom.css, 'images' subfolder with 12 .png images 'development-bundle' 'demos': demos.css, index.html, plus 18 subfolders, but I'm guessing "not needed" 'docs': 17 .html files, but again, I'm guessing "not needed" 'external': 4 .js files, one .css 'themes': 'base' and 'custom-theme' subfolders, each with 8 or 9 .css files and an 'images' subfolder with about a dozen images 'ui': 25 .js files, an 'i18n' subfolder with 53 .js files, and a 'minified' subfolder with 24 .js files 'js': jquery-1.4.2.min.js and jquery-ui-1.8.custom.min.js Also, the file structure. Our server is set up something like this: root admin (administrative tools) css forms (the gist of the site lives here) images include (asp code snippets that are used by multiple pages) js (just a few things right now, like an ancient wheezing spelling checker) As far as I can tell, the jQuery css files assume that (1) each theme is in its own folder, and (2) each folder has its own images subfolder. How can I convince it otherwise? i.e. put the necessary .js files in the 'js' folder, the .css files in the 'css' folder, and the images in the 'images' folder?

  • Core Data @sum aggregate

    - by nasim
    I am getting an exception when I try to get @sum on a column in an iPhone Core Data application. My two models are the following.

    Task model:

        @interface Task : NSManagedObject {
        }
        @property (nonatomic, retain) NSString * taskName;
        @property (nonatomic, retain) NSSet * completion;
        @end

        @interface Task (CoreDataGeneratedAccessors)
        - (void)addCompletionObject:(NSManagedObject *)value;
        - (void)removeCompletionObject:(NSManagedObject *)value;
        - (void)addCompletion:(NSSet *)value;
        - (void)removeCompletion:(NSSet *)value;
        @end

    Completion model:

        @interface Completion : NSManagedObject {
        }
        @property (nonatomic, retain) NSNumber * percentage;
        @property (nonatomic, retain) NSDate * time;
        @property (nonatomic, retain) Task * task;
        @end

    And here is the fetch:

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        request.entity = [NSEntityDescription entityForName:@"Task" inManagedObjectContext:context];
        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"taskName" ascending:YES];
        request.sortDescriptors = [NSArray arrayWithObject:sortDescriptor];
        NSError *error;
        NSArray *results = [context executeFetchRequest:request error:&error];
        NSArray *parents = [results valueForKeyPath:@"taskName"];
        NSArray *children = [results valueForKeyPath:@"completion.@sum.percentage"];
        NSLog(@"%@ %@", parents, children);
        [request release];
        [sortDescriptor release];

    The exception is thrown at the fourth line from the bottom. The thrown exception is:

        *** -[NSCFSet decimalValue]: unrecognized selector sent to instance 0x3b25a30
        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSCFSet decimalValue]: unrecognized selector sent to instance 0x3b25a30'

    I would very much appreciate any kind of help. Thanks.

  • Java problem with multiple threads when executing a runnable jar file

    - by Spi1988
    I have developed a Java Swing application which uses the SwingWorker class to perform some long-running tasks. When the application is run from the IDE (NetBeans), I can start multiple long-running tasks simultaneously without any problem. I created a runnable jar file for the application in order to be able to run it from outside the IDE. The application, when run from this jar file, works well with the only exception that it doesn't allow me to start two long-running tasks simultaneously. When I start the first task (assume it takes 2 minutes to complete), everything works fine; the UI does not freeze (it never freezes). However, when I try to run another task (assume it takes just 10 seconds, so it should finish before the first task) while the first task has not yet completed, nothing seems to happen. In reality, the second task would have started, and also finished its processing, but its results are only displayed once the first task completes. I don't know why this is happening. Is there some restriction on the number of threads which can run simultaneously on the JVM? Are there any JVM arguments I could try to solve this problem? I hope I explained my problem well. Thanks in advance, Peter Bartolo

  • .Net Remoting: Serialize Object and implementation

    - by flogo
    Hi, in my scenario there is a client-side assembly that contains a class (Task). This class implements an interface (ITask) that is known on the server. I'm trying to send a Task object from client to server without copying the assembly of the client manually to the server. If I just serialize the Task object, the server obviously complains about the missing assembly. I then tried to serialize typeof(Task).Assembly but could not deserialize it on the server. Next I tried File.ReadAllBytes(typeof(Task).Assembly.Location) and saved it to a temporary file on the server, which threw an exception on Assembly.LoadFrom(@".\temporary.dll");

    Why am I doing this? Java RMI has a neat feature to request the implementation of an object that is received through remoting but is still "unknown" (this JVM doesn't have the *.class file). This can be used for a compute server that just knows the interface of a "task" containing a run() method and downloads the implementation of this method on demand. This way the server doesn't have to be changed for new tasks. I'm trying to achieve something like this in .NET.
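    One avenue worth trying before writing the bytes to disk at all: Assembly.Load has an overload that takes the raw image as a byte array, so the client can ship the bytes alongside the serialized Task and the server can load them straight from memory. A minimal sketch, assuming ITask exposes a Run method and the type name travels with the payload (both assumptions, not from the original post):

        using System;
        using System.Reflection;

        public interface ITask { void Run(); }   // assumed shape of the shared interface

        public static class TaskLoader
        {
            // Server side: materialize the client's Task from the raw assembly
            // bytes it sent along with the serialized object. No temp file is
            // written, so there is nothing for LoadFrom to choke on.
            public static ITask Load(byte[] assemblyImage, string typeName)
            {
                Assembly asm = Assembly.Load(assemblyImage);
                return (ITask)Activator.CreateInstance(asm.GetType(typeName, true));
            }
        }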

  • IMAP protocol support in different email servers

    - by raticulin
    Having to interact with several different email servers via IMAP (using JavaMail), I have found that there is a very different level of support for IMAP features among them. The lack of support for some features has resulted in more development time, more complicated code to deal with the differing support, and worse performance due to not being able to SEARCH, etc. So I would like to get some info on other servers and what level of support they provide. So far I have dealt with Lotus Domino and Novell GroupWise (and to a lesser extent Exchange 2003 and 2007). I am particularly interested in the most used ones on unix/linux (Courier, Cyrus, Dovecot, UW IMAP) and also Zimbra, but feel free to add any you know. Info about online services like Gmail is also welcome.

    Features that I consider (comment if you are interested in others and I'll add them):

    - Custom flags
    - Searching custom flags
    - Searching arbitrary headers
    - Partial fetching
    - Proxy authentication

    And what I have found so far (correct me if I am wrong anywhere):

    Lotus Domino:
    - Custom flags: yes
    - Searching custom flags: yes
    - Searching arbitrary headers: yes
    - Partial fetching: ?
    - Proxy authentication: sort of; you can give some users permission to access other users' mailboxes and they will see them under their '\Other Users' folder

    Novell GroupWise:
    - Custom flags: no
    - Searching custom flags: no
    - Searching arbitrary headers: no
    - Partial fetching: ?
    - Proxy authentication: yes, you can use what is called a Trusted Application

  • How do you deserialize a collection with child collections?

    - by Stuart Helwig
    I have a collection of custom entity objects, one property of which is an ArrayList of byte arrays. The custom entity is serializable and the collection property is marked with the following attributes:

        [XmlArray("Images"), XmlArrayItem("Image", typeof(byte[]))]

    So I serialize a collection of these custom entities and pass them to a web service as a string. The web service receives the string, and the byte arrays are intact. The following code then attempts to deserialize the collection back into custom entities for processing:

        XmlSerializer ser = new XmlSerializer(typeof(List<myCustomEntity>));
        StringReader reader = new StringReader(xmlStringPassedToWS);
        List<myCustomEntity> entities = (List<myCustomEntity>)ser.Deserialize(reader);
        foreach (myCustomEntity e in entities)
        {
            // ...do some stuff...
            foreach (myChildCollection c in e.ChildCollection)
            {
                // ...do some more stuff...
            }
        }

    I've checked the XML resulting from the initial serialization and it does contain the byte arrays -- the child collection -- as does the StringReader built above. After the deserialization process, the resulting collection of custom entities is fine, except that each object in the collection does not contain any items in its child collection (i.e. it doesn't get to "...do some more stuff..." above). Can someone please explain what I am doing wrong? Is it possible to serialize ArrayLists within a generic collection of custom entities?
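    The untyped ArrayList is worth ruling out first: XmlSerializer has no element type to work from beyond what the attributes supply, and a typed collection usually behaves better. Here is a minimal, hypothetical round-trip sketch using List<byte[]> in place of the ArrayList (MyCustomEntity is a stand-in, not the original class):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        public class MyCustomEntity   // hypothetical stand-in for the real entity
        {
            [XmlArray("Images"), XmlArrayItem("Image", typeof(byte[]))]
            public List<byte[]> Images = new List<byte[]>();
        }

        public static class RoundTripDemo
        {
            public static void Main()
            {
                var source = new List<MyCustomEntity>();
                var entity = new MyCustomEntity();
                entity.Images.Add(new byte[] { 1, 2, 3 });
                source.Add(entity);

                var ser = new XmlSerializer(typeof(List<MyCustomEntity>));
                var writer = new StringWriter();
                ser.Serialize(writer, source);

                // Deserialize from the same XML string and check the child items survived.
                var restored = (List<MyCustomEntity>)ser.Deserialize(new StringReader(writer.ToString()));
                Console.WriteLine(restored[0].Images.Count);   // expected: 1
            }
        }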

  • In threads, WaitForMultipleObjects never returns if set to INFINITE

    - by AKN
    Let's say I have three thread handles:

        HandleList[0] = hThread1;
        HandleList[1] = hThread2;
        HandleList[2] = hThread3;
        /* All the above are of type HANDLE */

    Before closing the application, I want the threads to get their tasks done, so I want to make the app wait until the threads complete. So I do:

        WaitForMultipleObjects(3, HandleList, TRUE, INFINITE);

    With this, the threads do complete their tasks, but control never moves to the next line after the call to WaitForMultipleObjects, irrespective of all the threads completing their tasks. If I use some number of milliseconds instead of INFINITE, it comes to the next line after that long, irrespective of whether the threads have completed their tasks or not:

        WaitForMultipleObjects(3, HandleList, TRUE, 10000);

    My problem here is that I can't go with a fixed timeout, as I may not be sure the threads will complete their tasks within the given time. To state my problem in simple words: I want all my threads to finish their tasks before I close my app. How can I achieve that using the WaitForMultipleObjects API?

  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query is doing the following:

        SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date
        INTO [#Gadget]
        FROM task t

        SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client
        FROM [#Gadget]
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

        DROP TABLE [#Gadget]

    (I have removed the complex subquery, because I don't think it's relevant other than to explain why this query has been done as a two-stage process.) Now I would have thought it would be far more efficient to merge this down into a single query using subqueries, as:

        SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID)
        FROM
        (
            SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date
            FROM task t
        ) as sub
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

    This would give the optimiser better information to work out what is going on and avoid any temporary tables. It should be faster. But it turns out it is a lot slower: 8 seconds vs under 5 seconds. I can't work out why this would be the case, as all my knowledge of databases implies that subqueries should always be faster than using temporary tables. Can anyone explain what could be going on?!

  • where to store temporary data in MVC 2.0 project

    - by StuffHappens
    Hello! I'm starting to learn MVC 2.0 and I'm trying to create a site with a quiz: the user is asked a question and given several answer options. If he chooses the right answer he gets some points; if he doesn't, he loses them. I tried to do this the following way:

        public class HomeController : Controller
        {
            private ITaskGenerator taskGenerator = new TaskGenerator();
            private string correctAnswer;

            public ActionResult Index()
            {
                var task = taskGenerator.GenerateTask();
                ViewData["Task"] = task.Task;
                ViewData["Options"] = task.Options;
                correctAnswer = task.CorrectAnswer;
                return View();
            }

            public ActionResult Answer(string id)
            {
                if (id == correctAnswer)
                    return View("Correct");
                return View("Incorrect");
            }
        }

    But I have a problem: when the user answers, the controller class is recreated and I lose the correct answer. So what is the best place to store the correct answer? Should I create a static class for this purpose? Thanks for your help!
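    Since a new controller instance is created for every request, the value has to live somewhere that outlives the controller. A minimal sketch of one common option, session state, reusing the types from the question (the Session key name is made up; TempData would be another candidate, though it only survives one subsequent request):

        using System.Web.Mvc;

        public class HomeController : Controller
        {
            private ITaskGenerator taskGenerator = new TaskGenerator();

            public ActionResult Index()
            {
                var task = taskGenerator.GenerateTask();
                ViewData["Task"] = task.Task;
                ViewData["Options"] = task.Options;
                Session["CorrectAnswer"] = task.CorrectAnswer;   // survives across requests for this user
                return View();
            }

            public ActionResult Answer(string id)
            {
                var correct = Session["CorrectAnswer"] as string;
                return View(id == correct ? "Correct" : "Incorrect");
            }
        }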

  • Celery / Django Single Tasks being run multiple times

    - by felix001
    I'm facing an issue where I'm placing a task into the queue and it is being run multiple times. From the celery logs I can see that the same worker is running the task:

        [2014-06-06 15:12:20,731: INFO/MainProcess] Received task: input.tasks.add_queue
        [2014-06-06 15:12:20,750: INFO/Worker-2] starting runner..
        [2014-06-06 15:12:20,759: INFO/Worker-2] collection started
        [2014-06-06 15:13:32,828: INFO/Worker-2] collection complete
        [2014-06-06 15:13:32,836: INFO/Worker-2] generation of steps complete
        [2014-06-06 15:13:32,836: INFO/Worker-2] update created
        [2014-06-06 15:13:33,655: INFO/Worker-2] email sent
        [2014-06-06 15:13:33,656: INFO/Worker-2] update created
        [2014-06-06 15:13:34,420: INFO/Worker-2] email sent
        [2014-06-06 15:13:34,421: INFO/Worker-2] FINISH - Success

    However, when I view the actual logs of the application, it shows 5-6 log lines for each step (??). I'm using Django 1.6 with RabbitMQ. The method for placing a task into the queue is via calling delay on a function; this function (with the task decorator added) then calls a class which is run. Has anyone any idea on the best way to troubleshoot this?

    Edit: As requested, here's the code.

    views.py -- in my view I'm sending my data to the queue via:

        from input.tasks import add_queue_project

        add_queue_project.delay(data)

    tasks.py:

        from celery.decorators import task

        @task()
        def add_queue_project(data):
            """ run project """
            logger = logging_setup(app="project")
            logger.info("starting project runner..")
            f = project_runner(data)
            f.main()

        class project_runner():
            """ main project runner """

            def __init__(self, data):
                self.data = data
                self.logger = logging_setup(app="project")

            def main(self):
                ....

    settings.py:

        THIRD_PARTY_APPS = (
            'south',          # Database migration helpers
            'crispy_forms',   # Form layouts
            'rest_framework',
            'djcelery',
        )

        import djcelery
        djcelery.setup_loader()

        BROKER_HOST = "127.0.0.1"
        BROKER_PORT = 5672    # default RabbitMQ listening port
        BROKER_USER = "test"
        BROKER_PASSWORD = "test"
        BROKER_VHOST = "test"
        CELERY_BACKEND = "amqp"    # telling Celery to report the results back to RabbitMQ
        CELERY_RESULT_DBURI = ""
        CELERY_IMPORTS = ("input.tasks", )

    The line I'm running to start celery is:

        python2.7 manage.py celeryd -l info

    Thanks,

  • Which way is preferred when doing asynchronous WCF calls?

    - by Mikael Svenson
    When invoking a WCF service asynchronously, there seem to be two ways it can be done.

    1.

        public void One()
        {
            WcfClient client = new WcfClient();
            client.BegindoSearch("input", ResultOne, null);
        }

        private void ResultOne(IAsyncResult ar)
        {
            WcfClient client = new WcfClient();
            string data = client.EnddoSearch(ar);
        }

    2.

        public void Two()
        {
            WcfClient client = new WcfClient();
            client.doSearchCompleted += TwoCompleted;
            client.doSearchAsync("input");
        }

        void TwoCompleted(object sender, doSearchCompletedEventArgs e)
        {
            string data = e.Result;
        }

    And with the new Task<T> class we have an easy third way, by wrapping the synchronous operation in a task.

    3.

        public void Three()
        {
            WcfClient client = new WcfClient();
            var task = Task<string>.Factory.StartNew(() => client.doSearch("input"));
            string data = task.Result;
        }

    They all give you the ability to execute other code while you wait for the result, but I think Task<T> gives better control over what you execute before or after the result is retrieved. Are there any advantages or disadvantages to using one over the other? Or scenarios where one way of doing it is preferable?

  • Identifying a class which is extending an abstract class

    - by Simon A. Eugster
    Good evening. I'm doing a major refactoring of http://wiki2xhtml.sourceforge.net/ to finally get a better overview and maintainability. (I started the project when I decided to start programming, so – you get it, right? ;))

    At the moment I wonder how to solve the following problem: every file will be put through several parsers (like one for links, one for tables, one for images, etc.):

        public class WikiLinks extends WikiTask { ... }
        public class WikiTables extends WikiTask { ... }

    The files will then be parsed about this way:

        public void parse()
        {
            if (!parse) return;
            WikiTask task = new WikiLinks();
            do {
                task.parse(this);
            } while ((task = task.nextTask()) != null);
        }

    Sometimes I may want to use no parser at all (for files that only need to be copied), or only a chosen one (e.g. for testing purposes). So before running task.parse() I need to check whether this particular parser is actually necessary/desired (perhaps via blacklist or whitelist). What would you suggest for the comparison? An ID for each WikiTask (how would I do that?)? Comparing the task object itself against a new instance of a WikiTask (overhead)?

  • Adding to the DOM via Prototype works in Chrome but not Firefox?

    - by zaczap
    I've been working on some Javascript code to add rows to a table dynamically (a small task management system) and it works perfectly in Chrome but not in Firefox. Code in question:

        var task = new Element('tr', {id:arg});
        task.innerHTML = "<td class='notes'>asd</td><td class='check'>*</td>";
        //task.innerHTML = "<td class='notes'>&nbsp;</td><td class='check'><input type='checkbox' onclick=\"javascript:complete('"+task.id+"')\" /></td><td class='description'>asd</td><td class='start'>&nbsp;</td><td class='due'></td>";
        $('tasks').insert(task);

    (The commented line above is what the code was originally; it does work in Chrome.) When I look at the HTML model in the Firefox debugger, all that is added is:

        <tr id="arg"><td>asd*</td></tr>

    Figuring that Chrome might be better at interpreting innerHTML into DOM elements than Firefox, I changed the code to make td elements and add them to my tr element, but that didn't improve the situation at all.

  • Exclude nodes based on attribute wildcard in XSL node selection

    - by C A
    Using CruiseControl for continuous integration, I have some annoyances with the WebLogic Ant tasks: they treat server debug information as warnings rather than debug output, so it shows up in my build report emails. The XML output from cruise is similar to:

        <cruisecontrol>
          <build>
            <target name="compile-xxx">
              <task name="xxx" />
            </target>
            <target name="xxx.weblogic">
              <task name="wldeploy">
                <message priority="warn">Message which isn't really a warning</message>
              </task>
            </target>
          </build>
        </cruisecontrol>

    In the CruiseControl XSL template, the current selection for the task list is:

        <xsl:variable name="tasklist" select="/cruisecontrol/build//target/task"/>

    What I would like is something which selects the task list in the same way, but doesn't include any target nodes which have the attribute name="*weblogic", where * is a wildcard. I have tried:

        <xsl:variable name="tasklist" select="/cruisecontrol/build//target[@name!='*weblogic']/task"/>

    but this doesn't seem to have worked. I'm not an expert with XSLT and just want to get this fixed so I can carry on with the real development of the project. Any help is much appreciated.

  • Logic in the db for maintaining a points system relationship?

    - by MarcusBooster
    I'm making a little web-based game and need to determine where to put the logic that checks the integrity of some underlying data in the SQL database. Each user keeps track of points assigned to him, and points are awarded by various tasks. I keep a record of each task transaction to make sure they're not repeated, and to keep track of the value of the task at the time of completion, since an individual award level may fluctuate over time. My schema looks like this so far:

        create table player (
            player_ID serial primary key,
            player_Points int not null default 0
        );

        create table task (
            task_ID serial primary key,
            task_PointsAwarded int not null
        );

        create table task_list (
            player_ID int references player(player_ID),
            task_ID int references task(task_ID),
            when_completed timestamp default current_timestamp,
            point_value int not null, --not fk because task value may change later
            constraint pk_player_task_id primary key (player_ID, task_ID)
        );

    So, player.player_Points should be the total of all his cumulative task points in the task_list. Now where do I put the logic to enforce this? Should I do away with player.player_Points altogether and do queries every time I want to know the total score? That seems wasteful, since I'll be doing that query a lot over the course of a game. Or should I put a trigger on the task_list that automatically updates player.player_Points? Is that too much logic to have in the database, and should I just maintain this relationship in the application? Thanks.

  • Write out to text file using T-SQL

    - by sasfrog
    I am creating a basic data transfer task using T-SQL, where I retrieve certain records from one database that are more recent than a given datetime value and load them into another database. This will happen periodically throughout the day. It's such a small task that SSIS seems like overkill; I want to just use a scheduled task which runs a .sql file. Where I need guidance is that I need to persist the datetime from the last run of this task, then use it to filter the records the next time the task runs. My initial thought is to just store the datetime in a text file and update (overwrite) it as part of the task each time it runs. I can read the file in without problems using T-SQL, but writing back out has got me stuck. I've seen plenty of examples which make use of a dynamically built bcp command, which is then executed using xp_cmdshell. Trouble is, security on the server I'm deploying to precludes the use of xp_cmdshell. So, my question is: are there other ways to simply write a datetime value to a file using T-SQL, or should I be thinking about a different approach?

    EDIT: happy to be corrected about SSIS being "overkill"...
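    On the "different approach" front: if the scheduled task can run a small executable instead of a raw .sql file, the file I/O moves out of T-SQL entirely and xp_cmdshell never enters the picture. A minimal sketch of that swapped-in technique (the watermark path is a made-up placeholder, and the transfer query itself is elided):

        using System;
        using System.IO;

        class TransferJob
        {
            const string WatermarkPath = @"C:\jobs\lastrun.txt";   // hypothetical location

            static void Main()
            {
                // Read the datetime persisted by the previous run (or start from the beginning).
                DateTime lastRun = File.Exists(WatermarkPath)
                    ? DateTime.Parse(File.ReadAllText(WatermarkPath))
                    : DateTime.MinValue;

                // ... run the transfer query here, filtering on records newer than lastRun ...

                // Persist the new watermark for the next run.
                File.WriteAllText(WatermarkPath, DateTime.UtcNow.ToString("o"));
            }
        }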

  • Loop through Array with conditional output based on key/value pair

    - by Daniel C
    I have an array with the following columns: Task, Status. I would like to print out a table that shows a list of the tasks, but not the Status column. Instead, for tasks where Status = 0, I want to add a <del> tag to make the completed task crossed out. Here's my current code:

        foreach ($row as $key => $val) {
            if ($key != 'Status')
                print "<td>$val</td>";
            else if ($val == '0')
                print "<td><del>$val</del></td>";
        }

    This seems to be correct, but when I print it out, it prints all the tasks with the <del> tag. So basically the "else" clause is being run every time. Here is the var_dump($row):

        array
          'Task' => string 'Task A' (length=6)
          'Status' => string '3' (length=1)

        array
          'Task' => string 'Task B' (length=6)
          'Status' => string '0' (length=1)
