Search Results

Search found 2157 results on 87 pages for 'sequential workflow'.

Page 60/87 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • Open-source comment engines

    - by Jim Greenleaf
    I'm looking for a solid open-source commenting engine written in PHP. It needs to have workflow/moderation capabilities as well. I've checked into Disqus, and while I like the concept, the site owners may develop their own login system at a later point, which would have to integrate with the comment system. I'm also not sure that they want the comments hosted remotely. Does anyone have any recommendations I might look into? Alternatively, if you have experience with Disqus, what do you like/dislike about it?

    Read the article

  • Converting a PHP associative array to a JSON associative array

    - by Extrakun
    I am converting a look-up table in PHP to JavaScript using json_encode. The PHP structure looks like this:

        AbilitiesLookup Object
        (
            [abilities:private] => Array
            (
                [1] => Ability_MeleeAttack Object
                (
                    [abilityid:protected] =>
                    [range:protected] => 1
                    [name:protected] => MeleeAttack
                    [ability_identifier:protected] => MeleeAttack
                    [aoe_row:protected] => 1
                    [aoe_col:protected] => 1
                    [aoe_shape:protected] =>
                    [cooldown:protected] => 0
                    [focusCost:protected] => 0
                    [possibleFactions:protected] => 2
                    [abilityDesc:protected] => Basic Attack
                )
                .....snipped...

    And in JSON it is:

        {"1":{"name":"MeleeAttack","fof":"2","range":"1","aoe":[null,"1","1"],"fp":"0","image":"dummy.jpg"},....

    The problem is that I get a JS object, not an array, and the identifier is a number. I see two ways around this problem: either find a way to access the JSON using a number (which I do not know how to do), or make json_encode (or some other custom encoding function) produce a JavaScript associative array. (Yes, I am rather lacking in my JavaScript department.) Note: the JSON output doesn't match the array because I manually JSON-encode each element, push it onto an array with the index as the key, and then run json_encode on the result. To be clear, the numbers are not sequential, because it's an associative array (which is why the JSON output is not an array).
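
    For what it's worth, a JavaScript object already behaves like the associative array being asked for here; bracket notation accepts numbers because property keys are coerced to strings. A minimal sketch (assuming a native or shimmed JSON object, and the JSON string shown above):

        // Numeric-looking keys need no conversion to a real array:
        var abilities = JSON.parse(jsonFromPhp);   // jsonFromPhp as above

        alert(abilities[1].name);     // "MeleeAttack" -- 1 is coerced to "1"
        alert(abilities["1"].name);   // the same lookup with an explicit string key

        // Iterating over every id => ability pair:
        for (var id in abilities) {
            if (abilities.hasOwnProperty(id)) {
                // id is the PHP array key, abilities[id] the encoded object
            }
        }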

    Read the article

  • R and SPSS difference

    - by sfactor
    I will be analysing a vast amount of network-traffic-related data shortly, and I will pre-process the data in order to analyse it. I have found that R and SPSS are among the most popular tools for statistical analysis. I will also be generating quite a lot of graphs and charts. So I was wondering: what is the basic difference between these two packages? I am not asking which one is better. I just want to know the differences in workflow between the two, besides the fact that SPSS has a GUI. I will mostly be working with scripts in either case, so I wanted to know about the other differences.

    Read the article

  • C# Process flow - Datastream, XML and datagrid

    - by Farstucker
    I'm looking for some advice/suggestions on how I should set up the workflow of a small application I'm building. When the application is launched, the datagrid will be populated from the XML file. Once running, the application will receive a data stream that I hope to use to update both the file and the datagrid. So I'm curious what you would suggest for the workflow: split the data from the data stream and simultaneously populate the file and grid, or populate the XML file first and set up a timer to have the grid read the file? I'm really looking for optimal performance.
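
    For illustration, a minimal sketch of the first option (update the in-memory table the grid is bound to, then persist to XML), assuming WinForms, a DataGridView, and a hypothetical OnStreamRecord callback:

        using System.Data;
        using System.Windows.Forms;

        public class QuoteForm : Form
        {
            private readonly DataSet ds = new DataSet();
            private readonly DataGridView grid = new DataGridView();

            public QuoteForm()
            {
                ds.ReadXml("data.xml");          // initial load at startup
                grid.DataSource = ds.Tables[0];  // binding keeps the grid in sync
                Controls.Add(grid);
            }

            // Hypothetical handler invoked for each record on the data stream.
            // If the stream arrives on another thread, marshal via Invoke first.
            private void OnStreamRecord(object[] fields)
            {
                ds.Tables[0].Rows.Add(fields);   // grid updates via data binding
                ds.WriteXml("data.xml");         // persist; batch this if the rate is high
            }
        }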

    Read the article

  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in PHP projects. There are a number of Rails-style migration projects available for PHP; http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches.

    The general problem with this kind of system: when development branch A is checked out and you want to check out branch B instead, B may have new migration files. This is fine; migrating to newer content is straightforward. But if branch A has newer migration files, you would need to migrate downwards to the nearest shared patch. If branches A and B have significantly different code bases, you may have to migrate down even further. This may mean: check out B, determine the shared patch number, check out A, migrate downwards to this patch (this must be done from A, since the actually applied patches are not available in B), then check out branch B and migrate to the newest B patch. The process reverses again when going from B to A.

    Proposed system: when migrating upwards, instead of just storing the patch version, serialize the whole patch in the database for later use (though I'd probably only need the down() method). When changing branches, compare the patches that have been run to the patches available in the destination branch. Determine the nearest shared patch (or oldest difference, maybe) between the db table of run patches and the patches in the destination branch, by ID or hash. You could also look for new or missing patches buried under a number of shared patches between the two branches. Automatically migrate down to the nearest shared patch using the stored down() methods, then migrate up to the destination branch's latest patch.

    My question is: is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.
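
    To make the proposal concrete, a minimal sketch of the bookkeeping it would need, assuming MySQL and PDO; all names are illustrative:

        <?php
        // Hypothetical bookkeeping: each applied patch stores its own down()
        // SQL so a different branch can revert it later.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

        $pdo->exec("CREATE TABLE IF NOT EXISTS applied_patches (
            id       INT PRIMARY KEY,   -- timestamp/id from the migration filename
            hash     CHAR(40) NOT NULL, -- sha1 of the patch file, for branch matching
            down_sql TEXT NOT NULL,     -- serialized down() body
            applied  DATETIME NOT NULL
        )");

        function apply_patch(PDO $pdo, $id, $file, $upSql, $downSql) {
            $pdo->exec($upSql);
            $stmt = $pdo->prepare("INSERT INTO applied_patches
                                   (id, hash, down_sql, applied) VALUES (?, ?, ?, NOW())");
            $stmt->execute(array($id, sha1_file($file), $downSql));
        }

        // Revert to the nearest shared patch by replaying stored down() SQL
        // in reverse order.
        function revert_to(PDO $pdo, $sharedId) {
            $rows = $pdo->query("SELECT id, down_sql FROM applied_patches
                                 WHERE id > " . (int)$sharedId . " ORDER BY id DESC");
            foreach ($rows as $row) {
                $pdo->exec($row['down_sql']);
                $pdo->exec("DELETE FROM applied_patches WHERE id = " . (int)$row['id']);
            }
        }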

    Read the article

  • Plugin Framework - can there be too many addin assemblies?

    - by spooner
    Hi, the product I'm working on needs to be built in such a way that we have a quote engine driven by a pluggable framework. We are currently thinking of using MAF, so we can leverage the separation of host and add-in interfaces for versioning. However, I'm concerned that we'd have lots of assemblies: it's likely we'd have one for each quote-engine add-in, of which there could be 100 going forward, and we also need to support multiple versions, so there could be a great many assemblies in total. The quote engine is also driven by WF, which means each add-in's AppDomain will need a workflow runtime associated with it. This seems quite heavyweight, though we can unload infrequently used add-ins. Does this seem like a good design? We've also looked at a single-AppDomain solution using an IoC container to load add-in types, but I'm concerned that we wouldn't be able to unload any of the assemblies, given their quantity.
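
    For reference, a minimal sketch of the per-add-in AppDomain approach — the CLR only unloads assemblies by unloading their whole domain, which is what drives the trade-off above. Type names are illustrative:

        using System;

        public static class AddInHost
        {
            // Load an add-in into its own domain so it can be unloaded later.
            public static AppDomain LoadAddIn(string assemblyName, string typeName,
                                              out IQuoteEngine engine)
            {
                AppDomain domain = AppDomain.CreateDomain("addin-" + assemblyName);
                engine = (IQuoteEngine)domain.CreateInstanceAndUnwrap(
                    assemblyName, typeName); // type must be MarshalByRefObject
                return domain;
            }

            public static void UnloadAddIn(AppDomain domain)
            {
                AppDomain.Unload(domain);    // frees the add-in's assemblies
            }
        }

        // Hypothetical contract shared by host and add-ins.
        public interface IQuoteEngine
        {
            decimal GetQuote(string request);
        }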

    Read the article

  • Restart IIS7 webapp for debug in VS2008

    - by spender
    I'm developing an ASP.NET async generic handler which uses a static System.Threading.Timer instance to feed data to the response output stream while the context is in the asynchronous phase. I'm having issues at startup that I'd like to debug. Currently, because of the long-lived nature of the timer, I'm manually killing w3wp.exe in order to force the application to restart. I'm aware that there are other ways to do this, but this is the best workflow I can find! Isn't there a way to force a cold start when I hit F5, instead of the standard VS approach of attaching to the existing IIS process?
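
    One possibility, offered as a sketch rather than a confirmed answer: recycle the handler's application pool with IIS7's appcmd as a pre-debug step, so each F5 starts against a fresh worker process (the pool name is illustrative):

        rem Recycle the app pool before each debug session (adjust the pool name).
        %windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"DefaultAppPool"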

    Read the article

  • What to name a method

    - by coffeeaddict
    I'm debating what to name this method: CloseCashTransaction(Cash.Id, -1, true); or CompleteCyberCashTransaction(Cash.Id, -1, true); — or is neither good? In business terms, by sending in these three values I'm essentially "closing" or "completing" the transaction in our workflow. On the developer side, however, I can't infer what "Complete" or "Close" means; it forces me to look into the internals of the method. My struggle is that I try to name methods so they convey what they are doing. "Complete" is just way too general and forces the consumer of the method to dive into the code every time I use words like this. When I see names like this all over a code base, I have to take so much time to figure out what the methods actually do. And if the comments are poor, I end up reading all the logic in the method, because neither the comments nor the method name says what's going on.
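
    As an aside, much of the opacity here comes from the unexplained -1 and true rather than the verb alone. A hedged sketch of one way to make the call site self-describing (all names are invented):

        // Before: CloseCashTransaction(Cash.Id, -1, true);  // what do -1 and true mean?

        public enum ReceiptOption { Suppress, Send }

        // Hypothetical rename: the verb says what happens to the transaction,
        // and the enum replaces the unexplained boolean at every call site.
        public void SettleCashTransaction(int transactionId,
                                          int adjustmentAmount,
                                          ReceiptOption receipt)
        {
            // ... close out the transaction in the workflow ...
        }

        // The call site now reads without opening the method:
        // SettleCashTransaction(Cash.Id, -1, ReceiptOption.Send);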

    Read the article

  • Is Oracle AQ/Streams of any use in my situation?

    - by RenderIn
    I'm writing a workflow system that is driven entirely at each step by explicit human interaction. That is, a task is assigned to a person, that person selects from a few limited options {approve, reject, forward}, and then the task is either sent along to the next person or terminated. I'm just curious whether Oracle Streams/AQ has anything to offer over flat tables managed by regular web application code. The amount of processing after each action is fairly limited and the volume is not terribly high, so there's not really a need to throttle things by pushing them into a queue. What are some of the benefits of introducing a queue structure, or is it overkill for my situation?

    Read the article

  • Does it make sense to commit after every save with a DVCS?

    - by blockhead
    I know the question of how often to commit with a DVCS has been asked before. All the answers have one thing in common: as often as possible. But they usually qualify it with something like after finishing a thought, a user story, getting code that compiles, or passing tests. I was thinking: given that a DVCS gives you your own repository with very cheap commits, doesn't it make sense to commit after every change to a file? After all, this is what happens in NetBeans, and you get a nice free "time machine" without even asking for it. If not every change, then at least every save, or every compile. Does this make sense, or do I have the wrong idea about DVCS? My feeling is that this is not the workflow most people have with a DVCS.

    Read the article

  • Microsoft CRM could not log you on to the system. Make sure your user record...

    - by Willy
    "Microsoft CRM could not log you on to the system. Make sure your user record is enabled and that you have been assigned at least one security role. For more information, contact your system administrator." When I RDP into the server and try Microsoft CRM Workflow Manager/Monitor with http or https connectivity, it doesn't work. "The specified Microsoft CRM server is not responding. This might happen if it is currently unavaliable, it is not a Microsoft CRM server, or you are not a valid user. Contact your sys-admin." This is a Microsoft CRM v3.0 / Microsoft SQL server 2005 box, Active directory is on a seperate box.. When I right click Microsoft CRM Worlkflow Service, properties, log on: it shows "crmtestuser" and a password. I did not RDP or try logging in as that crmtestuser, but I am Admin... Could this be a clue? What can I try?

    Read the article

  • Unit Testing Error - The unit test adapter failed to connect to the data source or to read the data

    - by michael.lukatchik
    I'm using VSTS 2008 and I've set up a unit test project. In it, I have a test class with a method that does a simple assertion, using an Excel 2007 spreadsheet as my data source. My test method looks like this:

        [DataSource("System.Data.Odbc", "Dsn=Excel Files;dbq=|DataDirectory|\\MyTestData.xlsx;defaultdir=C:\\TestData;driverid=1046;maxbuffersize=2048;pagetimeout=5", "Sheet1", DataAccessMethod.Sequential)]
        [DeploymentItem("MyTestData.xlsx")]
        [TestMethod()]
        public void State_Value_Is_Set()
        {
            string expected = "MD";
            string actual = TestContext.DataRow["State"] as string;
            Assert.AreEqual(expected, actual);
        }

    As indicated in the attributes, my Excel spreadsheet is on my local C: drive, and the sheet where all of my data is located is named "Sheet1". I've copied the spreadsheet into my project, set its Build Action to "Content", and set Copy to Output Directory to "Copy if Newer". When trying to run this simple unit test, I receive the following error:

        The unit test adapter failed to connect to the data source or to read the data. For more information on troubleshooting this error, see "Troubleshooting Data-Driven Unit Tests" (http://go.microsoft.com/fwlink/?LinkId=62412) in the MSDN Library. Error details: ERROR [42S02] [Microsoft][ODBC Excel Driver] The Microsoft Office Access database engine could not find the object 'Sheet1'. Make sure the object exists and that you spell its name and the path name correctly.

    I've verified that the sheet name is spelled correctly (i.e. Sheet1) and that my data sources are set correctly. Web searches haven't turned up much at all, and I'm totally stumped. All help or input is appreciated!
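
    One thing worth checking, offered as a likely culprit rather than a confirmed fix: the ODBC Excel driver addresses worksheets with a trailing $, so the table name in the attribute may need to be "Sheet1$" rather than "Sheet1":

        // Same attribute, with the worksheet named as the ODBC driver expects.
        [DataSource("System.Data.Odbc",
            "Dsn=Excel Files;dbq=|DataDirectory|\\MyTestData.xlsx;defaultdir=C:\\TestData;" +
            "driverid=1046;maxbuffersize=2048;pagetimeout=5",
            "Sheet1$", DataAccessMethod.Sequential)]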

    Read the article

  • Is it a good idea to cache data from web services into a database?

    - by Thierry Lam
    Let's assume that Stackoverflow offers web services where you can retrieve all the questions asked by a specific user. A request to get all questions from user A might result in the following JSON output:

        [
          { "question": "What is rest?", "date_created": "20/02/2010", "votes": 1 },
          { "question": "Which database to use for ...", "date_created": "20/07/2009", "votes": 5 }
        ]

    If I want to manipulate and present the data in any way I want, would it be wise to dump it into a local database? At some point I will also want to retrieve all the answers for each question and store them locally too. The workflow I'm thinking of is:

    1. User logs in.
    2. Web services retrieve all questions asked by the logged-in user and dump them into a local database.
    3. User wants all answers for a specific question; another web service does the retrieval and dumps them into the local database.
    4. After the user logs out, delete all of that user's questions and answers from the local database.

    Read the article

  • Agile and code release

    - by ring bearer
    Do you know of any agile process designed around code releases? One of the main themes of agile is frequent releases, but each company/client has its own test/approval processes controlling code releases, and most of the time these slow down the pace of "frequent releases". Currently we have a proprietary tool-based workflow: the team that needs a code promotion creates a promotion request to one of the final UAT servers. Once this is complete and the tests are done, certain customers and technical/non-technical managers need to approve, and then it goes into the production deploy stage. Meanwhile there is no sprint planning meeting or anything of that sort. What code release process (that is still agile) has worked for you?

    Read the article

  • Windows Azure worker roles: One big job or many small jobs?

    - by Ryan Elkins
    Is there any inherent advantage to using multiple workers to process pieces of procedural code versus having each worker process the entire load? In other words, if my workflow looks like this:

    1. Get work from queue0 and do A
    2. Store the result from A in queue1
    3. Get the result from queue1 and do B
    4. Store the result from B in queue2
    5. Get the result from queue2 and do C

    is there an inherent advantage to using 3 workers that each run the entire process themselves versus 3 workers that each do a part of the work (worker 1 does steps 1 and 2, worker 2 does steps 3 and 4, worker 3 does step 5)? If we only care about work getting done (finishing step 5), it would seem to scale the same way (once you're using at least 3 workers). Maybe the big job is better because workers in that setup have fewer bottleneck issues?
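
    For concreteness, a minimal sketch of one stage of the pipelined variant, with a hypothetical IWorkQueue abstraction standing in for Azure queue storage:

        public interface IWorkQueue
        {
            string Dequeue();          // returns null when the queue is empty
            void Enqueue(string item);
        }

        // Worker 1 of the pipelined setup: steps 1 and 2 only.
        public class StageAWorker
        {
            private readonly IWorkQueue queue0, queue1;

            public StageAWorker(IWorkQueue q0, IWorkQueue q1)
            {
                queue0 = q0; queue1 = q1;
            }

            public void RunOnce()
            {
                string work = queue0.Dequeue();
                if (work == null) return;      // nothing to do this pass
                string result = DoA(work);     // step A
                queue1.Enqueue(result);        // hand off to the stage-B worker
            }

            private string DoA(string work) { return work + ":A"; }
        }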

    Read the article

  • Is there a java library / package analogous to <stdio.h>?

    - by Roboprog
    I have been doing Java on and off for about 14 years, and almost nothing else for the last 6 years or so. I really hate the java.io package -- its legion of subclasses and adapters. I do like exceptions, rather than having to constantly poll "errno" and the like, but I could surely live without declared exceptions. Is there anything that functions like the Unix/ANSI stdio.h routines in C? I know we will never be rid of java.io and its conventions until Java itself is retired, as they have metastasized throughout the many frameworks that have accreted around Java. That said, I would like something that works kind of like this (let's call the package javax.stdio): a main utility class, perhaps FileStar, that can read and write files (or pipes), either text or binary, either sequentially or with random access, with constructors that mimic fopen() and popen(). This class should have a load of useful methods that do things like fread(), fwrite(), fgets(), fputs(), fseek(), and whatever else (fprintf()?). Methods that are incompatible with the open/construct mode simply throw (just as some of the collections classes/methods do when restricted). Then have a bunch of interfaces that suggest how you intend to use the stream once you have created it: Sequential, RandomAccess, ReadOnly, WriteOnly, Text, Binary, plus the combinations of these that make sense. Perhaps even have methods that return the appropriate type-cast (interface), throwing if you have asked for something incompatible. For extra flavor, skip the declared exceptions -- e.g. javax.stdio.IOException extends RuntimeException. Is there an open-source project like this floating around?
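
    A rough sketch of what such a FileStar wrapper might look like over RandomAccessFile — entirely hypothetical, in the unchecked-exception style the question asks for:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        // Hypothetical stdio-flavored wrapper: fopen-style constructor,
        // fgets/fputs/fseek-style methods, no declared exceptions.
        public class FileStar {
            private final RandomAccessFile file;

            /** mode is "r", "rw", etc., much as fopen takes "r", "w+", ... */
            public FileStar(String path, String mode) {
                try {
                    file = new RandomAccessFile(path, mode);
                } catch (IOException e) {
                    throw new RuntimeException(e);   // unchecked, stdio-style
                }
            }

            public String fgets() {
                try {
                    return file.readLine();          // null at EOF, like fgets
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }

            public void fputs(String s) {
                try {
                    file.writeBytes(s);
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }

            public void fseek(long offset) {
                try {
                    file.seek(offset);
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        }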

    Read the article

  • log4js ConsoleAppender initialization

    - by perrierism
    I'm wondering if anyone happens to have some experience using Log4js? It seems its normal ConsoleAppender isn't always ready to use immediately after it's added to a logger object. If I have two sequential script tags in a document like:

        // Initialize logger
        <script type="text/javascript">
            var logger = new Log4js.getLogger("JSLOG");
            logger.addAppender(new Log4js.ConsoleAppender(logger, false));
            logger.setLevel(Log4js.Level.INFO);
        </script>

        // Use logger
        <script type="text/javascript">
            logger.info('Test test');
        </script>

    it causes the console pop-up window to appear with an error message on page load:

        12:58:23 PM WARN Log4js - Could not run the listener function () { return fn.apply(object, arguments); }. TypeError: this.outputElement is null

    The console is still initialised — it's there afterwards — but for just that first logger call it doesn't seem to be fully ready. If I make the first logger call setTimeout("logger.info('test test')", 1000), there's no error, so it seems the appender isn't ready immediately. Has anyone seen this before, or does anyone know a workaround? Cheers
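
    One workaround worth trying, offered as a sketch rather than a confirmed fix: queue early messages and flush them once the page has loaded, when the appender's output element should exist:

        // Buffer log calls made before page load, then replay them.
        var pending = [];
        function logInfo(msg) {
            pending ? pending.push(msg) : logger.info(msg);
        }

        window.onload = function () {
            for (var i = 0; i < pending.length; i++) {
                logger.info(pending[i]);
            }
            pending = null;   // from now on, logInfo logs directly
        };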

    Read the article

  • How to handle "Remember me" in the Asp.Net Membership Provider

    - by RemotecUk
    I've written a custom membership provider for my ASP.NET website. I'm using the default FormsAuthentication redirect, where you simply pass true to the method to tell it to "remember me" for the current user. I presume this function simply writes a cookie to the local machine containing some login credential of the user. What does ASP.NET put in this cookie? If the format of my usernames were known (e.g. sequential numbering), could someone easily copy this cookie and, by putting it on their own machine, access the site as another user? Additionally, I need to be able to intercept the authentication of the user who has the cookie: since the last time they logged in, their account may have been cancelled, they may need to change their password, and so on. So I need the option to intercept the authentication and, if everything is still OK, allow them to continue, or otherwise redirect them to the proper login page. I would be grateful for guidance on both of these points. I gather for the second I can possibly put something in global.asax to intercept the authentication?
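
    On the second point, a minimal sketch of the global.asax idea; IsAccountStillActive is a hypothetical helper against the membership store:

        // In Global.asax.cs -- runs after the forms-auth module has set the user.
        // (Assumes using System.Web; using System.Web.Security;)
        protected void Application_PostAuthenticateRequest(object sender, EventArgs e)
        {
            if (Context.User != null && Context.User.Identity.IsAuthenticated)
            {
                // Hypothetical check: cancelled account, forced password change, etc.
                if (!IsAccountStillActive(Context.User.Identity.Name))
                {
                    FormsAuthentication.SignOut();                    // invalidate the cookie
                    Response.Redirect(FormsAuthentication.LoginUrl);  // back to the login page
                }
            }
        }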

    Read the article

  • Optimize CUDA with Thrust in a loop

    - by macs
    Given the following piece of code, generating a kind of code dictionary with CUDA using Thrust (a C++ template library for CUDA):

        thrust::device_vector<float> dCodes(codes->begin(), codes->end());
        thrust::device_vector<int> dCounts(counts->begin(), counts->end());
        thrust::device_vector<int> newCounts(counts->size());

        for (int i = 0; i < dCodes.size(); i++) {
            float code = dCodes[i];
            int count = thrust::count(dCodes.begin(), dCodes.end(), code);
            newCounts[i] = dCounts[i] + count;

            // Did we already have a count in one of the previous runs?
            if (dCounts[i] > 0) {
                newCounts[i]--;
            }

            // Remove duplicates of this code further down the vector
            thrust::detail::normal_iterator<thrust::device_ptr<float> > newEnd =
                thrust::remove(dCodes.begin() + i + 1, dCodes.end(), code);
            int dist = thrust::distance(dCodes.begin(), newEnd);
            dCodes.resize(dist);
            newCounts.resize(dist);
        }

        codes->resize(dCodes.size());
        counts->resize(newCounts.size());
        thrust::copy(dCodes.begin(), dCodes.end(), codes->begin());
        thrust::copy(newCounts.begin(), newCounts.end(), counts->begin());

    The problem is that I've noticed multiple 4-byte copies in the CUDA Visual Profiler. IMO these are generated by the loop counter i; by the host-side variables code, count and dist; and by every access to i and the variables noted above. This seems to slow everything down (sequential copying of 4 bytes is no fun...). So, how do I tell Thrust that these variables should be handled on the device? Or are they already? Using thrust::device_ptr does not seem sufficient, because I'm not sure whether the surrounding for loop runs on the host or on the device (which could also be another reason for the slowness).

    Read the article

  • In search of opinions on web based version control systems

    - by tom smith
    Hi. I'm researching various open-source, web-based document management/version control systems. I've checked Google, questions here, etc. I'm looking for a lightweight web-based (Apache) document management/version control app that runs on top of SVN. I need the ability to:

    - have multiple users check files in and out
    - have a workflow (when user A checks the file in and finishes, the app passes it to the next person, etc.)
    - move the files as a group (the files will be changed on a monthly basis)
    - have an access/permission control system: some people can see certain files, and perform certain actions on those files

    I imagine I'm going to have 40-50 people dealing with the different files, and 2000-3000 files that have to be massaged. I'd prefer the app be PHP-based if possible, as opposed to a straight Java app. Thanks

    Read the article

  • Drupal setup for proofreaders - "Revision Moderation"

    - by Olav
    I would like to have external proofreaders work directly inside my Drupal site. Basically, they should be able to create new revisions, annotate, comment, etc. without affecting what users see until I approve the changes; in particular, the node might already be public. The "Revision Moderation" module sounds a bit like what I want, but it seems not to be widely used, and I keep running into other modules like "Workflow". What is important for me:

    - the ability to work on content that is already published
    - ease of use for the proofreader, and an easy way for me to direct her to the right location
    - other useful features such as comments (like those balloons in Word), diffs, etc.

    (I guess I could work around the first point by copying the content.)

    Read the article

  • ASP.Net & 960.gs Integrating the .css files into a new project.

    - by Baxter
    I've got a full layout designed using 960.gs and I want to use it in an ASP.NET website, but I'm getting warnings such as:

        Warning: File 'css/Reset.css' was not found.
        Warning: File 'css/Text.css' was not found.
        Warning: File 'css/960.css' was not found.
        Warning: File 'css/Base.css' was not found.

    I've tried adding a similarly named folder in the Solution Explorer and using 'Add Existing Item' to add the files, but it still causes warnings, and IntelliSense isn't recognising any of the 960.gs classes. Is there a recommended workflow for importing all of this? I should add that it works perfectly well on the local development server and on the webspace; it's seemingly just VS that's complaining.

    Read the article

  • Metatool for automatic xml code generation

    - by iceman
    I want to develop a tool for developers that automatically generates XML code specifying a GUI design and its controls. The aim is to allow non-programmers to specify GUI controls (which in this case perform higher-level tasks, unlike WinForms) from a GUI. The generated XML is essentially an intermediate representation that programmers can understand and feed into any automatic GUI generator. So the workflow is: GUI (non-programmers) -> XML (for programmers) -> GUI (non-programmers). Is there a Microsoft project similar to this?
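
    To make the intermediate representation concrete, a purely hypothetical sketch of what the generated XML might look like — every element and attribute name here is invented:

        <gui-spec version="1.0">
          <window title="Order Entry">
            <control type="lookup-field" id="customer" label="Customer"
                     task="search-customers" />
            <control type="action-button" id="submit" label="Submit"
                     task="create-order" />
          </window>
        </gui-spec>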

    Read the article

  • Are there any tools to help the user to design a State Machine to be consumed by my application?

    - by kolrie
    When reading this question, I remembered something I have been researching for a while now, and I thought Stack Overflow could be of help. I have created a framework that handles applications as state machines. Currently all the state business logic and transitions are handled via Java code. I am looking for a UI implementation that would allow the user to draw the state machines and transitions and generate a file that can later be consumed by my framework to "run" the workflow according to one or more defined state machines. Ideally I would like to use an open standard like SCXML. The goal for the UI would be to have something like the plugin IBM has for Rational Software Architect. Do you know of any editor, plugin, or library that offers something similar, or that would at least serve as a good starting point?
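
    For reference, a minimal SCXML document of the kind such an editor would emit; the states and events here are invented:

        <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="draft">
          <state id="draft">
            <transition event="submit" target="review"/>
          </state>
          <state id="review">
            <transition event="approve" target="published"/>
            <transition event="reject" target="draft"/>
          </state>
          <final id="published"/>
        </scxml>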

    Read the article

  • DB Design Question

    - by hazimdikenli
    I am designing an org chart. The model is almost ready; it is simplified a bit for clarity here:

        OrgUnit (OrgUnitId, Name, ReportsToOrgUnitId, ...)
        OrgUnitJobs (OrgUnitJobId, OrgUnitId, JobName, ReportsToOrgUnitJobId, ..., IsJobGroup)
        Employee (EmployeeId, ...)
        OrgUnitJobEmployee (OrgUnitJobId, EmployeeId, AssignedDate, ...)

    I want to know every OrgUnit's manager employee (each unit should have one). Employees can have more than one job, but one of them has to be the main job, so I know who their manager is, among other things. This is going to support a little workflow behind the scenes, which is why it is not a very simple org chart model. So what would you do: add a property like IsManager to the OrgUnitJobs model, or add ManagerOrgUnitJobId to the OrgUnit model — and why? Likewise, for employees, would you add IsPrimaryJob to the OrgUnitJobEmployee model, or add PrimaryJobId to the Employee model?

    Read the article
