Search Results

Search found 2157 results on 87 pages for 'sequential workflow'.

  • Drupal workflow action access integrated with taxonomy access control?

    - by groovehunter
    Hi, I am building a DMS for our intranet and use a taxonomy hierarchy because we need access control organized that way. Each company location manages (uploads, edits) its own documents but should be able to access everyone else's. Access is inherited down to the child terms and works fine. Additionally we want a simple 3-step workflow (draft, published, archived), so I introduced roles for editor, publisher and docadmin and set permissions for the transitions, plus triggers to effectively (un)publish documents. But (of course) a user with the publisher role can perform the transition for ALL documents, whereas we want a publisher per company location (the top taxonomy level, see above). Can this be achieved? Do I have to build it myself (I guess the Rules module would be appropriate for this) or is there another module that helps? Role inheritance was a guess, but that is only about roles (naturally). I already use Module Grants and checked its first option. That is the direction my thoughts are going; I hope you get my idea and the problem. Drupal 6.16 (current). Edit: I reread the docs and found e.g. http://drupal.org/node/408018 Revisioning for categorized content. Will reread that.

  • Advice on using .Net WorkFlow State Machine. What would you do?

    - by jlafay
    So I've been tasked at work with writing Windows services to replace some old legacy VB6 WinForms apps currently running as services, consistently repeating tasks day to day. For some general background: they have their own state machines built in to handle decision making and do not use threading. A lot of the senior developers here thought it would be worth looking into Workflow Foundation to replace the state machines, rather than writing my own business logic and threading it programmatically. So it's WF vs. the old college try, I suppose. My concern is that there aren't many books on the topic, and since it was implemented in .NET I've heard very little about it being used. I brought this up at work, and another developer mentioned that's because BizTalk never really caught on, and WF was designed for it. So is it broken? Do you think it will be supported long enough not to worry so much? I don't want an ill-functioning process injected into my services, my new babies at work, only to have WF keel over and leave me replacing it with my own code in an emergency; that does not seem like much of a grand scenario to me. Any suggestions or recommendations would be super.

  • Issues in Convergence of Sequential minimal optimization for SVM

    - by Amol Joshi
    I have been working on Support Vector Machines for about 2 months now. I have coded the SVM myself, and for the optimization problem I use Sequential Minimal Optimization (SMO) as published by John Platt. Right now I am in the phase where I grid-search for the optimal C value for my dataset (details of my project application and dataset are here: http://stackoverflow.com/questions/2284059/svm-classification-minimum-number-of-input-sets-for-each-class). I have successfully checked my custom implementation's accuracy for C values ranging from 2^0 to 2^6. But now I am having issues with the convergence of SMO for C > 100. For example, for C = 128 it takes a very long time before SMO actually converges and returns the alpha values, and it already takes about 5 hours for C = 100. That seems huge, because SMO is supposed to be fast, even though the accuracy I get is good. I am stuck, because I cannot test the accuracy for higher values of C. I display the number of alphas changed on every pass of SMO and keep getting 10, 13, 8, ... alphas changing continuously. The KKT conditions assure convergence, so what is the weird thing happening here? Note that my implementation works fine for C <= 100 with good accuracy, though the execution time is long. Please give me input on this issue. Thank you and cheers.
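
    For reference, Platt's paper only verifies the KKT conditions to within a tolerance (he suggests a value on the order of 10^-3) and notes that demanding them exactly makes convergence very slow. In LaTeX, the per-multiplier conditions SMO checks, with tolerance tau, are:

        \alpha_i = 0 \;\Rightarrow\; y_i f(x_i) \ge 1 - \tau
        0 < \alpha_i < C \;\Rightarrow\; \lvert y_i f(x_i) - 1 \rvert \le \tau
        \alpha_i = C \;\Rightarrow\; y_i f(x_i) \le 1 + \tau

    If your check is stricter than that, a large C pushes many multipliers toward the boundary and near-boundary multipliers keep oscillating, which would match the 10, 13, 8, ... counts per pass you are seeing; loosening the tolerance is the first thing worth trying.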

  • Some help needed with setting up the PERFECT workflow for web development with 2-3 guys using subversion

    - by Roeland
    Hey guys! I run a small web development company together with my brother and a friend. After doing extensive research I have decided on Subversion for version control, with hosting at Springloops (springloops.com). Keep in mind there are three of us, each in a separate location. Each time I work on a new project, I create a repository for it - say, in this case, site1. I want to have three versions of the site on the internet:
    - Web development - the server the other developers and I publish to (site1.dev.bythepixel.com)
    - Client preview - the server we update every few days with a good revision for the client to see (site1.bythepixel.com)
    - Live site - the site I publish to when going live (site1.com)
    Each development machine (at each location) runs a local copy of XAMPP with virtual hosts, allowing multiple websites to be worked on; the document root is set to the local working copy of the Subversion repository. This is set up so we can make small tweaks and preview them immediately. When some work has been done, we commit to the site's repository; the dev site is deployed automatically (an option in Springloops), and whenever I feel ready I push to the client site. Now, I have a few concerns with this workflow:
    1. I am currently using CodeIgniter, and in the config file I generally set the root of the site, e.g. http://www.site1.com. So it looks like each time I publish to one of the servers, I will have to modify the config file? Is there any way to designate certain files per server, so that publishing to client preview uploads the config file for the client preview server? (See the sketch below for one approach.)
    2. I don't want the live site, the client preview site and the dev site to share the same MySQL server, for a variety of reasons. Does this again mean I have to adjust the DB server info each time I push to a different site?
    3. Does this workflow make sense?
    If you have any suggestions, please let me know. I plan for this to be the workflow I use for the next few years; I just need to put a system in place that allows for future expansion. Thanks a bunch!!
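
    One common answer to concerns 1 and 2, sketched below for a CodeIgniter-style config file: keep a single config under version control and branch on the hostname the code is running on, so nothing needs editing per publish. The DB hostnames here are illustrative assumptions, not values from the post.

        <?php
        // config.php - choose settings by the host serving this request
        switch ($_SERVER['HTTP_HOST']) {
            case 'site1.dev.bythepixel.com':      // shared dev server
                $config['base_url'] = 'http://site1.dev.bythepixel.com/';
                $db['default']['hostname'] = 'dev-db.example.internal';
                break;
            case 'site1.bythepixel.com':          // client preview
                $config['base_url'] = 'http://site1.bythepixel.com/';
                $db['default']['hostname'] = 'preview-db.example.internal';
                break;
            default:                              // live
                $config['base_url'] = 'http://www.site1.com/';
                $db['default']['hostname'] = 'live-db.example.internal';
        }

    (CodeIgniter actually splits these across config.php and database.php; the same switch works in both. An alternative is to svn:ignore a per-server config file and place it on each server once by hand.)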

  • Data Flow Object Graph and An Execution Engine for it

    - by M Dotnet
    I would like to build a custom workflow engine from scratch. The input to this workflow is a data flow diagram composed of a series of activities connected by lines, where each line represents the data flow. Each activity can export multiple outputs. Activities are complex math functions, but the logic is hidden from the user. My workflow engine's job is to execute the given data flow diagram. Each activity within the diagram is a custom activity, and each can produce different outputs. How do you suggest modeling the data flow diagram object? I need to be able to construct the diagram programmatically (no need for drag and drop), but I need to display the final result graphically (for display and debugging purposes). Are there any libraries out there that I could use? Should I keep the workflow representation as XML? I know there are many projects out there essentially doing a similar thing by building such workflow engines, but I need something lightweight and open source. I do not need a state machine execution engine; mine is primarily a sequential workflow with fork and join capabilities. My activities are wrappable as basic C# classes, and I do NOT want to use anything as heavy as .NET Workflow Foundation.
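
    A minimal C# sketch of one way to model this (all names are illustrative, not from any library): activities as nodes holding a compute delegate, edges as input lists, and execution as a topological walk so every activity runs only after its inputs have produced output.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Activity
        {
            public string Name;
            public List<Activity> Inputs = new List<Activity>();
            public Func<object[], object[]> Compute;  // the hidden math function
            public object[] Outputs;                  // filled in by the engine
        }

        static class Engine
        {
            // Execute the diagram in dependency order (Kahn's algorithm),
            // which handles forks and joins and rejects cyclic diagrams.
            public static void Execute(IList<Activity> graph)
            {
                var pending = graph.ToDictionary(n => n, n => n.Inputs.Count);
                var ready = new Queue<Activity>(graph.Where(n => n.Inputs.Count == 0));
                int executed = 0;

                while (ready.Count > 0)
                {
                    Activity node = ready.Dequeue();
                    object[] args = node.Inputs.SelectMany(i => i.Outputs).ToArray();
                    node.Outputs = node.Compute(args);
                    executed++;

                    foreach (Activity next in graph.Where(n => n.Inputs.Contains(node)))
                        if (--pending[next] == 0)
                            ready.Enqueue(next);
                }

                if (executed != graph.Count)
                    throw new InvalidOperationException("Cycle in data flow diagram");
            }
        }

    Persisting such a graph as XML is then just nodes plus edges by name, and a small exporter to Graphviz DOT would cover the graphical display and debugging view.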

  • Efficient PuTTY workflow / configuration

    - by Adrian Ratnapala
    PuTTY is a fine SSH client, but how do you get a workflow managed as slickly as OpenSSH on Unix? My issues with PuTTY's management are:
    - The PuTTY tools are not in my PATH (easily fixable)
    - PuTTY seems to have no equivalent of ~/.ssh, so I end up having to manually choose locations for my keypairs and then manually tell all the tools where to find them, every time
    - The private key's read permissions seem lax (I might be wrong about this; I'm a klutz on Windows)
    - Pageant doesn't run by default (easily fixable? see the sketch below)
    - Other programs don't reliably find Pageant
    I suspect all of these problems can be fixed if I just set my system up properly and/or organize a nice workflow that fits PuTTY's way of doing things. So can anyone share some success stories about managing PuTTY?
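
    On the Pageant point: pageant.exe loads any .ppk files given on its command line, so a shortcut in the Startup folder with a target along these lines (paths are illustrative) starts the agent with your keys at logon:

        "C:\Program Files\PuTTY\pageant.exe" "C:\Users\adrian\keys\github.ppk" "C:\Users\adrian\keys\server.ppk"

    Keeping all keypairs in one folder that way also stands in for ~/.ssh reasonably well, since Pageant-aware tools (plink, pscp, WinSCP, TortoiseGit) then take keys from the agent instead of being pointed at key files individually.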

  • XSLT Transforming sequential XML to hierarchical XML

    - by Jean-Paul Smit
    I have a requirement to transform a sequential XML node list into a hierarchical one, but I am running into an XSLT-specific knowledge gap. The input XML contains articles, colors and sizes. In the sample below 'Record1' is an article, 'Record2' represents a color and 'Record3' elements are the sizes. The number of colors and sizes (Record2 and Record3 elements) can vary:

        <root>
          <Record1>...</Record1>
          <Record2>...</Record2>
          <Record3>...</Record3>
          <Record3>...</Record3>
          <Record2>...</Record2>
          <Record3>...</Record3>
          <Record3>...</Record3>
          <Record3>...</Record3>
          <Record3>...</Record3>
          <Record1>...</Record1>
          <!-- second article: same pattern of Record2/Record3 children -->
        </root>

    All records are on the same hierarchical level, but I still have to create this structure as output:

        <root>
          <article>              <!-- from Record1 -->
            <color>              <!-- from Record2 -->
              <size>...</size>   <!-- from Record3 -->
              <size>...</size>
            </color>
            <color>
              <size>...</size>
              <size>...</size>
              <size>...</size>
              <size>...</size>
            </color>
          </article>
          <article>
            <!-- and so on for the second article -->
          </article>
        </root>

    I've tried to iterate the nodes sequentially, but then, for example, the 'article' (= Record1) output tag would need to remain open while the 'color' (= Record2) nodes are processed, and likewise 'color' while processing 'size' (= Record3) - XSLT does not allow that. My next idea was to call a template for every article, color and size level, but I don't know how to select, for example, all 'Record3' nodes between the current 'Record2' and the 'Record1' of the next article. I am also limited to XSLT 1.0, because I need this transformation in BizTalk Server, which only supports XSLT 1.0. Can someone push me in the right direction?
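
    A minimal XSLT 1.0 sketch of the usual answer to this (sibling grouping with keys): key each Record2 to its nearest preceding Record1 and each Record3 to its nearest preceding Record2, then nest with generate-id(). Element names match the sample above; this technique needs nothing beyond XSLT 1.0, so it should work under BizTalk.

        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

          <!-- group colors under their article, sizes under their color -->
          <xsl:key name="colors" match="Record2"
                   use="generate-id(preceding-sibling::Record1[1])"/>
          <xsl:key name="sizes" match="Record3"
                   use="generate-id(preceding-sibling::Record2[1])"/>

          <xsl:template match="/root">
            <root>
              <xsl:for-each select="Record1">
                <article>
                  <xsl:copy-of select="node()"/>
                  <xsl:for-each select="key('colors', generate-id())">
                    <color>
                      <xsl:copy-of select="node()"/>
                      <xsl:for-each select="key('sizes', generate-id())">
                        <size><xsl:copy-of select="node()"/></size>
                      </xsl:for-each>
                    </color>
                  </xsl:for-each>
                </article>
              </xsl:for-each>
            </root>
          </xsl:template>
        </xsl:stylesheet>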

  • What git branching models actually work - the final question

    - by UncleCJ
    In our company we have successfully deployed git, and we are currently using a simple trunk/release/hotfixes branching model. However, this has its problems, and there are some key points of confusion in the community which would be awesome to have answered here. Maybe my hopes for an Alexander stroke are too great; quite possibly I'll decompose this question into more manageable issues, but here's my first shot.

    Workflows / branching models - below are the three main descriptions of this I have seen, but they partially contradict each other or don't go far enough to sort out the subsequent issues we've run into (as described below). Thus our team so far defaults to not-so-great solutions. Are you doing something better?
    - gitworkflows(7) Manual Page
    - (nvie) A successful Git branching model
    - (reinh) A Git Workflow for Agile Teams

    Merging vs. rebasing (tangled vs. sequential history) - the opinions on this are as confusing as it gets. Should one pull --rebase, or wait with merging back to the mainline until the task is finished? Personally I lean towards merging, since this preserves a visual illustration of on which base a task was started and finished, and I even prefer merge --no-ff for this purpose. It has other drawbacks, however. Also, many haven't realized the useful property that merging isn't commutative (merging a topic branch into master does not mean merging master into the topic branch).

    I am looking for a natural workflow - sometimes mistakes happen because our procedures don't capture a specific situation with simple rules. For example, a fix needed for earlier releases should of course be based sufficiently downstream to be possible to merge upstream into all branches necessary (is the usage of these terms clear enough?). However, it happens that a fix makes it into master before the developer realizes it should have been placed further downstream, and if that is already pushed (or, even worse, merged, or something based on it) then the option remaining is cherry-picking, with its associated perils. What simple rules like these do you use? Also included in this is the awkwardness of one topic branch necessarily excluding other topic branches (assuming they are branched from a common baseline). Developers don't want to finish a feature to start another one while feeling like the code they just wrote is not there anymore.

    How to avoid creating merge conflicts (due to cherry-pick)? What seems like a sure way to create a merge conflict is to cherry-pick between branches; can they never be merged again? Would applying the same commit in revert in either branch possibly solve this situation? (How does one do that? See the sketch below.) This is one reason I do not dare to push for a largely merge-based workflow.

    How to decompose into topical branches? We realize that it would be awesome to assemble a finished integration from topic branches, but often the work by our developers is not clearly defined (sometimes as simple as "poking around"), and if some code has already gone into a "misc" topic, it cannot be taken out of there again, per the question above?

    How do you work with defining/approving/graduating/releasing your topic branches? Proper procedures like code review and graduating would of course be lovely, but we simply cannot keep things untangled enough to manage this - any suggestions? Integration branches - an illustration, please?

    Vote and comment as much as you'd like; I'll try to keep the issue page clear and informative enough. Thanks!
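
    On reverting a cherry-picked change, a short sketch of the two commands involved (branch names and the abc1234 id are placeholders): git cherry-pick -x records the origin commit in the new commit message, which helps when auditing duplicated commits later, and git revert creates a new commit that undoes an earlier one, which is the usual way to "apply the same commit in revert".

        # copy a fix onto a maintenance branch, recording where it came from
        git checkout release-1.0
        git cherry-pick -x abc1234

        # undo that same change on a branch where it should not have landed
        git checkout master
        git revert abc1234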
    Below is a list of related topics on Stack Overflow I have checked out:
    - What are some good strategies to allow deployed applications to be hotfixable?
    - Workflow description for git usage for in-house development
    - Git workflow for corporate Linux kernel development
    - How do you maintain development code and production code? (thanks for this PDF!)
    - git releases management
    - Git Cherry-pick vs Merge Workflow
    - How to cherry-pick multiple commits
    - How do you merge selective files with git-merge?
    - How to cherry pick a range of commits and merge into another branch
    - ReinH Git Workflow
    - git workflow for making modifications you’ll never push back to origin
    - Cherry-pick a merge
    - Proper Git workflow for combined OS and Private code?
    - Maintaining Project with Git
    - Why cant Git merge file changes with a modified parent/master.
    - Git branching / rebasing good practices
    - When will "git pull --rebase" get me in to trouble?

  • Workflow Foundation: How to create a Receive activity with a custom message in the XAML designer?

    - by Petr Felzmann
    I need a Receive activity that can receive my custom data. I found examples, but they all use coded workflows, like this:

        using System.Activities;
        using System.ServiceModel.Activities;

        public class ProcessRequest : Activity
        {
            public ProcessRequest()
            {
                Variable<string> request = new Variable<string> { Name = "request" };

                Receive receiveRequest = new Receive
                {
                    ServiceContractName = "IProcessRequest",
                    OperationName = "Foo",
                    CanCreateInstance = true,
                    Content = ReceiveContent.Create(new OutArgument<string>(request))
                };
            }
        }

    The main point is the Receive.Content property. It is not clear to me how to do the same in the XAML designer. What do I have to set in the dialog for the Content property - Message or Parameters - and what goes inside those options? Thanks for the light!

  • CRM 2011 XAML Workflows in VS 2012

    - by AlexR
    I'm writing this knowing there's another topic like this to be found here; the answer provided, however, does not make much sense to me, and whatever I tried doesn't seem to work, so I'm seeking clarification. The situation is as follows:
    - New CRM 2011 toolkit solution
    - New project added for a XAML workflow
    - Adding the CRM "Workflow" activity causes an error
    The error shows a red box saying "Could not generate view for Workflow". The exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.IO.IOException: Cannot locate resource 'workflowdesigner.xaml'. I have VS 2012 SP2 with the latest (June) CRM SDK toolkit installed. I did a clean install of the toolkit, so it is not "upgraded", to prevent any old assemblies being referenced. I tried the suggestions from the referenced question, moved the assemblies over and also re-referenced the latest assemblies to make sure everything works fine. I also tried downloading a workflow XAML created in CRM and opening it, but it gives the same error, and so does opening the "example" workflow XAML solution in the SDK.

  • Why does my Workflow Service (4.0) variable go null in a DoWhile Activity?

    - by jlafay
    I have a WF service on which I'm trying to set up receive activities for "Subscribe" and "Unsubscribe". I'm using this WF durable duplex tutorial as a basis, because my service performs callbacks to clients; basically, think of it as a chat service. I can call the two receive activities from the client just fine. What happens is that the callback address of the client is passed in to Subscribe() on the service. The address is stored as a variable in the WF service, and everything looks like it would work as expected. But when a client calls Unsubscribe(), a watch I have set on the address variable during debugging shows it as null. So what gives? Here's the basic layout of my WF service: everything is enveloped in a DoWhile activity. Inside of that is a Pick activity with two branches. The first branch is for subscribing; it has a Receive-SendReply activity that assigns the string passed by the client to the WF address variable. The second branch handles unsubscribing. Its trigger is the Request activity, and the client address is again passed in. From there it goes into a Sequence, starting with an If that checks whether the unsubscribe address equals the address already subscribed. If it does, it sets the address to String.Empty and sends a success message back to the client. Why would a variable scoped to the enveloping DoWhile activity be implicitly reset to null? I'm trying to get this to work so I can implement multiple client subscribers from there and work on triggers that invoke callbacks to multiple clients. EDIT: I set a breakpoint at the DoWhile level, and my variable is null once Unsubscribe() is called. When Subscribe() is invoked, the watch shows a value in the variable all the way through - until I Unsubscribe() with a client. Should I be using a While activity instead?

  • How to read the Web.Config file in a Custom Activity Designer in a WF4 Workflow Service

    - by Preet Sangha
    I have a WF service with a custom activity and a custom designer (WPF). I want to add a validation that checks for the presence of some value in the web.config file. At runtime I can override void CacheMetadata(ActivityMetadata metadata), and there I can happily do the validation using System.Configuration.ConfigurationManager to read the config file. Since I also want to do this at design time, I am looking for a way to do it in the designer.
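
    At design time the activity is loaded inside Visual Studio, so ConfigurationManager's default lookup reads the host process's config rather than the service's web.config. A minimal sketch of one workaround, assuming you can resolve the project directory yourself (projectRoot below is a placeholder for however you obtain it - it is not something the designer API hands you):

        using System.Configuration;
        using System.IO;

        static string ReadWebConfigSetting(string projectRoot, string key)
        {
            // Open the web.config explicitly by path, bypassing the
            // hosting process's own configuration file.
            var map = new ExeConfigurationFileMap
            {
                ExeConfigFilename = Path.Combine(projectRoot, "web.config")
            };
            Configuration config = ConfigurationManager.OpenMappedExeConfiguration(
                map, ConfigurationUserLevel.None);

            KeyValueConfigurationElement setting = config.AppSettings.Settings[key];
            return setting == null ? null : setting.Value;
        }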

  • Which tools do you use to make GTK themes?

    - by tutuca
    I'm trying to make a new GTK theme using the Murrine engine, with Humanity (the default in Ubuntu 9.10) as a template. You can grab the code at http://github.com/tutuca/themes. However, I found the process of creating a new theme with it cumbersome. There is no central starting point. The documentation of both the engine options (gtkrc's and such) and general theming practices (the format of the index.theme files, folders, etc.) is scarce, and how-tos and tutorials are often old or subject to lots of opinionated debate, with results I find confusing (coming from a web development background, at least). So I wanted to ask the fellow GTK themers and artists out there: which tools do you use to create a new theme, and what does your average workflow look like?

  • How do you deal with web designers who are too afraid to read and touch PHP code?

    - by Lacrymology
    I've been hired to build a website and am working with a designer who happens to be the guy who is in contact with the client and hired me, so no, I can't kick him out =). He's too afraid to touch the PHP code, and too much of a newbie in HTML and CSS to give me good enough mockups, so today's work will be going through his new HTML model of a half-programmed page, removing tags and changing classes and such. Is there some kind of tool, or some better workflow, that would make this easier for both of us? Maybe I'm dealing with this the wrong way altogether. I'm new to web development, and I don't know enough HTML/CSS (he supposedly does) to have him just give me a graphic mock-up and do the whole thing myself, so what we're doing is: he gives me a static HTML page that looks the way he wants, and I program around it. Can anyone give me some advice on this?

  • How do I justify upgrading to Windows Server 2008?

    - by thebunk
    We're just about to start a new greenfield project: a highly functional web application using ASP.NET MVC3, SQL Server, etc. We're also going to be using Windows Workflow Foundation for the first time. Our client only wants to use his existing Windows Server 2003 web servers. My main issue (other than the OS being 8 years old) is that we don't have much experience with WF development, but we understand that using AppFabric (Server 2008 only) would improve it. The upgrade is a significant cost to the client, as we need fail-over servers and a UAT environment as well. Am I correct in my understanding, and what arguments can I use to justify the cost of upgrading?

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (I)

    - by ccasares
    Adding attachments to a HumanTask is a feature that has existed in Oracle HWF (Human Workflow) since 10g. However, 11g brought many improvements to this feature, and this entry will try to summarize them. Oracle BPM 11g 11.1.1.5.1 (aka PS4 Feature Pack or PS4FP) introduced two great features:
    - Ability to link attachments at task scope or at process scope. "Task" attachments are only visible within the scope (lifetime) of a task. This means that, initially, any member of the assignment pattern of the Human Task will be able to handle (add, review or remove) attachments; however, once the task is completed, subsequent human tasks will not have access to them. This does not mean those attachments get lost: once the human task is completed, attachments can still be retrieved in order to, e.g., check them in to a content server or inject them into a new, different human task. (Aside note: a "re-initiated" human task will inherit comments and attachments, along with history and, optionally, payload. See here for more info.) "Process" attachments are visible within the scope of the process, meaning that subsequent human tasks in the same process instance will have access to them.
    - Ability to use Oracle WebCenter Content (previously known as "Oracle UCM") as the backend for the attachments instead of the HWF database backend. This adds all content-server document lifecycle capabilities to HWF attachments (versioning, RBAC, metadata management, etc.). As of today, only Oracle WCC is supported; however, Oracle BPM Suite does include a license of Oracle WCC solely for document management within the BPM scope.

    Here are some code samples that leverage the above features.

    Retrieving uploaded attachments (non-UCM)

    Non-UCM attachments (the default kind, which has existed since 10g and is stored as-is in the HWF database backend) can be retrieved after the completion of the Human Task. Firstly, we need to know whether any attachment has effectively been uploaded to the human task; we can find out through an XPath function, by checking the execData/attachment[] structure. Once we are sure one or more attachments were uploaded, we want to get them - by "get" I mean retrieving the attachment name and the payload of the file. (Aside note: Oracle HWF lets you upload two kinds of non-UCM attachments, a desktop document and a Web URL. This example focuses on the desktop document; an uploaded Web URL can be read directly from the execData/attachment[] structure.) The attachment content (payload) is retrieved through the getTaskAttachmentContents() XPath function.

    This example shows how to retrieve as many attachments as were uploaded to the Human Task and write them to the server using the File Adapter service. The sample process excerpt is: a dummy UserTask using the "HumanTask1" Human Task, followed by an embedded subprocess that retrieves the attachments (we're assuming at least one attachment is uploaded) and then writes each of them back to a file on the server using a File Adapter service.

    In detail: we've defined an XSD structure that holds the attachments (both name and payload), created a BusinessObject based on its element (attachmentCollection), and created a variable (named attachmentsBPM) of that BusinessObject type. We also need to keep a copy of the HumanTask output's execData structure, so we create a variable of type TaskExecutionData and copy the HumanTask output execData to it.

    Now we get into the embedded subprocess that retrieves the attachments' payload. First, using an XSLT transformation, we feed the attachmentsBPM variable with the name of each attachment while setting an empty value for each payload. Note that the XSLT for-each node creates as many target structures as necessary, and that setting empty text for the payload makes sure the <payload></payload> tag gets created - this is needed when we map the payload into the XML variable later. (Aside note: we are assuming non-UCM attachments. In real life you might want to check the type of attachment you're handling: execData/attachment[]/storageType contains "UCM" for UCM attachments, "TASK" for non-UCM ones or "URL" for Web URLs; those values are part of the Ext.Com.Oracle.Xmlns.Bpel.Workflow.Task.StorageTypeEnum enumeration.)

    Once the attachmentsBPM structure contains the name of each attachment, it is time to iterate through it and get the payloads. For that we use a new embedded subprocess of type MultiInstance that iterates over the attachmentsBPM/attachment[] element. On every iteration a Script activity maps the corresponding payload element to the result of the getTaskAttachmentContents() XPath function. Note how the target array element is indexed with the loopCounter predefined variable, so that the right element is fed on each pass of the iteration:

        hwf:getTaskAttachmentContents(
            bpmn:getDataObject('UserTask1LocalExecData')/ns1:systemAttributes/ns1:taskId,
            bpmn:getDataObject('attachmentsBPM')/ns:attachment[
                bpmn:getActivityInstanceAttribute('SUBPROCESS3067107484296', 'loopCounter')
            ]/ns:fileName)

    The input parameters are: the taskId of the just-completed Human Task, the name of the attachment whose payload we're retrieving, and the array index (the loopCounter predefined variable). (Aside note: the reason we iterate the execData/attachment[] structure with an embedded subprocess and not, e.g., XSLT for-each nodes is mostly that the getTaskAttachmentContents() XPath function is currently not available in XSLT mappings - so all this can be considered a workaround until that gets fixed or enhanced in future releases.)

    Once this embedded subprocess ends, we have all attachments (name + payload) in the attachmentsBPM variable, which was the main goal of this sample. But to test that everything runs fine, we finish by writing each attachment to a file: a final embedded subprocess concurrently iterates through each attachmentsBPM/attachment[] element, and on each iteration a Service activity invokes a File Adapter write service. Here there are two important parameters to set. First, the payload itself: the File Adapter expects binary data as a base64 string, and we have to map it using XPath (Simple mapping doesn't accept a String as a valid base64-binary target). Second, the target filename must be set in the Service Properties dialog box - again making use of the loopCounter index variable to pick the right element within the iteration.

    Handling UCM attachments will be covered in a different, upcoming blog entry. Once I finish with all posts on this matter, I will upload the whole sample project to java.net.

  • SQL Server database with clustered GUID PKs - switch clustered index or switch to sequential (comb) GUIDs?

    - by Eyvind
    We have a database in which all the PKs are GUIDs, and most of the PKs are also the clustered index of their table. We know that this is bad (due to the random nature of GUIDs). So it seems there are basically two options here (short of throwing out GUIDs as PKs altogether, which we cannot do, at least not at this time): we could change the GUID generation algorithm to e.g. the one that NHibernate uses, as detailed in this post, or, for the tables under the heaviest use, we could change to a different clustered index, e.g. an IDENTITY column, and keep the "random" GUIDs as PKs. Is it possible to give any general recommendations for such a scenario? The application in question has 500+ tables, the largest presently at about 1.5 million rows, a few tables around 500,000 rows, and the rest significantly smaller (most of them well below 10K). Furthermore, the application is already installed at several customer sites, so we have to take any possible negative effects on existing customers into consideration. Thanks!
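
    For reference, a minimal sketch of a COMB-style generator in the spirit of the NHibernate guid.comb strategy mentioned above (an illustration of the technique, not NHibernate's actual code): the last six bytes of a random GUID are overwritten with a timestamp, so new values sort roughly in creation order and a clustered index stays append-mostly.

        using System;

        static class CombGuid
        {
            // 2 bytes: days since 1900-01-01; 4 bytes: 1/300s since midnight
            // (matching SQL Server datetime precision).
            public static Guid NewComb()
            {
                byte[] guidArray = Guid.NewGuid().ToByteArray();

                DateTime baseDate = new DateTime(1900, 1, 1);
                DateTime now = DateTime.UtcNow;
                TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
                TimeSpan msecs = now.TimeOfDay;

                byte[] daysArray = BitConverter.GetBytes(days.Days);
                byte[] msecsArray = BitConverter.GetBytes(
                    (long)(msecs.TotalMilliseconds / 3.333333));

                // SQL Server compares the last byte group of a uniqueidentifier
                // first, so the timestamp goes at the end, most significant first.
                Array.Reverse(daysArray);
                Array.Reverse(msecsArray);

                Array.Copy(daysArray, daysArray.Length - 2,
                           guidArray, guidArray.Length - 6, 2);
                Array.Copy(msecsArray, msecsArray.Length - 4,
                           guidArray, guidArray.Length - 4, 4);

                return new Guid(guidArray);
            }
        }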

  • Good Async pattern for sequential WebClient requests

    - by Omar Shahine
    Most of the code I've written in .NET to make REST calls has been synchronous. Since Silverlight on Windows Phone only supports async WebClient and HttpWebRequest calls, I was wondering what a good async pattern is for a class that exposes methods making REST calls. For example, I have an app that needs to do the following:
    1. Log in and get a token
    2. Using the token from #1, get a list of albums
    3. Using the token from #1, get a list of categories
    etc. My class exposes a few methods: Login(), GetAlbums(), GetCategories(). Since each method needs to call WebClient asynchronously, what I essentially need is for the call to Login to "block" until it returns, so that I can call GetAlbums(). What is a good way to go about this in a class that exposes those methods?
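
    A common pattern from that Silverlight/WP7 era is continuation callbacks: each method takes an Action that fires when its WebClient call completes, so the sequential ordering is expressed by nesting. A minimal sketch (the URIs and the assumption that the response body is the raw token are illustrative):

        using System;
        using System.Net;

        public class RestClient
        {
            public void Login(Action<string> onToken)
            {
                var wc = new WebClient();
                // check e.Error in real code before touching e.Result
                wc.DownloadStringCompleted += (s, e) => onToken(e.Result);
                wc.DownloadStringAsync(new Uri("https://example.com/api/login"));
            }

            public void GetAlbums(string token, Action<string> onAlbums)
            {
                var wc = new WebClient();
                wc.DownloadStringCompleted += (s, e) => onAlbums(e.Result);
                wc.DownloadStringAsync(
                    new Uri("https://example.com/api/albums?token=" + token));
            }
        }

        // Usage - each call starts only after the previous one has returned:
        //   var client = new RestClient();
        //   client.Login(token =>
        //       client.GetAlbums(token, albums => { /* bind to UI */ }));

    The nesting gets deep past three or four steps, at which point a small queue of pending actions that each trigger the next is the usual refinement.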

  • C# Nested Property Accessing overloading OR Sequential Operator Overloading

    - by Tim
    Hey, I've been searching around for a solution to a tricky problem we're having with our code base. To start, our code resembles the following:

        class User
        {
            int id;
            int accountId;
            Account account { get { return Account.Get(accountId); } }
        }

        class Account
        {
            int accountId;
            OnlinePresence Presence { get { return OnlinePresence.Get(accountId); } }

            public static Account Get(int accountId)
            {
                // hits a database and gets back our object
            }
        }

        class OnlinePresence
        {
            int accountId;
            bool isOnline;

            public static OnlinePresence Get(int accountId)
            {
                // hits a database and gets back our object
            }
        }

    What we often do in our code is access a user's presence with var presence = user.Account.Presence; The problem is that this makes two requests to the database: one to get the Account object, and one to get the Presence object. We could easily knock this down to one request by doing var presence = UserPresence.Get(user.id); This works, but requires developers to know about the UserPresence class and methods, which it would be nice to eliminate. I've thought of a couple of ways this might be handled and was wondering if anyone knows whether these are possible, whether there are other ways of handling it, or whether we just need to think more as we're coding and use UserPresence.Get instead of the properties:
    1. Overload nested property access. It would be cool if inside the User class I could write some sort of "extension" that says "any time a User object's Account property's Presence property is accessed, do this instead".
    2. Overload the . operator with knowledge of what comes after. If I could somehow overload the . operator only in situations where the object on the right is also being "dotted", it would be great.
    Both of these seem like things that could be handled at compile time, but perhaps I'm missing something (would reflection make this difficult?). Am I looking at things completely incorrectly? Is there a way of enforcing this that removes the burden from the user of the business logic? Thanks! Tim
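
    Neither overload exists in C#; the language offers no way to intercept chained member access. A sketch of the usual workaround: since User already holds the accountId, expose the combined lookup as a property on User itself, so callers keep natural property syntax while only one query runs. (The memoizing field is an added illustration, not from the original code.)

        class User
        {
            int id;
            int accountId;

            OnlinePresence presence;  // memoized so repeated reads don't re-query

            public OnlinePresence Presence
            {
                get
                {
                    if (presence == null)
                        presence = OnlinePresence.Get(accountId);  // one round trip
                    return presence;
                }
            }
        }

    Callers then write user.Presence and the Account fetch disappears entirely; the trade-off is that the memoized value can go stale, which matters for something as volatile as online presence.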

  • Java exception handling in non sequential tasks (pattern/good practice)

    - by Hernán Eche
    There are some tasks that shouldn't be done in parallel (for example opening a file, reading, writing, and closing - there is an order to that). But some tasks are more like a shopping list: they may have a desirable order, but it is not a must, for example in communication, loading independent drivers, etc. For that kind of task I would like to know a Java best practice or pattern for managing exceptions. The simple Java way is:

        void getUFO() {
            try {
                loadSoundDriver();
                loadUsbDriver();
                loadAlienDetectorDriver();
                loadKeyboardDriver();
            } catch (LoadSoundDriverFailedException e) {
                doSomethingA();
            } catch (LoadUsbDriverFailedException e) {
                doSomethingB();
            } catch (LoadAlienDetectorDriverFailedException e) {
                doSomethingC();
            } catch (LoadKeyboardDriverFailedException e) {
                doSomethingD();
            }
        }

    But what about having an exception in one of the actions yet still wanting to try the next ones? I've thought of this approach, but it doesn't seem like a good use of exceptions - I don't even know if it works, and it's really awful:

        void getUFO() {
            Exception ex = null;
            try {
                try { loadSoundDriver(); } catch (Exception e) { ex = e; }
                try { loadUsbDriver(); } catch (Exception e) { ex = e; }
                try { loadAlienDetectorDriver(); } catch (Exception e) { ex = e; }
                try { loadKeyboardDriver(); } catch (Exception e) { ex = e; }
                // close the file;
                if (ex != null) {
                    throw ex;
                }
            } catch (LoadSoundDriverFailedException e) {
                doSomethingA();
            } // ... and so on for the other drivers, as above
        }

    It seems it shouldn't be complicated to find a better practice for doing this, but I still haven't. Thanks for any advice.
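
    One sketch of a cleaner shape for the "shopping list" case (all names invented for illustration): drive the independent steps from a list and collect the failures instead of aborting on the first one, then react to them in one place.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        interface Step {
            void run() throws Exception;
        }

        public class DriverLoader {
            void loadAll() {
                List<Step> steps = Arrays.asList(
                        this::loadSoundDriver,
                        this::loadUsbDriver,
                        this::loadAlienDetectorDriver,
                        this::loadKeyboardDriver);

                List<Exception> failures = new ArrayList<>();
                for (Step step : steps) {
                    try {
                        step.run();        // try every step...
                    } catch (Exception e) {
                        failures.add(e);   // ...but remember what failed
                    }
                }
                for (Exception e : failures) {
                    handleFailure(e);      // log, fall back, retry, etc.
                }
            }

            void loadSoundDriver() throws Exception { /* ... */ }
            void loadUsbDriver() throws Exception { /* ... */ }
            void loadAlienDetectorDriver() throws Exception { /* ... */ }
            void loadKeyboardDriver() throws Exception { /* ... */ }
            void handleFailure(Exception e) { /* ... */ }
        }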

  • Python unit-testing with nose: Making sequential tests

    - by cool-RR
    I am just learning how to do unit testing. I'm on Python / nose / Wing IDE. (The project that I'm writing tests for is a simulation framework; among other things it lets you run simulations both synchronously and asynchronously, and the results should be the same in both.) The thing is, I want some of my tests to use simulation results that were created in other tests. For example, synchronous_test calculates a certain simulation in synchronous mode, but then I want to calculate it in asynchronous mode and check that the results come out the same. How do I structure this? Do I put them all in one test function, or make a separate asynchronous_test? Do I pass these objects from one test function to another? Also, keep in mind that all these tests will run through a test generator, so I can run the tests for each of the simulation packages included with my program.
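
    Order dependencies between test functions tend to be fragile; a shape nose supports directly is a class-level fixture that computes the expensive results once and lets each test assert one property. A minimal sketch (run_simulation stands in for whatever the framework's real entry point is):

        class TestSyncAsyncParity(object):
            @classmethod
            def setup_class(cls):
                # runs once for the whole class, not once per test
                cls.sync_result = run_simulation(mode='sync')    # assumed helper
                cls.async_result = run_simulation(mode='async')

            def test_sync_completed(self):
                assert self.sync_result is not None

            def test_async_matches_sync(self):
                assert self.async_result == self.sync_result

    This keeps the tests independent of each other's execution order while still paying the simulation cost only once per class.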

  • Custom Display Form with Custom Workflow button

    - by Ifi
    I have created a new custom list form that shows four fields from a custom list called "Shipment": "Manifest Number", "Pickup Location", "Delivery Location" and "Scheduled Pickup Time". I have added a Form Action button that I would like to run a custom action defined in a workflow. When the user clicks the button, I want the workflow to go to the ID of the item displayed in the form and change the value of its "Picked Up" column from No to Yes. The problem I am having is passing the ID of the displayed item from the form to the workflow as a variable. I can get the "Picked Up" column to update if I hard-code the value in the "Update List Item" window under the "Find the List Item" section, but I cannot figure out how to do this dynamically from the form.

  • Hooking the http/https protocol in IE causes GET requests to be sequential

    - by watsonmw
    I'm using the PassthruAPP method to hook into HTTP/HTTPS requests made by IE. It's working well for the most part; however, I noticed a problem: only one download thread is active at a time. I can see two IInternetProtocol objects getting created, but IE uses only one at a time. This is happening with IE7. The odd thing is that the problem occurs when overriding the existing default HTTP/HTTPS handler, even if the handler is not the one being used to make the request. E.g. registering a handler for the HTTPS protocol will cause HTTP requests to be made sequentially, even though HTTP requests are not hooked. I installed Google Gears, and it has the same problem. This always happens for the first few items on the page, but it seems that after the document-complete event is issued, concurrent downloads can occur again; for example, JavaScript code executed after the page has finished loading can load images concurrently just fine. One option is to try to IAT-patch the IInternetProtocol registered for HTTP requests, but Google Gears does this already and has the same problem. I know installing an HTTP proxy is another option, but I don't want to monkey with the users' HTTP proxy settings if there is another option.
