Search Results

Search found 1678 results on 68 pages for 'nintex workflow'.

Page 57/68 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • How to get around the jslint error 'Don't make functions within a loop.'

    - by Ernelli
    I am working on making all of our JS code pass through jslint, sometimes with a lot of tweaking of the options so that legacy code passes for now, with the intention of fixing it properly later. There is one thing jslint complains about for which I do not have a workaround. When using constructs like this, we get the error 'Don't make functions within a loop.':

        for (prop in newObject) {
            // Check if we're overwriting an existing function
            if (typeof newObject[prop] === "function" &&
                    typeof _super[prop] === "function" &&
                    fnTest.test(newObject[prop])) {
                prototype[prop] = (function (name, func) {
                    return function () {
                        var result, old_super;
                        old_super = this._super;
                        this._super = _super[name];
                        result = func.apply(this, arguments);
                        this._super = old_super;
                        return result;
                    };
                })(prop, newObject[prop]);
            }
        }

    This loop is part of a JS implementation of classical inheritance, where classes that extend existing classes retain the _super property of the extended class when invoking a member of the extended class. Just to clarify, the implementation above is inspired by this blog post by John Resig. But we also have other instances of functions created within a loop. The only workaround so far is to exclude these JS files from jslint, but we would like to use jslint for code validation and syntax checking as part of our continuous integration and build workflow. Is there a better way to implement functionality like this, or is there a way to tweak code like this through jslint?
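    A common way to silence this particular warning, sketched below, is to hoist the wrapper factory out of the loop so that no function literal is created per iteration. The helper name makeWrapper is invented here; the other names follow the snippet above.

        // Hoisted factory: defined once, only called (not created) inside the loop.
        function makeWrapper(name, func, _super) {
            return function () {
                var result, old_super;
                old_super = this._super;        // save any existing _super
                this._super = _super[name];     // expose the parent implementation
                result = func.apply(this, arguments);
                this._super = old_super;        // restore the previous value
                return result;
            };
        }

        for (prop in newObject) {
            if (typeof newObject[prop] === "function" &&
                    typeof _super[prop] === "function" &&
                    fnTest.test(newObject[prop])) {
                prototype[prop] = makeWrapper(prop, newObject[prop], _super);
            }
        }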

    Read the article

  • Issue with Facebook JS API, revokeAuthorization

    - by BBonifield
    I am trying to integrate FB Connect into our user profile screen, but I'm having an issue with FB.ApiClient.revokeAuthorization. http://pastie.org/921942 The basic problem is that I revoke the auth at line 44, after the user clicks the disconnect button. After that, all subsequent API calls don't have a valid session to even check user status. I've tried wrapping blocks in a FB.Connect.forceSessionRefresh block, but then the code is never called at all. I'm not sure what the proper workflow should be for this purpose. Right now it's basically:

        1. User arrives at the profile NOT connected to the application.
        2. User clicks on the connect button.
        3. Once connected, DOM manipulation occurs to hide the connect button and add a disconnect button.
        4. User clicks on the disconnect button.
        5. User's authorization to the application is revoked and (it seems) the API session to the FB server is invalidated.
        6. DOM manipulation occurs to hide the disconnect button and re-add the connect button.
        7. User clicks on the connect button.
        8. Once connected, FB.Connect.get_loggedInUser() doesn't return the actual user.

    Read the article

  • PubSubHubBub Hubs

    - by PartlyCloudy
    Hi, I'm currently building a live web application based upon the PubSubHubBub protocol. However, I've encountered several issues. First, I'm in search of a hub application that I can run on my own server. There are several applications, but most of them are not mature yet, or they don't support the 0.3 spec. The official Google hub runs on the Google App Engine and can even be executed locally. Unfortunately, "Tasks will not run automatically. Push the 'Run' button to execute each task." This behaviour is useful for debugging and understanding the workflow, but in some live tests it would be nice not to have to invoke all tasks manually. Is there a way to tweak the local App Engine to run tasks automatically? Next, I have a question concerning the spec itself. The Google reference implementation provides the initial publish method bound to the endpoint URI + /publish, but this is not reflected in the spec. So, are there any mature hubs that can be run locally for debugging? Or are there ways to configure the official Google App Engine hub to run locally and execute tasks directly? Thanks in advance
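    For what it's worth, the publish side of the 0.3 spec is just a form-encoded POST of hub.mode=publish plus the updated topic URL to the hub's publish endpoint. A minimal sketch in Python, with placeholder hub and feed URLs:

        # Notify a PubSubHubBub hub that a topic (feed) has new content.
        # Per the 0.3 spec the hub should answer 204 No Content on success.
        import urllib.parse
        import urllib.request

        data = urllib.parse.urlencode({
            "hub.mode": "publish",
            "hub.url": "http://example.com/feed.xml",   # the updated topic
        }).encode("ascii")

        with urllib.request.urlopen("http://hub.example.com/publish", data) as resp:
            print(resp.status)   # expect 204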

    Read the article

  • Documentation and Build system for Mono/C#

    - by dcolish
    I'm starting out on a new project, and a team member has decided to use C# as the implementation language. I don't have a lot of experience with C#, but a brief reading shows that it's quite capable as a complete cross-platform VM. Beyond the language, I've been having trouble selecting tools and workflows for managing the code as the project grows. It should be fairly small (<10K lines), but I would like the ability to generate documentation as the project grows, manage any external dependencies we decide to use, and automate builds and testing. I am wondering what tools are commonly used or considered best practice for this language. I am mainly concerned with how a build system would work on *nix as well as Windows. Are there C#-specific tools, or is Make more common? In addition, I'd like to use a DVCS, but it doesn't look like Visual Studio and MonoDevelop support the same ones. What's the common VCS of choice for C#? For testing, what sort of unit testing is available for C#/Mono? Finally, I know that there are good doc generators, but given the build-system question, I would really like that to be a single step in the build, similar to how testing is a step. Normally I'd automate with Hudson, but I am wondering if there is something more specific to the platform. Overall, I'd love to see a solution that provides a decent workflow on both Windows and *nix without a heavy admin burden. I am pretty sure this is the holy grail of project management, so anything that puts me on that path is awesome.
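    As a concrete reference point, a minimal cross-platform sketch using Mono's stock tooling; NUnit and mdoc are common choices here, and all paths are placeholders:

        # Build the solution with Mono's MSBuild-compatible tool (msbuild.exe on Windows).
        xbuild MySolution.sln /p:Configuration=Release

        # Run the unit tests with NUnit's console runner.
        nunit-console Tests/bin/Release/Tests.dll

        # Regenerate API documentation stubs from the compiled assembly.
        mdoc update -o docs/ Src/bin/Release/MyLib.dll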

    Read the article

  • Linking AS code to symbols defined in an external SWC?

    - by Ender
    (apologies ahead of time, I only really know Flash; my Flex experience is basically nil. There may be a very standard and obvious workflow solution that Flex people know about.) I have a number of UI elements that are graphically quite complex (they're not components, they're just Sprites). Since it takes a long time to compile them, I've been trying to move them into an external .swc. However, I want to associate some code with these classes, but I don't want to have to recompile the graphical assets every time I make a code change. At the moment I have it set up like this: UI elements are created in a separate FLA and exported to a SWC. In my primary FLA, I have ActionScript classes that extend each of the graphical assets in the SWC. For example:

        external.swc:
            (some symbol defined in the Library and exported for ActionScript in frame 1)
            class: com.foo.WidgetGraphic
            base:  flash.display.Sprite

        main.fla:
            Widget.as:
                package com.foo {
                    public class Widget extends WidgetGraphic {
                        ...
                    }
                }

    This works, but is time-consuming and prone to error. I'd rather be able to avoid having to inherit from each graphical asset, and just define them directly. Is there a better way to do what I'm trying to accomplish? Note: the main concern here is compile time. I don't have any movies or audio or fonts, just a lot of vector art assets that appear to be slowing down my compilation time significantly. When I'm debugging I'm only making code changes, and would rather not have to keep recompiling the art...

    Read the article

  • Authenticating GTK app to run with root permissions

    - by Thomas Tempelmann
    I have a UI app (uses GTK) for Linux that needs to run as root (it reads and writes /dev/disk*). Instead of requiring the user to open a root shell or use "sudo" manually every time he launches my app, I wonder if the app can use some OS-provided API to ask the user to relaunch the app with root permissions. (Note: GTK apps can't use the "setuid" mode, so that's not an option here.) The advantage would be an easier workflow: the user could, from his default user account, double-click my app on the desktop, and the app would then relaunch itself with root permissions after being authenticated via the API/OS. I ask this because OS X offers exactly this: an app can ask the OS to launch an executable with root permissions - the OS (and not the app) then asks the user to input his credentials, verifies them, and then launches the target as desired. I wonder if there's something similar for Linux (Ubuntu, e.g.). Update: The app is a remote-operated disk repair tool for the unsavvy Linux user, and those Linux noobs won't have much understanding of using sudo or changing their user's group memberships, especially if their disk just started acting up and they're freaking out. That's why I seek a solution that avoids technicalities like this.
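    On Ubuntu-era systems the usual candidates are gksudo and PolicyKit's pkexec, which do exactly the OS-prompts-for-credentials dance described above (pkexec additionally needs a policy entry before GUI apps may use the X display). A minimal self-relaunch sketch, assuming one of the two is installed:

        #!/bin/sh
        # Re-exec this program as root; the OS, not the app, prompts for credentials.
        if [ "$(id -u)" -ne 0 ]; then
            exec pkexec "$0" "$@"        # or: exec gksudo -- "$0" "$@"
        fi
        # ...from here on we are running with root privileges...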

    Read the article

  • Which SCM/VCS copes well with moving text between files?

    - by pfctdayelise
    Our VCS is wreaking havoc with our project at work, because it does some awful merging when we move information across files. The scenario is thus: you have lots of files that, say, contain information about terms from a dictionary, so you have a file for each letter of the alphabet. Users entering terms blindly follow the dictionary order, so they will put an entry like "kick the bucket" under B if that is where the dictionary happened to list it (or it might have been listed under both B, bucket, and K, kick). Later, other users move the terms to their correct files. Lots of work is being done on the dictionary terms all the time. E.g. user A may have taken the B file and elaborated on the "kick the bucket" entry, while user B took the B and K files and moved the "kick the bucket" entry to the K file. Whichever order they end up getting committed in, the VCS will probably lose entries and not "figure out" that an entry has been moved. (These entries are later automatically converted to an SQL database, but they are kept in a "human friendly" form for working on them, with lots of comments, examples, etc. So it is not acceptable to say "make your users enter SQL directly".) It is so bad that we have taken to almost manually merging these kinds of files now, because we can't trust our VCS. :( So what is the solution? I would love to hear that there is a VCS that can cope with this. Or a better merge algorithm? Or otherwise, maybe someone can suggest a better workflow or file arrangement to try and avoid this problem?
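    For comparison, git does not record moves at all but can detect moved or copied text at inspection time, which at least helps when auditing what a merge did; file names below are illustrative:

        # Follow lines that were moved or copied in from other files
        # (-C may be given up to three times to widen the search).
        git blame -C -C -C K-file

        # Follow a single file's history across renames, with patches.
        git log --follow -p K-file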

    Read the article

  • Saving NSBitmapImageRep as NSBMPFileType file. Wrong BMP headers and bitmap content

    - by niko34
    I save an NSBitmapImageRep to a BMP file (Snow Leopard). It seems OK when I open it on Mac OS, but it produces an error on my multimedia device (which can show any BMP file from the internet). I cannot figure out what is wrong, but when I look inside the file (with the cool Hex Fiend app on Mac OS), a few things stand out:

        - The header has a wrong value for the biHeight parameter: 4294966216 (bytes C8 FB FF FF, i.e. -1080 read as a signed 32-bit value, which in the BMP format marks a top-down bitmap).
        - The header has a correct biWidth parameter: 1920.
        - The first pixel in the bitmap content (after the 54-byte header in the BMP format) corresponds to the upper-left corner of the original image. In the original BMP file, and as specified for a bottom-up BMP, it should be the lower-left corner pixel first.

    To explain the full workflow in my app: I have an NSImageView onto which I can drag a BMP image. This view is bound to an NSImage. After a drag & drop I have an action to save this image (with some text drawn over it) to a BMP file. Here's the code for saving the new BMP file:

        CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        CGContextRef context = CGBitmapContextCreate(NULL, (int)1920, (int)1080, 8,
                                                     4 * (int)1920, colorSpace,
                                                     kCGImageAlphaNoneSkipLast);
        [duneScreenView drawBackgroundWithDuneFolder:self
                                           inContext:context
                                              inRect:NSMakeRect(0, 0, 1920, 1080)
                                           needScale:NO];
        if (folderType == DXFolderTypeMovie) {
            [duneScreenView drawSynopsisContentWithDuneFolder:self
                                                    inContext:context
                                                       inRect:NSMakeRect(0, 0, 1920, 1080)
                                                    withScale:1.0];
        }
        CGImageRef backgroundImageRef = CGBitmapContextCreateImage(context);
        NSBitmapImageRep *bitmapBackgroundImageRep =
            [[NSBitmapImageRep alloc] initWithCGImage:backgroundImageRef];
        NSData *data = [bitmapBackgroundImageRep representationUsingType:NSBMPFileType
                                                              properties:nil];
        [data writeToFile:[NSString stringWithFormat:@"%@/%@", folderPath, backgroundPath]
               atomically:YES];

    The drawSynopsisContentWithDuneFolder: method uses CGContextDrawImage to draw the image, and the drawSynopsis method uses CoreText to draw some text in the same context. Do you know what's wrong?

    Read the article

  • Writing robust and "modern" Fortran code

    - by Blklight
    In some scientific environments you often cannot go without FORTRAN, as most of the developers only know that idiom and there is a lot of legacy code and related experience. And frankly, there are not many other cross-platform options for high-performance programming (C++ would do the task, but the syntax, zero-based arrays, and pointers are too much for most engineers ;-) ). I'm a C++ guy, but I'm stuck with some F90 projects. So, let's assume a new project must use FORTRAN (F90), but I want to build the most modern software architecture out of it, while staying compatible with most "recent" compilers (Intel ifort, but also Sun/HP/IBM's own compilers). So I'm thinking of imposing:

        - global variables forbidden, no gotos, no jump labels, "implicit none", etc.
        - "object-oriented programming" (modules with datatypes + related subroutines)
        - modular/reusable functions; well documented, reusable libraries
        - assertions/preconditions/invariants (implemented using preprocessor statements)
        - unit tests for all (most) subroutines and "objects"
        - an intense "debug mode" (#ifdef DEBUG) with more checks and all possible Intel compiler checks enabled (array bounds, subroutine interfaces, etc.)
        - a uniform and enforced legible coding style, using code-processing tools
        - C stubs/wrappers for libpthread, libDL (and eventually GPU kernels, etc.)
        - C/C++ implementations of utility functions (strings, file operations, sockets, memory alloc/dealloc reference counting for debug mode, etc.)

    (This may all seem like "evident" modern programming practice, but in a legacy FORTRAN world most of these are big changes to the typical programmer workflow.) The goal of all this is to have trustworthy, maintainable, and modular code, whereas in typical FORTRAN, modularity is often not a primary goal and code is trustworthy only if the original developer was very clever and the code has not been changed since (I'm only half joking here). I searched around for references on object-oriented FORTRAN and programming-by-contract (assertions/preconditions/etc.), and found only ugly and outdated documents, syntaxes, and papers by people with no large-scale project involvement, plus dead projects. Any good URLs, advice, or reference papers/books on the subject?
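    As a concrete reference point for the "modules with datatypes + related subroutines" style, a minimal standard-F90 sketch (names invented) that ifort and the Sun/HP/IBM compilers should all accept:

        ! A module acting as a "class": a derived type plus its operations,
        ! with implicit none enforced and internals hidden behind private.
        module particle_mod
          implicit none
          private
          public :: particle_t, particle_init, particle_energy

          type particle_t
             real :: mass
             real :: velocity
          end type particle_t

        contains

          subroutine particle_init(self, mass, velocity)
            type(particle_t), intent(out) :: self
            real, intent(in) :: mass, velocity
            self%mass     = mass
            self%velocity = velocity
          end subroutine particle_init

          function particle_energy(self) result(e)
            type(particle_t), intent(in) :: self
            real :: e
            e = 0.5 * self%mass * self%velocity**2
          end function particle_energy

        end module particle_mod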

    Read the article

  • Using MSBuild 4 command line to publish ASP.NET web application

    - by meandmycode
    With previous MSBuild versions we used the target '_CopyWebApplication' to build and convert a project's source into a published site. This worked OK, but wasn't ideal. In .NET 4 the publishing process is somewhat more sophisticated, and additionally it seems a bit of a black box to understand. While packages look great, I cannot fully understand how they can be harnessed by a build server: the build server would not get any manifest information, and equally, something (MSBuild?) is CREATING this manifest information FROM the project file. On our build server, I ideally want to say "here is my csproj file, deploy it using package configuration 'x'", and I'm trying to understand the workflow I need to make that happen. Right now, when I use _CopyWebApplication, the result is different from doing a publish from Visual Studio 2010; primarily, web.config transforms aren't processed, and obviously msdeploy isn't involved at all. Can somebody point me in the right direction? I believe I need to get MSBuild to do the equivalent of 'Build Deployment Package', and then use msdeploy to deploy this from our build server to our CI testing environments. I know this is a very vague post, but I hope somebody can give me some hints. I'll be continuing to research as well, so if I make any progress I'll post my findings here. Thanks in advance, Stephen.
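    For reference, a rough sketch of the command-line equivalent of 'Build Deployment Package' in the VS 2010 era; the project name, paths, and server URL are placeholders, and the .deploy.cmd flags shown are the usual ones (/Y to actually perform the deployment, /M for the remote Web Deploy agent):

        msbuild MyWeb.csproj /t:Package /p:Configuration=Release
        REM This drops obj\Release\Package\MyWeb.zip plus a generated MyWeb.deploy.cmd
        REM that wraps msdeploy.exe. Then, from the build server:
        obj\Release\Package\MyWeb.deploy.cmd /Y /M:http://ci-server/MSDeployAgentService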

    Read the article

  • How to properly develop and deploy features for existing asp.net applications on IIS

    - by Tomh
    My question actually consists of multiple questions. I frequently read about companies who deploy a small subset of features for a select number of customers using the live "database". Ruby on Rails and its ecosystem have deployment tools and database migrations to deploy or roll back such features in a live production or staging environment. My question: how is this done for an ASP.NET (MVC in particular) application? How do you test your newly released features against live data? Do you have any tools to modify the existing database and roll back changes if necessary? Do you make backups before deployment? Update: Maybe I should point out that my question is not really clear; getting more answers here will help me phrase it better. To make it easier, I will describe a situation I commonly see with some of my clients. My clients have large deployments of popular web applications. They do not have staging/QA/testing servers (yes, this is not optimal). The data their apps consist of is images, XML files, user uploads, and data in SQL Server. Having a few records from their production database and a couple of dummy files is not a substitute for testing against real data, in my opinion. How would you design a workflow that can create an acceptable environment to mimic production before going live?

    Read the article

  • Should I go for Arrays or Objects in PHP in a CouchDB/Ajax app?

    - by karlthorwald
    I find myself converting between array and object all the time in a PHP application that uses CouchDB and Ajax. Of course I am also converting objects to JSON and back (sometimes for CouchDB but mostly for Ajax), but this does not disturb my workflow as much. At present I have PHP objects that are returned by the CouchDB modules I use, and on the other hand I have the old habit of returning arrays like array("error" => "not found", "data" => $dataObj) from my functions. This leads to a mixed occurrence of real PHP objects and nested arrays, and I cast with (object) or (array) where necessary. The worst thing is that I know more or less by heart what a function returns, but not what type (array or object), so I often run into type errors. My plan is now to always cast arrays to objects before returning from a function. Of course this implies a lot of refactoring. Is this the right way to go? What about the conversion overhead? Other ideas or tips? Edit: Kenaniah's answer suggests I should go the other way, meaning I'd cast everything to arrays. For all the Ajax/JSON stuff, and also for CouchDB, I would then use $myarray = json_decode($json_data, $assoc = true). Even more work to change all the CouchDB and Ajax functions, but in the end I have better code.
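    A minimal sketch of the arrays-everywhere convention described in the edit; the variable names are invented, and the point is simply that json_decode's second argument already normalizes the whole nested structure in one step:

        <?php
        // Decode everything arriving from CouchDB or Ajax straight into
        // nested associative arrays (the `true` flag), so only one shape
        // of data circulates inside the application.
        $doc = json_decode($json_from_couch, true);

        function respond(array $payload) {
            header('Content-Type: application/json');
            echo json_encode($payload);   // arrays encode back to JSON as-is
        }

        respond(array('error' => null, 'data' => $doc));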

    Read the article

  • Allow selection of readonly files from SaveFileDialog?

    - by Andy Dent
    I'm using Microsoft.Win32.SaveFileDialog in an app where all files are saved as read-only, but the user needs to be able to choose existing files. The existing files being replaced are renamed, e.g. blah.png becomes blah.png-0.bak, blah.png-1.bak, and so on. Thus the language of the OverwritePrompt is inappropriate - we are not allowing them to overwrite files - so I'm setting dlog.OverwritePrompt = false; The initial filenames for the dialog are generated from document values, so for those it's easy: we rename the candidate file in advance and, if the user cancels or chooses a different name, rename it back again. When I delivered the feature, testers swiftly complained because they wanted to repeatedly save files with names different from the agreed workflow (those goofy, playful guys!). I can't figure out a way to do this with the standard dialogs in a way that will run safely on both XP (for now) and Windows 7. I was hoping to hook into the FileOK event, but that is called after I get a warning dialog:

        |---------------------------------------|
        | blah.png                              |
        | This file is set to read-only.        |
        | Try again with a different file name. |
        |---------------------------------------|

    Read the article

  • Sharing session variables from http and https versions of a site

    - by tangurena
    I am trying to fix an ASP.NET site that a friend botched while converting it from older technologies. To the user, the site appears to have public and secured sections. Behind the scenes, the public and private sites are separate web applications with separate app pools. The difficulty arises because the applications appear to share the same session IDs (when going from the public to the secured pages, the session ID remains the same), yet none of the (InProc) session variables are passed from the public site to the private one. Basically, the workflow consists of the user checking a checkbox ("I agree" type of stuff) on the public site (let's call that page http://www.boring.gov/iAgree.aspx), then logging in on the secured site (let's call that page https://www.boring.gov/login.aspx). The commandments from the parent agency in DC are that the user may not bookmark the login page, the user has to click "I agree" every time they log in, and the "I agree" stuff has to be on a separate page. What am I missing? How would you do it? Notes:

        1. This is hosted on a single Windows 2003 server.
        2. Yes, it is a government agency.
        3. I would have done things very differently if I had done the conversion, but I wasn't brought in until the poop hit the fan, and it is too late to redo things.
        4. Two previous SO threads that appear to be related, yet don't apply, are this and that.

    Read the article

  • Calling .ajax() from an event handler in C# ASP.NET

    - by ibininja
    Good day! In the code-behind (upload.aspx) I have an event that returns the number of bytes being streamed; as I debug it, it works fine. I wanted to reflect the numbers returned from the event handler on a progress bar, and this is where I got lost. I tried using jQuery's .ajax() function, and this is how I implemented it. In the event handler in my code-behind I added this code to call a JavaScript function:

        Page.ClientScript.RegisterStartupScript(this.GetType(), "UpdateProgress",
            "<script type='text/javascript'>updateProgress();</script>");

    My plan is that whenever the event handler changes the value of bytes being streamed, it calls the JavaScript function updateProgress(), which performs the .ajax() call:

        function updateProgress() {
            $.ajax({
                type: "POST",
                url: "upload.aspx/GetData",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                async: true,
                success: function (msg) {
                    $("#progressbar").progressbar("option", "value", msg.d);
                }
            });
        }

    I made sure that the GetData() function is marked [System.Web.Services.WebMethod] and that it is static as well. So the workflow of what I am trying to implement is:

        - Click on the Upload button.
        - The code-behind starts executing and the event handler fires.
        - The event handler calls the .ajax() function.
        - The .ajax() function retrieves the bytes being streamed and updates the progress bar.

    When I ran the code, everything ran well except that the .ajax() call is only executed when the upload is finished (and the progress bar likewise only updates then), even though I call the .ajax() function every time in the event handler, as reflected above. What am I doing wrong? Am I thinking of this right? Is there anything else I should add, maybe an UpdatePanel or something? Thank you
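    One thing worth noting: script registered with RegisterStartupScript during a postback is only emitted when that response completes, so nothing the event handler registers can run mid-upload. A common workaround is to poll GetData from the client on a timer instead; a hedged sketch reusing updateProgress() from above:

        // Poll the page method every 500 ms while the upload runs,
        // rather than waiting for the server to emit script.
        var progressTimer = setInterval(updateProgress, 500);

        function stopProgressPolling() {
            clearInterval(progressTimer);   // call once the upload completes
        }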

    Read the article

  • Will IntelliTrace(tm) (historical debugging) be available for unmanaged C++ in future versions of Visual Studio?

    - by Tim
    I love the idea of historical debugging in VS 2010. However, I am really disappointed that unmanaged C++ is left out: "IntelliTrace supports debugging Visual Basic and C# applications that use .NET version 2.0, 3.0, 3.5, or 4. You can debug most applications, including applications that were created by using ASP.NET, Windows Forms, WPF, Windows Workflow, and WCF. IntelliTrace does not support debugging C++, script, or other languages. Debugging of F# applications is supported on an experimental basis." (Editorial: this is really poor support, in my opinion. .NET is less in need of this assistance than unmanaged C++. I am getting a little tired of plain old C++'s second-class status in the MS tools world. Yes, I realize it is probably WAAY easier to implement this with .NET, and MS is pushing .NET as the future, and yes, I know that C++ is an "old" language, but that does not diminish the fact that there are lots of C++ apps out there and there will continue to be more apps built with C++. I sincerely hope MS has not dropped C++ as a supported developer tool/language - that would be a shame.) Does anyone know if there are plans for it to support C++?

    Read the article

  • TFS 2010 Build gives WorkItemStore error when Create Work Item on Failure is enabled

    - by Derek Morrison
    I'm using TFS 2010 Build. I have a build definition that uses the DefaultTemplate.xaml template that's stock in TFS 2010, and the Create Work Item on Failure property is set to True in the build definition. I deliberately made a change in my project that breaks the build. When the build runs, I see the compilation error reflected in the TFS Build log within Visual Studio, but I get the error "Value cannot be null. Parameter name: WorkItemStore" when TFS Build next tries to generate a work item for the broken build. I tracked down the activity in DefaultTemplate.xaml (see the rather lengthy path to it below) where the work item is created for a broken build, and I see it uses the Microsoft.TeamFoundation.Build.Workflow.Activities.OpenWorkItem class to create the work item. The appropriate values seemed to be filled out in the Properties window for the Create Work Item activity, so I don't see where I can pass WorkItemStore to it, and I don't even know appropriate values for this setting. Path to the Create Work Item activity:

        Process
          > Sequence
            > Run On Agent
              > Try Compile, Test, and Associate Changesets and Work Items
                > Sequence
                  > Compile, Test, and Associate Changesets and Work Items
                    > Try Compile and Test
                      > Compile and Test
                        > For Each Configuration in BuildSettings.PlatformConfigurations
                          > Compile and Test for Configuration
                            > If BuildSettings.HasProjectsToBuild
                              > For Each Project in BuildSettings.ProjectsToBuild
                                > Try to Compile the Project
                                  > Handle Exception
                                    > If CreateWorkItem
                                      > Create Work Item for non-Shelveset Builds
                                        > Create Work Item

    Read the article

  • Dependency between operations in scala actors

    - by paradigmatic
    I am trying to parallelise some code using Scala actors. This is my first real code with actors, but I have some experience with Java multithreading and MPI in C. However, I am completely lost. The workflow I want to realise is a circular pipeline and can be described as follows:

        - Each worker actor has a reference to another one, thus forming a circle.
        - There is a coordinator actor which can trigger a computation by sending a StartWork() message.
        - When a worker receives a StartWork() message, it processes some stuff locally and sends a DoWork(...) message to its neighbour in the circle.
        - The neighbour does some other stuff and sends in turn a DoWork(...) message to its own neighbour. This continues until the initial worker receives a DoWork() message.
        - The coordinator can send a GetResult() message to the initial worker and wait for a reply. The point is that the coordinator should only receive a result when the data is ready.

    How can a worker wait until the job has returned to it before answering the GetResult() message? To speed up computation, any worker can receive a StartWork() at any time. Here is my first-try pseudo-implementation of the worker:

        class Worker(neighbor: Worker, numWorkers: Int) extends Actor {
          var ready = Foo()
          def act() {
            loop {
              react {
                case StartWork() =>
                  val someData = doStuff()
                  neighbor ! DoWork(someData, numWorkers - 1)
                case DoWork(resultData, remaining) =>
                  if (remaining == 0) {
                    ready = resultData
                  } else {
                    val someOtherData = doOtherStuff(resultData)
                    neighbor ! DoWork(someOtherData, remaining - 1)
                  }
                case GetResult() => reply(ready)
              }
            }
          }
        }

    And on the coordinator side:

        worker ! StartWork()
        val result = worker !? GetResult()  // should wait
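    One possible answer to the "how can GetResult() wait" part, sketched below: instead of replying immediately, remember the sender of a GetResult() that arrives too early and reply once the token has completed the circle. This is a hedged sketch in old scala.actors style; doStuff/doOtherStuff are stand-ins, and the neighbor is wired up after construction so the circle can be closed:

        import scala.actors.{Actor, OutputChannel}
        import scala.actors.Actor._

        case class StartWork()
        case class DoWork(data: Any, remaining: Int)
        case class GetResult()

        class Worker(numWorkers: Int) extends Actor {
          var neighbor: Worker = _                          // set when building the circle
          private var ready: Option[Any] = None
          private var waiters: List[OutputChannel[Any]] = Nil

          def act() = loop {
            react {
              case StartWork() =>
                ready = None                                // invalidate the old result
                neighbor ! DoWork(doStuff(), numWorkers - 1)
              case DoWork(data, 0) =>                       // token came full circle
                ready = Some(data)
                waiters.foreach(_ ! data)                   // answer deferred requests
                waiters = Nil
              case DoWork(data, remaining) =>
                neighbor ! DoWork(doOtherStuff(data), remaining - 1)
              case GetResult() =>
                ready match {
                  case Some(r) => reply(r)                  // result already in hand
                  case None    => waiters ::= sender        // defer until it is
                }
            }
          }

          private def doStuff(): Any = "data"               // placeholder work
          private def doOtherStuff(d: Any): Any = d         // placeholder work
        }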

    Read the article

  • Git as mercurial client? Why no git-hg?

    - by aapeli
    This is a question that has been bothering me for a while. I've done my homework and checked stackoverflow and found at least these two topics about my question: "Git for Mercurial like git-svn" and "Git interoperability with a Mercurial repository". I've done some serious googling to solve this issue, but so far with no luck. I've also read the Git Internals book and the Mercurial Definitive Behind the Scenes to try to figure this out. I'm still a bit puzzled why I haven't been able to find any suitable git-hg type of tool. From my perspective, git-svn is one of the main reasons I've chosen git over Mercurial at work: it allows me to use a workflow I like, and nobody else needs to bother if they don't care. I just don't see the point in using an intermediate hg repo to convert back and forth, as suggested in one of the threads. So anyway, from what I've read, hg and git seem very similar in conceptual design. There are differences under the hood, but none of them should prevent creating a git client for hg. As it seems to me, remote-tracking branches and octopus merges make git even more powerful than hg. So, the real question: is there any real reason why git-hg does not exist (or at least is very hard to find)? Is there some animosity from git users (and developers) towards their hg counterparts that has caused the lack of a git-hg tool? Do any of you have plans to develop something like this and go public with it? I could volunteer (although with very feeble C skills) to participate to get this done. I just don't possess the full knowledge to start this up myself. Could this be the tool to end all DVCS wars for good?

    Read the article

  • DataReader-DataSet Hybrid solution

    - by G33kKahuna
    My solution architects and I have exhausted both pure DataSet and pure DataReader solutions. Basically we have a Microsoft .NET 2.0 Windows service application that pulls data based on a query and processes additional tasks per record; almost a poor man's workflow system. The recordsets are broad (in terms of columns) and deep (in terms of number of records). We observed that DataSet performs much better, but runs into constraints as the number of records increases: at say 100K+ we start seeing System.OutOfMemoryException on a 4G machine with processModel configured to run at a memoryLimit of 85. Since this is a multi-threaded app, multiple threads can be processing different queries and building different DataSets, so we hit the exception even sooner in that case. DataReader, on the other hand, works but is a lot slower and hits other constraints: if there is some sort of disconnect it has to start over again, or it leaves open connections on the DB side, and in the worst case takes down the service completely, etc. So we decided the best option would be some sort of hybrid solution. I'm open to guidance and suggestions. Are there any hybrid solutions available? Any other suggestions?
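    One hedged sketch of such a hybrid: keep the streaming DataReader for flat memory use, but buffer rows into small, bounded DataTable batches so each batch can be processed with DataSet-style convenience and then discarded. Connection string, query, and batch size are placeholders:

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        static IEnumerable<DataTable> ReadInBatches(string connStr, string sql, int batchSize)
        {
            using (SqlConnection conn = new SqlConnection(connStr))
            using (SqlCommand cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    // Build an empty table matching the reader's schema.
                    DataTable schema = new DataTable();
                    for (int i = 0; i < reader.FieldCount; i++)
                        schema.Columns.Add(reader.GetName(i), reader.GetFieldType(i));

                    DataTable batch = schema.Clone();
                    while (reader.Read())
                    {
                        DataRow row = batch.NewRow();
                        for (int i = 0; i < reader.FieldCount; i++)
                            row[i] = reader[i];
                        batch.Rows.Add(row);

                        if (batch.Rows.Count >= batchSize)
                        {
                            yield return batch;     // caller processes and discards
                            batch = schema.Clone();
                        }
                    }
                    if (batch.Rows.Count > 0)
                        yield return batch;         // final partial batch
                }
            }
        }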

    Read the article

  • Git force complete sync to master

    - by Jesse
    My workplace uses Subversion for source control, so I have been playing around with git-svn for the advantages of having my own branches, committing as often as I want without touching the main repo, etc. Since my git-svn checkout is local, I have cloned it to a network share as well to act as a backup. My thinking is that if my desktop takes a dump, I will at least have the repo on the network share to recover changes that I have not had a chance to dcommit yet. My workflow is to work from the desktop: make changes, commit, etc. At the end of the day I want to update the repo on the network share with all of my current changes. I had set up the repo on the network share using git clone repo_on_my_desktop and then updated it with git pull origin master. The problem I am running into is when I do a git rebase to squash multiple commits before dcommitting to the main SVN repository. When I do this, I get merge conflicts on the network-share repo when I try to back up at night. Is there a way to simply sync entirely with the repository on my desktop without doing a new git clone each night?
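    One hedged approach: make the backup a bare repository and push to it instead of pulling from it, forcing the push so that rebased (rewritten) history simply replaces what is there. Remote name and path are placeholders:

        # One-time setup: a bare clone makes a safe push target.
        git clone --bare . //server/backup/repo.git
        git remote add share //server/backup/repo.git

        # Nightly backup: overwrite the share with the current (possibly rebased) history.
        git push --force share master        # or back up every ref: git push --mirror share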

    Read the article

  • How to search a PDF in Acrobat Reader AND jump to a certain page via parameter?

    - by agez
    Hi, we are using Lucene within a web application to search a great number of PDF documents. The workflow is like this:

        1. A user enters a search term.
        2. A list of search results is presented to the user. Each search result represents one PDF document and shows the user on which page the search term was found. Each of these pages is represented as a hyperlink.
        3. If the user clicks on such a hyperlink, he jumps directly to that page.

    But now the user has the problem that the search term isn't highlighted on the page, so he has to find it on the page on his own. What we want is a way to highlight the search term on the specific page of the PDF. The open parameters for Acrobat Reader allow for either searching a PDF document (with hit highlighting) OR jumping to a specific page, but the combination of both parameters - which we would need - doesn't work. Does anyone have an idea how jumping to a page and highlighting a search term in a PDF document could work together? I had a look at the Acrobat SDK but don't see how we can use it (it's terribly documented). Cheers, Helmut

    Read the article

  • Handling data update/freshness issue in web-app in general (or GWT specifically)

    - by edwin.nathaniel
    In general, how do you handle data freshness and user-update interaction (the UI issue) in web apps? For example, in a multi-user web app (like project management) where users log in to a "virtual" space and can update project names, etc., how do you handle a situation such that:

        1. user-A and user-B load a project with the title "Project StackOverflow";
        2. user-B updates the title to "Project StackExchange";
        3. user-A updates the title, after user-B's update, to "Project Basecamp".

    The question I'm asking is from the user (UI) perspective, not about transactional operations. What do most people do in this situation? What would you do after user-B updates the title in user-A's view? What happens when user-A tries to update the title after user-B has finished his update? Do you inform user-A that the title has changed and that he has to reload the page? Do you go ahead and change the title and let user-B keep stale data? Do you use some sort of application-level "locking" mechanism (if someone is updating, nobody else can)? Or do you fix the application workflow (who has access to change things, etc.)? What would be the simplest solution that does not annoy the user with extra dialogs and warning messages? I've encountered this problem frequently in a GWT app specifically, where domain models are passed around and refreshing the whole client side isn't optimal in my mind (since it means the whole "loading"/initialization phase must execute again). Maybe the answer is to stay away from GWT? :) Would love to hear options, solutions, and advice from you. Thanks

    Read the article

  • Handling missing/incomplete data in R--is there a function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance, many R functions have an 'na.rm' flag that you can set to TRUE to remove the NAs:

        mean(c(5, 6, 12, 87, 9, NA, 43, 67), na.rm = TRUE)

    But if you want to deal with NAs before the function call, you need to do something like this. To remove each NA from a vector:

        vx <- vx[!is.na(vx)]

    To replace each NA in a vector with a 0:

        vx <- ifelse(is.na(vx), 0, vx)

    To remove every row that contains an NA from a data frame:

        dfx <- dfx[complete.cases(dfx), ]

    All of these permanently remove the NAs, or the rows containing them. Sometimes this isn't quite what you want, though: making an NA-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column whose rows were dropped by a prior call to complete.cases even though that column itself contains no NA values). To be as clear as possible about what I'm looking for: Python/NumPy has a class, 'masked array', with a 'mask' method, which lets you conceal--but not remove--NAs during a function call. Is there an analogous facility in R?
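    For comparison, a minimal sketch of non-destructive, mask-style handling in base R: keep the full vector, carry a logical mask alongside it, and apply the mask per computation instead of rewriting the data.

        vx <- c(5, 6, 12, 87, 9, NA, 43, 67)
        mask <- !is.na(vx)     # nothing is removed from vx itself

        mean(vx[mask])         # "masked" mean; vx still has all 8 elements
        sum(mask)              # how many values are currently visible
        vx                     # original data intact, NA still in place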

    Read the article

  • IE browser script to determine which (if any) ActiveX control will handle specific mime type

    - by Jay13
    I'm trying to figure out, in an IE script (JavaScript or VBScript), which ActiveX control will handle a specific MIME type, "image/tiff" in this case. This is easy in other browsers that use plugins:

        navigator.mimeTypes["image/tiff"].enabledPlugin.name

    which would return something like "QuickTime Plug-in X.X.X". I've found plenty of examples for telling whether a specific plugin is loaded, but since there are several plugins available that can handle TIFF images, I need to know which one, if any, is registered to handle this MIME type. The problem I'm trying to deal with is that QuickTime always wants to register itself as the default TIFF viewer, but it does a terrible job of it, resulting in lots of support calls. Unfortunately, simply detecting that QuickTime is installed isn't good enough, since the user may also have another TIFF viewer installed (like AlternaTIFF) as the default, or may have configured QuickTime not to be the default viewer for TIFF images, in which case the browser could be using a helper app to display the image instead. Not meaning to be difficult, but before anyone suggests re-engineering workarounds:

        - Yes, I know I could force the user to use a specific ActiveX viewer in IE or a Java TIFF viewer, but I'd rather let them use a viewer of their choice than force them to install a viewer of my choosing, especially since their viewer may be a helper app that loads the TIFF image into a business workflow within their office.
        - Yes, I know there are other image formats I could use, but TIFF is the de facto standard for document imaging and that's what the vast majority of these users prefer. The problem isn't the image format; it's that QuickTime just doesn't cut it as a TIFF viewer.

    Thanks in advance for any suggestions or solutions...
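    For background, Windows records the MIME-type-to-handler mapping in the registry under HKCR\MIME\Database\Content Type\, which in-page script deliberately cannot read. A hedged sketch of what the lookup looks like outside the browser sandbox (Windows Script Host):

        ' Read the CLSID registered for image/tiff; an in-browser script
        ' has no access to this, which is the heart of the problem.
        Set sh = CreateObject("WScript.Shell")
        clsid = sh.RegRead("HKCR\MIME\Database\Content Type\image/tiff\CLSID")
        WScript.Echo "image/tiff handler CLSID: " & clsid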

    Read the article
