Search Results

Search found 10621 results on 425 pages for 'task queue'.


  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set. To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. However, in production I got the following exception: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction. This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts or updates into temporary tables. All access to non-temporary tables is through selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case. So my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction? If the long-running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
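    For reference, a minimal Java sketch of the single-transaction approach described above, using Spring's TransactionTemplate; the class name, the process_batch procedure and the batch count are illustrative, not taken from the question.

        import org.springframework.jdbc.core.JdbcTemplate;
        import org.springframework.transaction.PlatformTransactionManager;
        import org.springframework.transaction.TransactionStatus;
        import org.springframework.transaction.support.TransactionCallbackWithoutResult;
        import org.springframework.transaction.support.TransactionTemplate;

        public class TempTableJob {
            private final TransactionTemplate txTemplate;
            private final JdbcTemplate jdbc;

            public TempTableJob(PlatformTransactionManager txManager, JdbcTemplate jdbc) {
                this.txTemplate = new TransactionTemplate(txManager);
                this.jdbc = jdbc;
            }

            public void runAllBatches(final int batchCount) {
                txTemplate.execute(new TransactionCallbackWithoutResult() {
                    @Override
                    protected void doInTransactionWithoutResult(TransactionStatus status) {
                        // Every batch runs on the same connection, so MySQL temporary
                        // tables created by earlier batches stay visible to later ones.
                        for (int i = 0; i < batchCount; i++) {
                            jdbc.update("CALL process_batch(?)", i); // hypothetical procedure
                        }
                        // Fetch results here, before the transaction (and the
                        // connection-scoped temporary tables) goes away.
                    }
                });
            }
        }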

    Read the article

  • Synchronizing screencasting (ffmpeg) and capturing from the webcam (OpenCV)

    - by lyuba
    As in my previous questions, I am trying to build a simple eye tracker. I decided to start with a Linux version (running Ubuntu). To complete this task one should organize screencasting and webcam capturing in such a way that frames from both streams exactly match each other and both contain the same total number of frames. The screencast fps fully depends on the camera's fps, so each time we get an image from the webcam we can potentially grab a screen frame and stay happy. However, all the tools for fast screencasting, like ffmpeg, for example, return an .avi file as the result and require the fps to be known before they are started. On the other hand, tools like Java+Robot or ImageMagick seem to require around 20ms to return a .jpg screenshot, which is pretty slow for the task. But they may be requested right after each webcam frame is grabbed and so provide the needed synchronization. So the sub-questions are: 1. Does the USB camera's frame rate vary during a single session? 2. Are there any tools which provide fast screencasting frame by frame? 3. Is there any way to make ffmpeg push a new frame to the .avi file only when the program initiates this request? For my task I may use either C++ or Java. I am, actually, an interface designer, not a driver programmer, and this task seems to be pretty low-level. I would be grateful for any suggestions and tips!
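    A minimal Java sketch of the pairing idea mentioned above: grab exactly one screenshot with java.awt.Robot each time a webcam frame arrives (the webcam capture loop itself is not shown, and the class name is illustrative).

        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;

        public class ScreenFrameGrabber {
            private final Robot robot;
            private final Rectangle screen;

            public ScreenFrameGrabber() throws Exception {
                robot = new Robot();
                screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            }

            // Call once per webcam frame so each screen frame is paired with
            // exactly one camera frame, whatever the camera's frame rate is.
            public BufferedImage grabScreenFrame() {
                return robot.createScreenCapture(screen);
            }
        }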

    Read the article

  • SQL Server 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. SQL Mail operates perfectly on both instances. DB Mail operates perfectly on the newly installed instance. On the upgraded instance, DB Mail does not send any mail. Of course, I am not positive that the fact this instance was upgraded has anything to do with the issue, but it might. The configuration of my DB Mail profile and account looks identical to my functioning instance. In the configuration of the 'Alerts' tab in the SQL Agent properties I have tried selecting both DB Mail and SQL Mail to no avail. Both instances use the same SMTP server with the same authentication (domain authentication with the database engine account). All messages sent via sp_send_dbmail and those sent via the 'test email' option are visible in the sysmail_allitems queue and remain there as 'unsent'. The send_status eventually changes to 'failed'. The only messages in the sysmail_event_log are 'mail queue stopped by login domain\myuser', 'mail queue started by login domain\myuser' and 'activation successful'. Selecting from the externalmailqueue returns the same number of rows as sysmail_allitems. I have tried bouncing the agent, bouncing the entire instance and moving the other, functioning instance to the other node in the cluster. Any thoughts? Thanks.
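    A hedged sketch of Database Mail diagnostics to run in msdb on the upgraded instance; the objects are the standard msdb mail views and procedures, and the TOP counts are arbitrary.

        -- Is the Database Mail (Service Broker) queue started, and how long is it?
        EXEC msdb.dbo.sysmail_help_status_sp;
        EXEC msdb.dbo.sysmail_help_queue_sp @queue_type = 'Mail';

        -- Recent items and their send status ('unsent'/'failed' as described above).
        SELECT TOP (20) mailitem_id, recipients, sent_status, send_request_date
        FROM msdb.dbo.sysmail_allitems
        ORDER BY send_request_date DESC;

        -- Per-item error detail, if the external mail process logged any.
        SELECT TOP (20) log_date, event_type, description, mailitem_id
        FROM msdb.dbo.sysmail_event_log
        ORDER BY log_date DESC;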

    Read the article

  • Fabric "TypeError: not all arguments converted during string formatting"

    - by Brian Carpio
    I have the following fabric task: @task def deploy_west_ec2_ami(name, puppetClass, size='m1.small', region='us-west-1', basedn='joe', ldap='arch-ldap-01', secret='secret', subnet='subnet-d43b8abd', sgroup='sg-926578fe'): execute(deploy_ec2_ami, name='%s',puppetClass='%s',size='%s',region='%s',basedn='%s',ldap='%s',secret='%s',subnet='%s',sgroup='%s' %(name, puppetClass, size, region, basedn, ldap, secret, subnet, sgroup)) However, when I run the command: fab deploy_west_ec2_ami:test,java I get the following traceback: Traceback (most recent call last): File "/usr/local/lib/python2.6/dist-packages/fabric/main.py", line 710, in main *args, **kwargs File "/usr/local/lib/python2.6/dist-packages/fabric/tasks.py", line 321, in execute results['<local-only>'] = task.run(*args, **new_kwargs) File "/usr/local/lib/python2.6/dist-packages/fabric/tasks.py", line 113, in run return self.wrapped(*args, **kwargs) File "/home/bcarpio/Projects/githubenterprise/awsdeploy/fabfile.py", line 35, in deploy_west_ec2_ami execute(deploy_ec2_ami, name='%s',puppetClass='%s',size='%s',region='%s',basedn='%s',ldap='%s',secret='%s',subnet='%s',sgroup='%s' %(name, puppetClass, size, region, basedn, ldap, secret, subnet, sgroup)) TypeError: not all arguments converted during string formatting I am not sure I understand why; I am pretty sure I have all the values defined here just fine. Also, when I run the deploy_ec2_ami task directly, like so: deploy_ec2_ami:test,java,m1.small,us-west-1,'dc\=test\,dc\=net',ldap-01,secret,subnet-d43b8abd,sg-926578fe it works just fine.
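    The error comes from operator precedence: only the last keyword argument (sgroup) gets the % operator, so a single '%s' is asked to format a nine-element tuple. A sketch of the likely intended call, passing the values straight through; it assumes deploy_ec2_ami is defined elsewhere in the fabfile, as in the question.

        from fabric.api import execute, task

        @task
        def deploy_west_ec2_ami(name, puppetClass, size='m1.small', region='us-west-1',
                                basedn='joe', ldap='arch-ldap-01', secret='secret',
                                subnet='subnet-d43b8abd', sgroup='sg-926578fe'):
            # No string formatting is needed; hand the values to execute() as-is.
            execute(deploy_ec2_ami, name=name, puppetClass=puppetClass, size=size,
                    region=region, basedn=basedn, ldap=ldap, secret=secret,
                    subnet=subnet, sgroup=sgroup)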

    Read the article

  • lock shared data using c#

    - by menacheb
    Hi, I have a program (C#) with a list of tasks to do. I also have two threads: one to add tasks to the list, and one to read and remove the completed tasks from it. I'm using the 'lock' statement each time one of the threads wants to access the list. Another thing I want is that, if the list is empty, the thread that needs to read from the list will sleep, and wake up when the first thread adds a task to the list. Here is the code I wrote: ... List<String> myList = new List(); Thread writeThread, readThread; writeThread = new Thread(write); writeThread.Start(); readThraed = new Thread(read); readThread.Start(); ... private void write() { while(...) { ... lock(myList) { myList.Add(...); } ... if (!readThread.IsAlive) { readThraed = new Thread(read); readThread.Start(); } ... } ... } private void read() { bool noMoreTasks = false; while (!noMoreTasks) { lock (MyList)//syncronize with the ADD func. { if (dataFromClientList.Count > 0) { String task = myList.First(); myList.Remove(task); } else { noMoreTasks = true; } } ... } readThread.Abort(); } Apparently I did it wrong, and it's not behaving as expected (the read thread doesn't read from the list). Does anyone know what my problem is, and how to make it right? Many thanks,
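    One common C# pattern for the sleep/wake behaviour described above is Monitor.Wait/Monitor.Pulse on a shared lock object, rather than restarting the reader thread. A minimal sketch under that assumption; the class and member names are illustrative, not taken from the question.

        using System.Collections.Generic;
        using System.Threading;

        class TaskList
        {
            private readonly List<string> items = new List<string>();
            private readonly object sync = new object();

            // Writer thread: add a task and wake a sleeping reader.
            public void Add(string task)
            {
                lock (sync)
                {
                    items.Add(task);
                    Monitor.Pulse(sync);    // wake one thread waiting on 'sync'
                }
            }

            // Reader thread: block while the list is empty instead of exiting.
            public string Take()
            {
                lock (sync)
                {
                    while (items.Count == 0)
                        Monitor.Wait(sync); // releases the lock while sleeping
                    string task = items[0];
                    items.RemoveAt(0);
                    return task;
                }
            }
        }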

    Read the article

  • How to start writing out an existing AudioQueue in response to an event?

    - by Halle
    Hello, I am writing a class that opens an AudioQueue and analyzes its characteristics, and then under certain conditions can begin or end writing out a file from that AudioQueue that is already instantiated. This is my code (entirely based on SpeakHere) that opens the AudioQueue without writing anything out to tmp: void AQRecorder::StartListen() { int i, bufferByteSize; UInt32 size; try { SetupAudioFormat(kAudioFormatLinearPCM); XThrowIfError(AudioQueueNewInput(&mRecordFormat, MyInputBufferHandler, this, NULL, NULL, 0, &mQueue), "AudioQueueNewInput failed"); mRecordPacket = 0; size = sizeof(mRecordFormat); XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription, &mRecordFormat, &size), "couldn't get queue's format"); bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds); for (i = 0; i < kNumberRecordBuffers; ++i) { XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]), "AudioQueueAllocateBuffer failed"); XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL), "AudioQueueEnqueueBuffer failed"); } mIsRunning = true; XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed"); } catch (CAXException &e) { char buf[256]; fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf)); } catch (...) { fprintf(stderr, "An unknown error occurred\n"); } } But I'm a little unclear on how to write a function that will tell this queue "from now until the stop signal, start writing out this queue to tmp as a file". I understand how to tell an AudioQueue to write out as a file at the time that it's created, how to set files format, etc, but not how to tell it to start and stop midstream. Much appreciative of any pointers, thanks.

    Read the article

  • swfupload session problem destroy session

    - by saquib
    Hello friends, I have a problem with SWFUpload. I am passing session_id() like this: /upload-file.php?s=189477fcfa1ec7f630e70a09e1e84cae but it is not maintaining the session and is destroying my current session (logging me out). Here is the code in the upload file: <?php if(isset($_GET['s'])) { session_id($_GET['s']); session_start(); require_once 'admin/class/user.php'; $u = new User(); //Check for user logged in if($u->islogged() == FALSE) { header("location: index.php"); exit(); code continue ..... } Because I am not logged in, the server redirects me to index.php. This is the SWFUpload debugger window output: SWF DEBUG: ----- END SWF DEBUG OUTPUT ---- SWF DEBUG: SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: *.jpg SWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list... SWF DEBUG: Event: fileQueued : File ID: SWFUpload_0_0 SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1 SWF DEBUG: StartUpload: First file in queue SWF DEBUG: Event: uploadStart : File ID: SWFUpload_0_0 SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /upload-file.php?s='189477fcfa1ec7f630e70a09e1e84cae' for File ID: SWFUpload_0_0 SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_0_0 SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 317793. Total: 317793 SWF DEBUG: Event: uploadError: HTTP ERROR : File ID: SWFUpload_0_0. HTTP Status: 302. SWF DEBUG: Event: uploadComplete : Upload cycle complete. SWF DEBUG: StartUpload: First file in queue SWF DEBUG: StartUpload(): No files found in the queue.

    Read the article

  • SSIS Transaction with Sql Transaction

    - by Mike
    I started with a package to make sure transactions are working correctly. The package-level transaction is set to Required. I have two Execute SQL Tasks: one deletes rows from a table and one does 1/0 to throw an error. Both tasks are set to the Supported transaction level and the Serializable IsolationLevel. That works. Now, when I replace my two SQL tasks with two separate procedure calls, the first one, ChargeInterest, runs successfully, but the second one, PaymentProcess, always fails with: [Execute SQL Task] Error: Executing the query "Exec [proc_xx_NotesReceivable_PaymentProcess] ..." failed with the following error: "Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. PaymentProcess is the second stored procedure. Both procedures have their own BEGIN, COMMIT and ROLLBACK inside the SP. I believe that the transactions are being handled successfully in ChargeInterest because I can run the following without issues or the dreaded "you started with 0 and now have 1" transaction error: EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300 EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300 --OR GO BEGIN TRAN EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300 EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300 ROLLBACK TRAN Now, I have noticed that DTC does get kicked off in both instances. Why, I am not sure, because it is using the same connection. In the live example I can see the transaction get started, but it disappears if I put a breakpoint on the PreExecute event of the second stored procedure. What is the correct way to mingle SP transactions with SSIS transactions?
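    The "uncommittable transaction" message typically appears when an error is caught inside a procedure while the surrounding (SSIS/DTC) transaction is already doomed. A hedged sketch of a TRY/CATCH pattern that checks XACT_STATE() before committing or rolling back; this is an illustrative skeleton, not the actual PaymentProcess body.

        -- Illustrative pattern only; not the real proc_xx_NotesReceivable_PaymentProcess.
        BEGIN TRY
            BEGIN TRANSACTION;
            -- ... payment processing work ...
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            -- XACT_STATE() = -1: the transaction is uncommittable and can only roll back.
            -- XACT_STATE() =  1: still committable, but we roll our work back anyway.
            IF XACT_STATE() <> 0
                ROLLBACK TRANSACTION;

            -- Re-raise so the Execute SQL Task still reports the failure.
            DECLARE @msg nvarchar(2048);
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
        END CATCH;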

    Read the article

  • UIActivityIndicatorView in a class without a view

    - by Structurer
    Hi. I have defined a class that does a lengthy task, and I call it from several other classes. Now I want to show an activity indicator while this task is doing its thing, and then remove it once it's done. Since this is just a boring background task, this class doesn't have a view, and I guess that is where I run into my problem: I can't get this thing to show. This is what I have done in my class: UIActivityIndicatorView *activityIndicator = [[UIActivityIndicatorView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 32.0f, 32.0f)]; [activityIndicator setCenter:CGPointMake(160.0f, 208.0f)]; activityIndicator.activityIndicatorViewStyle = UIActivityIndicatorViewStyleWhite; UIView *contentView = [[UIView alloc]initWithFrame:[[UIScreen mainScreen] applicationFrame]]; [contentView addSubview:activityIndicator]; [activityIndicator startAnimating]; // Do the class lengthy task that takes several seconds..... [contentView release]; [activityIndicator release]; I guess I do something wrong when I get the contentView, but how should I get it properly? Thanks for any advice...

    Read the article

  • TFS query mixing Tasks and Bugs, sorted by Priority

    - by Val
    We're using TFS with MSF for Agile 4.2 on a project, and I have a bunch of work to do, both Tasks and Bugs. Both are prioritized by our managers, and assigned due dates and target releases. I use a Work Item query as my main TODO list, and I want to list all the Work Items assigned to me, in order by due date and priority. Problem: I can't seem to find a way to write a unified query that will list both Tasks and Bugs sorted by date and then priority. The problem is that Tasks and Bugs use different fields for Priority. So, my query currently lists the tasks by Due Date, then by Task Priority, then it lists Bugs by Due Date, then by Priority. So, I see tasks that are due later than bugs:

        Title   Due Date    Priority   Task Priority
        task1   4/23/2010              Medium
        task2   4/23/2010              High
        task3   4/30/2010              Low
        task4   4/30/2010              Medium
        bug1    4/23/2010   1
        bug2    4/23/2010   2

    What I want:

        Title   Due Date    Priority   Task Priority
        task1   4/23/2010              Medium
        task2   4/23/2010              High
        bug1    4/23/2010   1
        bug2    4/23/2010   2
        task3   4/30/2010              Low
        task4   4/30/2010              Medium

    I don't care if the bugs come before or after the tasks on the same due date; I just want all the work items grouped together by due date, so I never see Tasks for a later due date before Bugs for an earlier one. Another problem is the sorting on Task Priority -- alpha sort means I can't get them to sort by the meaning of the priority. But that's a minor problem I can live with if I can get the Tasks and Bugs intermingled. Any way to do this in a single query?

    Read the article

  • Returning objects from another thread?

    - by Mark
    Trying to follow the hints laid out here, but she doesn't mention how to handle it when your collection needs to return a value, like so: private delegate TValue DequeueHandler(); public virtual TValue Dequeue() { if (dispatcher.CheckAccess()) { --count; var pair = dict.First(); var queue = pair.Value; var val = queue.Dequeue(); if (queue.Count == 0) dict.Remove(pair.Key); OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Remove, val)); return val; } else { dispatcher.BeginInvoke(new DequeueHandler(Dequeue)); } } This obviously won't work, because dispatcher.BeginInvoke doesn't return anything. What am I supposed to do? Or maybe I could replace dequeue with two functions, Peek and Pop, where Peek doesn't really need to be on the UI thread because it doesn't modify anything, right? As a side question, these methods don't need to be "locked" either, do they? If they're all forced to run in the UI thread, then there shouldn't be any concurrency issues, right?
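    A hedged sketch of the usual workaround: the synchronous Dispatcher.Invoke (unlike BeginInvoke) returns the delegate's result as an object, which can be cast back to TValue. The names follow the snippet above; this is one possible shape, not the only one.

        public virtual TValue Dequeue()
        {
            if (!dispatcher.CheckAccess())
            {
                // Marshal the call onto the UI thread and wait for its result.
                // Dispatcher.Invoke(Delegate) returns the delegate's return value.
                return (TValue)dispatcher.Invoke(new DequeueHandler(Dequeue));
            }

            --count;
            var pair = dict.First();
            var queue = pair.Value;
            var val = queue.Dequeue();
            if (queue.Count == 0)
                dict.Remove(pair.Key);
            OnCollectionChanged(new NotifyCollectionChangedEventArgs(
                NotifyCollectionChangedAction.Remove, val));
            return val;
        }

    As for the side question: if every read and write is marshalled through the dispatcher this way, the calls are serialized on the UI thread and explicit locking is not needed; a reader that bypasses the dispatcher, however, could still observe the collection mid-change.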

    Read the article

  • Facing error in apple multitasking code

    - by user366584
    I am working on multitasking support and am facing an error; please assist me, as I need the application to work both in the background and in the foreground. The code is:

    - (void)applicationDidEnterBackground:(UIApplication *)application {
        UIApplication* app = [UIApplication sharedApplication];

        // Request permission to run in the background. Provide an
        // expiration handler in case the task runs long.
        NSAssert(bgTask == UIBackgroundTaskInvalid, nil);
        bgTask = [app beginBackgroundTaskWithExpirationHandler:^{
            // Synchronize the cleanup call on the main thread in case
            // the task actually finishes at around the same time.
            dispatch_async(dispatch_get_main_queue(), ^{
                if (bgTask != UIBackgroundTaskInvalid) {
                    [app endBackgroundTask:bgTask];
                    bgTask = UIBackgroundTaskInvalid;
                }
            });
        }];

        // Start the long-running task and return immediately.
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            // Do the work associated with the task.

            // Synchronize the cleanup call on the main thread in case
            // the expiration handler is fired at the same time.
            dispatch_async(dispatch_get_main_queue(), ^{
                if (bgTask != UIBackgroundTaskInvalid) {
                    [app endBackgroundTask:bgTask];
                    bgTask = UIBackgroundTaskInvalid;
                }
            });
        });
    }

    Read the article

  • WCF throttling and emptying the listenBacklog quickly

    - by user1161625
    I'm sort of a WCF newbie (inheriting tasks from someone) and I'm trying to better throttle my application. We have a service that sends print jobs to the Windows print queue and it seems that rather than dumping all of the jobs into the queue it holds on to them in the listenBacklog and slowly releases them to the Windows print queue. Here is my throttling behavior: <behavior name="throttling"> <serviceThrottling maxConcurrentCalls="144" maxConcurrentSessions="1168" /> <serviceDebug httpHelpPageEnabled="true" includeExceptionDetailInFaults="true"/> </behavior> I'm using a custom binding for this service also which is here: <customBinding> <binding name="oneWayPrintBinding" receiveTimeout="00:30:00"> <reliableSession maxPendingChannels="16" inactivityTimeout="01:30:00" /> <binaryMessageEncoding> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" /> </binaryMessageEncoding> <tcpTransport maxReceivedMessageSize="67108864" listenBacklog="1024" /> </binding> </customBinding> Any recommendations would be very helpful. Thank you! -Jim

    Read the article

  • I want to run two or more procedures in parallel

    - by binod gyawali
    I have a list of procedures. The procedures are not dependent on each other, so what I need to do is run the independent procedures in parallel. I have 4 procedures that are to be run in parallel. When those procedures have run successfully, I need to move on to the next task. These procedures create about 10 tables. The next task is to execute a second set of procedures. I have made a table in which I describe the dependency of these procedures on the tables created above. After any one of the first 4 procedures completes, I should come to this second set of procedures and find those whose dependency tables have already been created. If all of a procedure's dependent tables have been created, I need to execute that procedure. Running the 4 procedures in parallel is done by DTS, but the difficulty for me is handing off from the first 4 procedures to the second set of procedures. Please help me complete this task. Thanks in advance.
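    A hedged sketch of the "which procedures are now runnable" check, assuming a hypothetical dependency table dbo.proc_table_dependency(ProcName, TableName) shaped like the one described above:

        -- One row per (procedure, table it needs); a procedure is runnable
        -- once every table it depends on exists in the database.
        SELECT d.ProcName
        FROM dbo.proc_table_dependency AS d
        GROUP BY d.ProcName
        HAVING COUNT(*) = SUM(CASE WHEN OBJECT_ID(d.TableName, 'U') IS NOT NULL
                                   THEN 1 ELSE 0 END);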

    Read the article

  • Using CSS to insert text

    - by abelenky
    I'm relatively new to CSS, and have used it to change the style and formatting of text. I would now like to use it to insert text, as shown below: <span class="OwnerJoe">reconcile all entries</span> which I hope to show as: Joe's Task: reconcile all entries That is, simply by virtue of being of class "OwnerJoe", I want the text Joe's Task: to be displayed. I could do it with code like: <span class="OwnerJoe">Joe's Task:</span> reconcile all entries. But it seems awfully redundant to specify both the class and the text. Is it possible to do what I'm looking for? EDIT: One idea is to try to set it up as a list item <li> where the "bullet" is the text "Joe's Task". I see examples of how to set various bullet styles and even images for bullets. Is it possible to use a small block of text for the list bullet?
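    A minimal sketch of the generated-content approach with the ::before pseudo-element (older browsers use the single-colon :before form); the bold styling is optional.

        /* Prepends the label to any element with the class, so the markup can
           stay <span class="OwnerJoe">reconcile all entries</span>. */
        .OwnerJoe::before {
            content: "Joe's Task: ";
            font-weight: bold;
        }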

    Read the article

  • Help with accessing a pre-existing window AFTER opener is refreshed!

    - by Wilhelm Murdoch
    Alright, I'm at my wit's end on this issue. First, backstory. I'm working on a video management system where we're allowing users, when adding new content, to upload and, optionally, transcode a media file. We're using Java applet for the browser-based FTP client. What I want to do is allow a user to initiate an upload and then send the FTP connection instance to a popup window. This window will act as a job queue for the FTP transfer process. This will allow users to move about the main interface without having to stay on the original page until an individual file transfer is complete. For the most part I have all of this working, but here's a problem. If the window is closed, all connections are dropped and the upload process for all queued files will be canceled. So, if Window One opens the Popup Window, adds stuff to the queue, refreshes the screen or moves to a different page, how will I access the Popup Window? The popup window and its contents must remain persistent while the user navigates through the original window. The original window must be able to access the popup to add a new job to the queue. The popup window itself is independent of the opening window, so communication only happens in one direction: Parent - Popup Not Parent <- Popup Window.open(null, 'WINDOW_NAME'); will not work in this case. I need to check if a window exists BEFORE using window.open. Help!?!?
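    A hedged sketch of one way to check for and reuse an existing named popup before deciding whether to load it: window.open with an empty URL and a window name returns a reference to an existing window of that name without navigating it, while a brand-new window starts out blank. The queue page URL, the window name and the addJob function are placeholders.

        // Parent-side helper: reuse the queue window if it is already open.
        function getQueueWindow() {
            var popup = window.open('', 'FTP_QUEUE_WINDOW', 'width=600,height=400');
            if (popup.location.href === 'about:blank') {
                // New (or empty) window: load the queue page for the first time.
                popup.location.href = '/ftp-queue.html';
            }
            return popup;
        }

        // Parent -> popup communication only, as described above.
        var queueWin = getQueueWindow();
        // queueWin.addJob(connectionInfo);  // hypothetical function exposed by the queue page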

    Read the article

  • Silverlight - C# and LINQ - Ordering By Multiple Fields

    - by user70192
    Hello, I have an ObservableCollection of Task objects. Each Task has the following properties: AssignedTo, Category, Title, AssignedDate. I'm giving the user an interface to select which of these fields they want to sort by. In some cases, a user may want to sort by up to three of these properties. How do I dynamically build a LINQ statement that will allow me to sort by the selected fields in either ascending or descending order? Currently, I'm trying the following, but it appears to only sort by the last sort applied: var sortedTasks = from task in tasks select task; if (userWantsToSortByAssignedTo == true) { if (sortByAssignedToDescending == true) sortedTasks = sortedTasks.OrderByDescending(t => t.AssignedTo); else sortedTasks = sortedTasks.OrderBy(t => t.AssignedTo); } if (userWantsToSortByCategory == true) { if (sortByCategoryDescending == true) sortedTasks = sortedTasks.OrderByDescending(t => t.Category); else sortedTasks = sortedTasks.OrderBy(t => t.Category); } Is there an elegant way to dynamically append order clauses to a LINQ statement?
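    A minimal sketch of the usual fix: the first key uses OrderBy/OrderByDescending, and every later key uses ThenBy/ThenByDescending on the resulting IOrderedEnumerable, so later sorts refine rather than replace the earlier ones. Property names follow the question; the flag parameters are illustrative.

        // Requires: using System.Collections.Generic; using System.Linq;
        // Task here is the question's own Task class.
        IEnumerable<Task> SortTasks(IEnumerable<Task> tasks,
                                    bool byAssignedTo, bool assignedToDesc,
                                    bool byCategory, bool categoryDesc)
        {
            IOrderedEnumerable<Task> sorted = null;

            if (byAssignedTo)
                sorted = assignedToDesc ? tasks.OrderByDescending(t => t.AssignedTo)
                                        : tasks.OrderBy(t => t.AssignedTo);

            if (byCategory)
                sorted = sorted == null
                    ? (categoryDesc ? tasks.OrderByDescending(t => t.Category)
                                    : tasks.OrderBy(t => t.Category))
                    : (categoryDesc ? sorted.ThenByDescending(t => t.Category)
                                    : sorted.ThenBy(t => t.Category));

            return sorted ?? tasks;
        }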

    Read the article

  • Framework or design pattern for mailing all users of a webapp

    - by Todd Owen
    My app takes care of user registration (with the option to receive email announcements), and can easily handle the actual template-based rendering of email for a given user. JavaMail provides the mail transport layer. But how should I design the application layer between the business objects (e.g. User) and the mail transport? The straightforward approach would be a simple, synchronous loop: iterate through the users, queue the emails, and be done with it. "Queue" might mean sending them straight to the MTA (mail server), or to an in-memory queue to be consumed by another thread. However, I also plan to implement features like throttling the rate of emails, processing bounced emails (NDRs), and maintaining status across application restarts. My intuition is that a good design would decouple this from both the business layer and the mail transport layer as much as possible. I wondered if others had solved this problem before, but after much searching I haven't found any Java libraries which seem to fit this problem. Standalone mail apps such as James or list servers are too large in scope; packages like Spring's MailSender or Commons Email are too small in scope (being basically drop-in replacements for JavaMail). For other languages I haven't found anything appropriate either. I'm curious about how other developers have gone about adding bulk mailing to their applications.
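    One common shape for the decoupling described above is a small in-memory queue between the business layer and the mail transport, drained by a dedicated dispatcher thread that can throttle sends. A rough sketch under those assumptions; OutgoingMessage and sendViaJavaMail are placeholders rather than an existing library API, and durable status/bounce handling is deliberately left out.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        public class MailDispatcher implements Runnable {

            // Placeholder for whatever the business layer hands over (user + rendered body).
            public static class OutgoingMessage {
                final String recipient;
                final String body;
                public OutgoingMessage(String recipient, String body) {
                    this.recipient = recipient;
                    this.body = body;
                }
            }

            private final BlockingQueue<OutgoingMessage> queue = new LinkedBlockingQueue<OutgoingMessage>();
            private final long delayBetweenSendsMillis;

            public MailDispatcher(long delayBetweenSendsMillis) {
                this.delayBetweenSendsMillis = delayBetweenSendsMillis;
            }

            // Called by the business layer; it knows nothing about JavaMail.
            public void enqueue(OutgoingMessage msg) {
                queue.offer(msg);
            }

            @Override
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        OutgoingMessage msg = queue.poll(1, TimeUnit.SECONDS);
                        if (msg == null) continue;             // nothing to send right now
                        sendViaJavaMail(msg);                  // transport-layer boundary
                        Thread.sleep(delayBetweenSendsMillis); // crude rate throttling
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }

            private void sendViaJavaMail(OutgoingMessage msg) {
                // Placeholder: build a MimeMessage and call Transport.send(...) here.
            }
        }

    Persisting queue state across restarts and processing bounces would require a durable store behind this interface, which the sketch does not attempt.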

    Read the article

  • How to implement message queuing and handling in AWS with NServiceBus

    - by Pete Lunenfeld
    I am creating a new ASP.NET MVC order application in the Amazon (AWS) cloud with the persistence layer at my local datacenter. I will be using the CQRS pattern. The goal of the project is high availability, using queue(s) to store and forward writes (commands/events) that can be picked up and handled asynchronously at my local datacenter. Then, if the WAN or my local datacenter fails, my cloud MVC app can still take orders and just queue them up until processing can resume. My first thought was to use AWS SQS for the queuing and create my own queue consumer/dispatcher/handler in my own C# application to process the incoming messages/events: MVC (@ Amazon) -- Event/POCO -- SQS -- QueueReader (@ my datacenter) -- DB. Then I found NServiceBus. NSB seems to handle lots of details very nicely: message handling, retries, error handling, etc. I hate to reinvent the wheel, and NServiceBus seems like a full-featured and mature product that would be perfect for me. But on further research, it does NOT look like NServiceBus is really meant to be used over the WAN in physically separated environments (cloud to my datacenter). Google and SO don't really paint a good picture of using NServiceBus across the WAN the way I need. How can I use NServiceBus across the WAN? Or is there a better solution for queuing and message handling between Amazon and my local datacenter?

    Read the article

  • Worklight console app, update

    - by jarkko
    We're using Worklight 6.1.0.0 / WebSphere 8.0.0.2 (ND/aix). This seemed pretty close to my question too, but for version 6.0. I've successfully done uninstall/install to our worklight console war package. However, there is some extra work on re-deploying adapters and such. I was looking for a way to just update the console. Among the ant tasks there is a target 'minimal-update', which sounds like what I'm looking for (is it?). However when all other pieces fell into place, I have an error for mapping the datasources: ADMA0007E: A validation error occurred in task Mapping resource references to resources. The Java Naming and Directory Interface (JNDI) name is not specified for resource reference jdbc/WorklightDS in module Worklight with EJB name . Contents of the 'minimal-update' task is pretty much the same as for 'install'. I tried that as update from websphere admin console (but i should use the ant task - right?), that gave me a wizard screen to map jdbc/WorklightDS from package to jdbc/WorklightDS on server. This left me wondering how could I tell this using the ant task.

    Read the article

  • java.lang.IllegalArgumentException: View not attached to window manager

    - by alex2k8
    I have an activity that starts an AsyncTask and shows a progress dialog for the duration of the operation. The activity is declared NOT to be recreated by rotation or keyboard slide: <activity android:name=".MyActivity" android:label="@string/app_name" android:configChanges="keyboardHidden|orientation" > <intent-filter> </intent-filter> </activity> Once the task completes, I dismiss the dialog, but on some phones (framework: 1.5, 1.6) the following error is thrown: java.lang.IllegalArgumentException: View not attached to window manager at android.view.WindowManagerImpl.findViewLocked(WindowManagerImpl.java:356) at android.view.WindowManagerImpl.removeView(WindowManagerImpl.java:201) at android.view.Window$LocalWindowManager.removeView(Window.java:400) at android.app.Dialog.dismissDialog(Dialog.java:268) at android.app.Dialog.access$000(Dialog.java:69) at android.app.Dialog$1.run(Dialog.java:103) at android.app.Dialog.dismiss(Dialog.java:252) at xxx.onPostExecute(xxx$1.java:xxx) My code is: final Dialog dialog = new AlertDialog.Builder(context) .setTitle("Processing...") .setCancelable(true) .create(); final AsyncTask<MyParams, Object, MyResult> task = new AsyncTask<MyParams, Object, MyResult>() { @Override protected MyResult doInBackground(MyParams... params) { // Long operation goes here } @Override protected void onPostExecute(MyResult result) { dialog.dismiss(); onCompletion(result); } }; task.execute(...); dialog.setOnCancelListener(new OnCancelListener() { @Override public void onCancel(DialogInterface arg0) { task.cancel(false); } }); dialog.show(); From what I have read (http://bend-ing.blogspot.com/2008/11/properly-handle-progress-dialog-in.html) and seen in the Android sources, it looks like the only possible situation in which to get that exception is when the activity has been destroyed. But as I have mentioned, I forbid activity recreation for those basic events. So any suggestions are very much appreciated.
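    A common defensive tweak, regardless of the configChanges declaration, is to check the activity and dialog state before dismissing. A sketch of the onPostExecute change, assuming the anonymous AsyncTask lives inside the activity as in the code above:

        @Override
        protected void onPostExecute(MyResult result) {
            // Only dismiss if the activity hasn't finished and the dialog is still
            // attached; avoids "View not attached to window manager" when the
            // activity goes away while the task is finishing.
            if (!isFinishing() && dialog.isShowing()) {
                dialog.dismiss();
            }
            onCompletion(result);
        }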

    Read the article

  • Threadpool design question

    - by ZeroVector
    I have a design question. I want some feedback on whether a ThreadPool is appropriate for the client program I am writing. I have a client running as a service, processing database records. Each of these records contains connection information for external FTP sites [basically it is a queue of files to transfer]. A lot of them are to the same host, just moving different files. Therefore, I am grouping them together by host. I want to be able to create a new thread per host. I really don't care when the transfers finish; they just need to do all the work (or try to) they were assigned, and then terminate once they are finished, cleaning up all resources they used in the process. I anticipate no more than 10-25 connections being established. Once the transfer queue is empty, the program will simply wait until there are records in the queue again. Is the ThreadPool a good candidate for this or should I use a different approach? Edit: For the most part, this is the only significant custom application running on the server.
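    A minimal sketch of the per-host grouping on the ThreadPool; the record type and the TransferFilesForHost method are illustrative placeholders for the FTP work described above.

        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;

        class TransferRecord
        {
            public string Host { get; set; }
            public string FileName { get; set; }
        }

        class TransferScheduler
        {
            // One work item per FTP host; each item processes that host's files
            // and releases its connection when it is done.
            public void Dispatch(IEnumerable<TransferRecord> pending)
            {
                foreach (var hostGroup in pending.GroupBy(r => r.Host))
                {
                    string host = hostGroup.Key;
                    List<TransferRecord> files = hostGroup.ToList();
                    ThreadPool.QueueUserWorkItem(_ => TransferFilesForHost(host, files));
                }
            }

            private void TransferFilesForHost(string host, List<TransferRecord> files)
            {
                // Placeholder: open one FTP connection to 'host', transfer each file,
                // then clean up. Exceptions should be caught here so a bad host
                // doesn't take down the service.
            }
        }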

    Read the article

  • Where are possible locations of queueing/buffering delays in Linux multicast?

    - by Matt
    We make heavy use of multicasting messaging across many Linux servers on a LAN. We are seeing a lot of delays. We basically send an enormous number of small packages. We are more concerned with latency than throughput. The machines are all modern, multi-core (at least four, generally eight, 16 if you count hyperthreading) machines, always with a load of 2.0 or less, usually with a load less than 1.0. The networking hardware is also under 50% capacity. The delays we see look like queueing delays: the packets will quickly start increasing in latency, until it looks like they jam up, then return back to normal. The messaging structure is basically this: in the "sending thread", pull messages from a queue, add a timestamp (using gettimeofday()), then call send(). The receiving program receives the message, timestamps the receive time, and pushes it in a queue. In a separate thread, the queue is processed, analyzing the difference between sending and receiving timestamps. (Note that our internal queues are not part of the problem, since the timestamps are added outside of our internal queuing.) We don't really know where to start looking for an answer to this problem. We're not familiar with Linux internals. Our suspicion is that the kernel is queuing or buffering the packets, either on the send side or the receive side (or both). But we don't know how to track this down and trace it. For what it's worth, we're using CentOS 4.x (RHEL kernel 2.6.9).

    Read the article

  • how to programatically compare permissions of login/user in sql server 2005

    - by titanium
    There's a login/user in SQL Server who is having a problem importing accounts on the production server. I don't know what method he is using to do this. According to the person doing the import, the import works fine on the development server, but when he does the same import in production it gives him errors. Below are the errors he is getting for each account: 2009-06-05 18:01:05.8254 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow has failed 2009-06-05 18:01:05.9035 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow exception: Exception caught while storing Data. [Microsoft][ODBC SQL Server Driver][SQL Server]'ACCOUNT1' is not a valid login or you do not have permission. Please note that 'ACCOUNT1' is not the real account name; I just changed it for security reasons. Using SQL Server Management Studio (SSMS), I viewed the permissions of the user/login who is performing the import on both the development and production servers for comparison, and I found no difference. My question is: is there a way to programmatically query the server- and database-level permissions of a particular login/user so I can compare them for any differences?
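    A hedged sketch using the built-in permission functions available from SQL Server 2005 on; run the same statements on both servers and diff the result sets. Impersonation requires IMPERSONATE permission on the login/user, 'ACCOUNT1' again stands in for the real name, and the database name is a placeholder.

        -- Server-level permissions as seen by the login being checked.
        EXECUTE AS LOGIN = 'ACCOUNT1';
        SELECT 'server' AS scope, * FROM fn_my_permissions(NULL, 'SERVER');
        REVERT;

        -- Database-level permissions for the matching user in a given database.
        USE SomeDatabase;  -- placeholder
        EXECUTE AS USER = 'ACCOUNT1';
        SELECT 'database' AS scope, * FROM fn_my_permissions(NULL, 'DATABASE');
        REVERT;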

    Read the article

  • How can I run an appear effect and a fade effect in scriptaculous? I need them to run in tandem so t

    - by LeeRM
    Hi, I've been fiddling with this for hours and hours and just can't get it right. First off, my sites are already using Prototype and Scriptaculous, and changing would take a long time. Basically I am after a slideshow effect similar to the jQuery Cycle plugin. I have written most of it but can't get over this hurdle: I need the user to be able to press a control button which will skip to whichever slide they have picked. My problem is that if a fade/appear effect is running, it causes an overlap. I am using queues and they are in their own scope. The problem, as I see it, is that the fade effect on one slide and the appear effect on the next slide are separate functions. This means that if the user clicks the control button to move to another slide while the animation is in between the fade and the appear, then the next cycle will slot itself into the queue between those two effects. The default is to append to the end of the existing queue, which should be fine. But if the appear hasn't been added when a new fade is instantiated, then the queue messes up. I can make it so nothing happens if an animation is running, but that's not the effect I am after. I want to be able to click a slide and have whatever is happening effectively stop and the next slide appear. This is an example of what I am after: http://www.zendesk.com/ I'm sorry if that doesn't make sense; it's a tough one to explain. Thanks, Lee
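    A hedged sketch of one approach: keep the fade and appear in the same named queue scope and, when the user clicks a control, cancel whatever is still pending in that scope before starting the new pair. Element ids, durations and the scope name are placeholders; it assumes Prototype and script.aculo.us are loaded, as in the question.

        // Cancel anything queued or running in the slideshow scope, then
        // chain the new fade -> appear pair in that same scope.
        function showSlide(currentId, nextId) {
            Effect.Queues.get('slideshow').each(function (effect) { effect.cancel(); });

            new Effect.Fade(currentId, {
                duration: 0.4,
                queue: { position: 'end', scope: 'slideshow' },
                afterFinish: function () {
                    new Effect.Appear(nextId, {
                        duration: 0.4,
                        queue: { position: 'end', scope: 'slideshow' }
                    });
                }
            });
        }

    Cancelling mid-fade leaves the outgoing slide wherever the effect stopped, so resetting its opacity or visibility before showing the next slide may also be needed.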

    Read the article
