Search Results

Search found 2563 results on 103 pages for 'batch'.

Page 92/103

  • SSIS - Bulk Update at Database Field Level

    - by Adam
    Hello. Here's our mission: we receive files from clients, each containing anywhere from 1 to 1,000,000 records. Records are loaded to a staging area and business-rule validation is applied. Valid records are then pumped into an OLTP database in batch fashion, with the following rules: if the record does not exist (we have a key, so this isn't an issue), create it; if the record exists, optionally update each database field. The decision is made based on one of 3 factors... I don't believe it's important what those factors are.

    Our main problem is finding an efficient method of optionally updating the data at the field level. This applies across ~12 different database tables, with anywhere from 10 to 150 fields in each table (the original DB design leaves much to be desired, but it is what it is). Our first attempt has been to introduce a table that mirrors the staging environment (one field in staging for each system field) and contains a masking flag whose value represents the 3 factors. We've then written an UPDATE similar to:

        UPDATE OLTPTable1
        SET Field1 = CASE
            WHEN Mask.Field1 = 0 THEN Staging.Field1
            WHEN Mask.Field1 = 1 THEN COALESCE( Staging.Field1 , OLTPTable1.Field1 )
            WHEN Mask.Field1 = 2 THEN COALESCE( OLTPTable1.Field1 , Staging.Field1 )
        ...

    As you can imagine, the performance is rather horrendous. Has anyone tackled a similar requirement? We're an MS shop using a Windows Service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.
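
    For the field-level masking rule described above, a minimal sketch of the per-field decision, written in Python purely to illustrate the logic; the field names and rows here are hypothetical:

        def choose_value(mask, staging_value, oltp_value):
            """Apply the per-field masking rule described above.

            mask 0: always take the staging value
            mask 1: take staging unless it is NULL, otherwise keep the existing value
            mask 2: keep the existing value unless it is NULL, otherwise take staging
            """
            if mask == 0:
                return staging_value
            if mask == 1:
                return staging_value if staging_value is not None else oltp_value
            if mask == 2:
                return oltp_value if oltp_value is not None else staging_value
            raise ValueError(f"unexpected mask value: {mask}")

        # Example: merge one staging row into one OLTP row, field by field.
        mask_row    = {"Field1": 1, "Field2": 0}
        staging_row = {"Field1": None, "Field2": "new"}
        oltp_row    = {"Field1": "old", "Field2": "old"}
        merged = {f: choose_value(mask_row[f], staging_row[f], oltp_row[f]) for f in mask_row}
        # merged == {"Field1": "old", "Field2": "new"}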

    Read the article

  • Script to add user to MediaWiki

    - by Marquis Wang
    I'm trying to write a script that will create a user in MediaWiki, so that I can run a batch job to import a series of users. I'm using mediawiki-1.12.0. I got this code from a forum, but it doesn't look like it works with 1.12 (it's for 1.13):

        $name = 'Username';   # Username (MUST start with a capital letter)
        $pass = 'password';   # Password (plaintext, will be hashed later down)
        $email = 'email';     # Email (automatically gets confirmed after the creation process)
        $path = "/path/to/mediawiki";

        putenv( "MW_INSTALL_PATH={$path}" );
        require_once( "{$path}/includes/WebStart.php" );

        $pass = User::crypt( $pass );
        $user = User::createNew( $name, array( 'password' => $pass, 'email' => $email ) );
        $user->confirmEmail();
        $user->saveSettings();

        $ssUpdate = new SiteStatsUpdate( 0, 0, 0, 0, 1 );
        $ssUpdate->doUpdate();

    Thanks!

    Read the article

  • Referencing object's identity before submitting changes in LINQ

    - by Axarydax
    Hi, is there a way of knowing the ID of the identity column of a record inserted via InsertOnSubmit beforehand, i.e. before calling the data source's SubmitChanges? Imagine I'm populating some kind of hierarchy in the database, but I don't want to submit changes on each recursive call for each child node (e.g. if I had a Directories table and a Files table and were recreating my filesystem structure in the database).

    I'd like to do it this way: create a Directory object, set its name and attributes, InsertOnSubmit it into the DataContext.Directories collection, and then reference Directory.ID in its child Files. Currently I need to insert the 'directory' into the database and let the database mapping fill in its ID column. But this creates a lot of transactions and database round trips, and I imagine that if I did the inserting in a batch, the performance would be better. What I'd like is to somehow use Directory.ID before committing changes, create all my File and Directory objects in advance, and then do one big submit that puts everything into the database. I'm also open to solving this via a stored procedure; I assume the performance would be even better if all operations were done directly in the database.

    Read the article

  • How can I access mainframe data with .Net applications and SQL Queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe in VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least.

    We are not getting rid of the mainframe any time soon, and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to:

    - Read data directly from the mainframe and produce reports based on it, possibly using SQL queries.
    - Read and write data from custom .NET applications.

    We are not looking for a new platform to interface with the mainframe like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools. We already know how to generate reports and write custom applications using SQL, .NET, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure invoked through JDBC. The entire process can take half an hour or more for a large data set.

    To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. However, in production I got the following exception:

        java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction

    This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts or updates temporary tables. All access to non-temporary tables is via selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case.

    So my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction? If the long-running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?

    Read the article

  • Using a php://memory wrapper causes errors...

    - by HorusKol
    I'm trying to extend the PHP mailer class from Worx by adding a method which allows me to add attachments using string data rather than a path to a file. I came up with something like this:

        public function addAttachmentString($string, $name='', $encoding = 'base64', $type = 'application/octet-stream')
        {
            $path = 'php://memory/' . md5(microtime());
            $file = fopen($path, 'w');
            fwrite($file, $string);
            fclose($file);
            $this->AddAttachment($path, $name, $encoding, $type);
        }

    However, all I get is a PHP warning:

        PHP Warning: fopen(): Invalid php:// URL specified

    There aren't any decent examples with the original documentation, but I've found a couple around the internet (including one here on SO), and my usage appears correct according to them. Has anyone had any success with using this? My alternative is to create a temporary file and clean up afterwards - but that would mean writing to disc, and this function will be used as part of a large batch process, so I want to avoid slow disc operations (old server) where possible. The attachment is only a short file, but it has different information for each person the script emails.

    Read the article

  • Nhibernate fires SQL commands

    - by Chris
    Hi all, when updating an entity A, NHibernate also sends an SQL update command for some other entity B. A and B are not related. Just before saving entity A, the parent of entity B is loaded via a SQLQuery. Then, when accessed, B is lazy loaded (as part of a collection). If I save entity A, an update statement for entity B is generated as well. How can it be that when saving an entity, another entity that was loaded before but is not related to the saved entity is updated as well? Can I somehow track where the update comes from? By the way, I am using a save event listener. Could it be that this is always triggered for loaded entities, even though they are not saved explicitly?

        public class EntitySaveEventListener : NHibernate.Event.Default.DefaultSaveEventListener
        {
            protected override object PerformSaveOrUpdate(SaveOrUpdateEvent e)
            {
                //auditing
                return base.PerformSaveOrUpdate(e);
            }
        }

    Update (sorry for not providing enough info): I tracked it down a bit. A select statement on an entity called Address is executed (it is lazy loaded by a parent). Then I create a new entity called Request. Right before saving this entity, a session flush is called which updates the Address, even though I did not call save or update on the Address. Address is a collection within Request.

        <class name="Request" table="Request">
          <bag name="addresses" access="field" cascade="all-delete-orphan" where="IsDeleted = 0">
            <key column="RequestId"/>
            <one-to-many class="Address"/>
          </bag>
          ...

        // address is fetched only
        NHibernate.SQL: 2010-02-17 11:47:21,306 [21] DEBUG NHibernate.SQL [(null)] - SELECT addresses0_.RequestId as ServiceP8_3_, ....

        // session flushed here

        // address is updated
        NHibernate.SQL: 2010-02-17 11:47:34,306 [21] DEBUG NHibernate.SQL [(null)] - Batch commands: command 0:UPDATE Address SET Street = @p0, .....

    Would the Address be updated automatically when it is manipulated somehow, even though it is not explicitly saved via its parent (cascade)?

    Read the article

  • Rolling back file moves, folder deletes and mysql queries

    - by Workoholic
    This has been bugging me all day and there is no end in sight. When the user of my PHP application adds a new update and something goes wrong, I need to be able to undo a complex batch of mixed commands. They can be MySQL update and insert queries, file deletes, and folder renames and creations.

    I can track the status of all insert commands and undo them if an error is thrown. But how do I do this with the update statements? Is there a smart way (some design pattern?) to keep track of such changes, both in the file structure and the database?

    My database tables are MyISAM. It would be easy to just convert everything to InnoDB so that I can use transactions; that way I would only have to deal with the file and folder operations. Unfortunately, I cannot assume that all clients have InnoDB support. It would also require me to convert many tables in my database to InnoDB, which I am hesitant to do.
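
    One common way to approach this, sketched below in Python purely for illustration (the paths and helpers are hypothetical): record an inverse action for every step as you perform it, then replay the inverses in reverse order if anything fails.

        import os
        import shutil

        class UndoJournal:
            """Collect inverse operations while a batch runs; replay them on failure."""

            def __init__(self):
                self._undo_steps = []

            def record(self, undo_fn, *args):
                # Push the inverse of the step that was just performed.
                self._undo_steps.append((undo_fn, args))

            def rollback(self):
                # Undo in reverse order, newest change first.
                while self._undo_steps:
                    undo_fn, args = self._undo_steps.pop()
                    undo_fn(*args)

        journal = UndoJournal()
        try:
            # Example step: renaming a folder records the reverse rename.
            os.rename("uploads/new", "uploads/current")
            journal.record(os.rename, "uploads/current", "uploads/new")

            # Example step: instead of deleting a file outright, move it aside
            # so the "delete" can be undone by moving it back.
            shutil.move("uploads/old.txt", "uploads/.trash/old.txt")
            journal.record(shutil.move, "uploads/.trash/old.txt", "uploads/old.txt")

            # For UPDATE queries, the equivalent is to SELECT the affected rows first
            # and record an UPDATE that restores the old values.
        except Exception:
            journal.rollback()
            raise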

    Read the article

  • Sybase stored procedure - how do I create an index on a #table?

    - by DVK
    I have a stored procedure which creates and works with a temporary #table. Some of the queries would be tremendously optimized if that temporary #table had an index created on it. However, creating an index within the stored procedure fails:

        create procedure test1 as
            SELECT f1, f2, f3 INTO #table1 FROM main_table WHERE 1 = 2
            -- insert rows into #table1
            create index my_idx on #table1 (f1)
            SELECT f1, f2, f3 FROM #table1 (index my_idx) WHERE f1 = 11 -- "QUERY X"

    When I call the above, the query plan for "QUERY X" shows a table scan. If I simply run the code above outside the stored procedure, the messages show the following warning:

        Index 'my_idx' specified as optimizer hint in the FROM clause of table '#table1' does not exist. Optimizer will choose another index instead.

    This can be resolved when running ad hoc (outside the stored procedure) by splitting the code above into two batches by adding "go" after the index creation:

        create index my_idx on #table1 (f1)
        go

    Now the "QUERY X" query plan shows the use of index "my_idx".

    QUESTION: How do I mimic running the "create index" in a separate batch when it's inside the stored procedure? I can't insert a "go" there like I do with the ad hoc copy above.

    P.S. If it matters, this is on Sybase 12.

    Read the article

  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a File Upload page or a similar mechanism), such as images, pdf files, word documents, etc. In the past, we had users doing this on our live server, and as a result our test and staging servers had to be manually synchronized. Going forward, we will have them upload content to the staging server, and we would like some software to automatically copy the files off to the test and live servers EITHER on a scheduled basis OR as the files get uploaded.

    I was planning on writing my own component and either setting it up as a scheduled task or using a FileSystemWatcher, but it occurred to me that this has probably already been done, and I might be better off with some sort of synchronization tool that already exists. On our web site, there are a limited number of folders that we want to keep synchronized. For these folders it is all or nothing - we want to make sure the folders are EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we also would like the software to log changes. (This rules out simple BATCH files.)

    So I'm curious: if you have a similar environment, how did you solve the challenge of keeping everything synchronized? Are you aware of a tool that is reliable and will meet our needs? If not, do you have a recommendation for something that will come close, or better yet, an open source solution where we can get the code and modify it as needed (preferably .NET)?

    Added: Also, I DID google this first, but there are so many options. I am interested mostly in knowing what actually works well vs. what they SAY works, which is why I'm asking here.
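
    As a rough illustration of the scheduled mirror-and-log approach described above, a minimal Python sketch (the folder paths are hypothetical; a packaged tool or a .NET FileSystemWatcher component would replace this in practice):

        import filecmp
        import logging
        import shutil
        from pathlib import Path

        logging.basicConfig(filename="sync.log", level=logging.INFO,
                            format="%(asctime)s %(message)s")

        def mirror(source: Path, target: Path) -> None:
            """Make target an exact copy of source, logging every change."""
            target.mkdir(parents=True, exist_ok=True)

            # Copy new or changed files from source to target.
            for src_file in source.rglob("*"):
                if src_file.is_dir():
                    continue
                dst_file = target / src_file.relative_to(source)
                if not dst_file.exists() or not filecmp.cmp(src_file, dst_file, shallow=False):
                    dst_file.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(src_file, dst_file)
                    logging.info("copied %s -> %s", src_file, dst_file)

            # Remove files that no longer exist in source ("exact duplicate" rule).
            for dst_file in target.rglob("*"):
                if dst_file.is_file() and not (source / dst_file.relative_to(target)).exists():
                    dst_file.unlink()
                    logging.info("removed %s", dst_file)

        # Hypothetical layout: staging is the source of truth, test and live are mirrors.
        for env in ("test", "live"):
            mirror(Path("/uploads/staging"), Path(f"/uploads/{env}"))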

    Read the article

  • What is wrong with this recursive Windows CMD script? It won't do Ackermann properly

    - by boost
    I've got this code that I'm trying to get to calculate the Ackermann function so that I can post it up on RosettaCode. It almost works. I thought maybe there'd be a few batch file wizards on StackOverflow.

        ::echo off
        set depth=0
        :ack
        if %1==0 goto m0
        if %2==0 goto n0
        :else
        set /a n=%2-1
        set /a depth+=1
        call :ack %1 %n%
        set t=%errorlevel%
        set /a depth-=1
        set /a m=%1-1
        set /a depth+=1
        call :ack %m% %t%
        set t=%errorlevel%
        set /a depth-=1
        if %depth%==0 ( exit %t% ) else ( exit /b %t% )
        :m0
        set/a n=%2+1
        if %depth%==0 ( exit %n% ) else ( exit /b %n% )
        :n0
        set /a m=%1-1
        set /a depth+=1
        call :ack %m% %2
        set t=%errorlevel%
        set /a depth-=1
        if %depth%==0 ( exit %t% ) else ( exit /b %t% )

    I use this script to test it:

        @echo off
        cmd/c ackermann.cmd %1 %2
        echo Ackermann of %1 %2 is %errorlevel%

    A sample output, for Test 1 1, gives:

        >test 1 1
        >set depth=0
        >if 1 == 0 goto m0
        >if 1 == 0 goto n0
        >set /a n=1-1
        >set /a depth+=1
        >call :ack 1 0
        >if 1 == 0 goto m0
        >if 0 == 0 goto n0
        >set /a m=1-1
        >set /a depth+=1
        >call :ack 0 0
        >if 0 == 0 goto m0
        >set/a n=0+1
        >if 2 == 0 (exit 1 ) else (exit /b 1 )
        >set t=1
        >set /a depth-=1
        >if 1 == 0 (exit 1 ) else (exit /b 1 )
        >set t=1
        >set /a depth-=1
        >set /a m=1-1
        >set /a depth+=1
        >call :ack 0 1
        >if 0 == 0 goto m0
        >set/a n=1+1
        >if 1 == 0 (exit 2 ) else (exit /b 2 )
        >set t=2
        >set /a depth-=1
        >if 0 == 0 (exit 2 ) else (exit /b 2 )
        Ackermann of 1 1 is 2
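
    For reference, a straightforward Python version of the standard Ackermann recurrence the batch script is aiming for, which makes the expected results easy to check - ack(1, 1) should be 3, not 2:

        import sys

        # The recursion gets deep quickly, so raise the limit for this sketch.
        sys.setrecursionlimit(100000)

        def ack(m: int, n: int) -> int:
            """Standard Ackermann recurrence."""
            if m == 0:
                return n + 1
            if n == 0:
                return ack(m - 1, 1)
            return ack(m - 1, ack(m, n - 1))

        print(ack(1, 1))  # 3
        print(ack(2, 3))  # 9
        print(ack(3, 3))  # 61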

    Read the article

  • SSIS Transaction with Sql Transaction

    - by Mike
    I started with a package to make sure transactions are working correctly. The package-level transaction is set to Required. I have two Execute SQL Tasks: one deletes rows from a table and one does 1/0 to throw an error. Both tasks are set to the Supported transaction level and the Serializable IsolationLevel. That works.

    Now when I replace my two SQL tasks with two separate procedure calls, the first one, ChargeInterest, runs successfully but the second one, PaymentProcess, always fails, saying:

        [Execute SQL Task] Error: Executing the query "Exec [proc_xx_NotesReceivable_PaymentProcess] ..." failed with the following error: "Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

    PaymentProcess is the second stored procedure. Both procedures have their own BEGIN, COMMIT and ROLLBACK inside the SP. I believe the transactions are being handled successfully in ChargeInterest because I can run the following without issues, or the dreaded "you started with 0 and now have 1 transaction" error:

        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300
        --OR
        GO
        BEGIN TRAN
        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300
        ROLLBACK TRAN

    Now, I have noticed that DTC does get kicked off in both instances. Why, I am not sure, because it is using the same connection. In the live example I can see the transaction get started, but it disappears if I put a breakpoint on the PreExecute event of the second stored procedure. What is the correct way to mingle SP transactions with SSIS transactions?

    Read the article

  • Simplest distributed persistent key/value store that supports primary key range queries

    - by StaxMan
    I am looking for a properly distributed (i.e. not just sharded) and persistent (not bounded by available memory on a single node, or a cluster of nodes) key/value ("NoSQL") store that supports range queries by primary key. So far the closest such system is Cassandra, which does the above. However, it adds support for other features that are not essential for me. So while I like it (and will consider using it, of course), I am trying to figure out if there might be other mature projects that implement what I need.

    Specifically, the only aspect of the value I need is to access it as a blob. For the key, however, I need range queries (as in, access values in order, limited by start and/or end values). While values can have structure, there is no need to use that structure for anything on the server side (client-side data binding, flexible value/content types, etc. are fine). As an added bonus, Cassandra-style storage (journaled, all sequential writes) seems quite optimal for my use case.

    To help filter out answers, I have investigated some alternatives in the general domain, like Voldemort (key/value, but no ordering) and CouchDB (just sharded, more batch-oriented); and I am aware of systems that are not quite distributed while otherwise qualifying (BDB variants, Tokyo Cabinet itself (not sure if Tyrant might qualify), Redis (in-memory store only)).
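
    To pin down the one operation being asked for - an ordered, bounded scan over primary keys - here is a tiny in-memory Python sketch (obviously neither distributed nor persistent; the keys and values are hypothetical):

        import bisect

        class OrderedKV:
            """In-memory stand-in for a store supporting primary-key range queries."""

            def __init__(self):
                self._keys = []    # kept sorted
                self._values = {}

            def put(self, key, value):
                if key not in self._values:
                    bisect.insort(self._keys, key)
                self._values[key] = value

            def range(self, start=None, end=None):
                # Yield (key, value) pairs in key order, bounded by start and/or end.
                lo = 0 if start is None else bisect.bisect_left(self._keys, start)
                hi = len(self._keys) if end is None else bisect.bisect_right(self._keys, end)
                for key in self._keys[lo:hi]:
                    yield key, self._values[key]

        store = OrderedKV()
        for k in ("user:03", "user:01", "user:02"):
            store.put(k, b"blob")
        print(list(store.range("user:01", "user:02")))  # ordered, bounded scan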

    Read the article

  • Script to install and compile Python, Django, Virtualenv, Mercurial, Git, LessCSS, etc... on Dreamho

    - by tmslnz
    The Story: After cleaning up my Dreamhost shared server's home folder from all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python. All the tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use.

    The Script: I am hosting the script at http://bitbucket.org/tmslnz/python-dreamhost-batch/src/

    The TODOs: So far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc... setup without even needing to log out and back in. I thought this might be of use for others too, but there are a few things that I think it's missing, and I am not quite sure how to go about them, what the best way to do them is, or if they just don't make any sense at all:

    - Check for errors and break
    - Check for minor version bumps of the packages and give warnings
    - Check for known dependencies
    - Use arguments to install only some of the packages instead of commenting out lines
    - Organise the code in a manner that's easy to update
    - Optionally make the installers and compiling silent, with error logging to file
    - Failproof .bashrc modification to prevent breaking ssh logins and having to log back in via FTP to fix it

    EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (see my answer to Ry4an's comment below)

    The Gist: I am no UNIX or Bash or compiler expert, and this has been built iteratively, by trial and error. It is somehow going towards apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround; particularly so with some community work involved.

    Read the article

  • How to generate XML from an Excel VBA macro?

    - by SuperNES
    So, I've got a bunch of content that was delivered to us in the form of Excel spreadsheets. I need to take that content and push it into another system. The other system takes its input from an XML file. I could do all of this by hand (and trust me, management has no problem making me do that!), but I'm hoping there's an easy way to write an Excel macro that would generate the XML I need instead. This seems like a better solution to me, as this is a job that will need to be repeated regularly (we'll be getting a LOT of content in Excel sheets) and it just makes sense to have a batch tool that does it for us.

    However, I've never experimented with generating XML from Excel spreadsheets before. I have a little VBA knowledge but I'm a newbie to XML. I guess my problem in Googling this is that I don't even know what to Google for. Can anyone give me a little direction to get me started? Does my idea sound like the right way to approach this problem, or am I overlooking something obvious? Thanks StackOverflow!
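
    Purely to illustrate the kind of row-to-element mapping involved, here is a sketch outside of VBA, in Python; openpyxl, the file names, and the column layout are all assumptions, and the same idea translates to a VBA macro writing out elements row by row:

        import xml.etree.ElementTree as ET
        from openpyxl import load_workbook

        wb = load_workbook("content.xlsx")
        ws = wb.active

        root = ET.Element("items")
        # Assume row 1 holds headers (valid as element names) and each later row is one record.
        headers = [cell.value for cell in ws[1]]
        for row in ws.iter_rows(min_row=2, values_only=True):
            item = ET.SubElement(root, "item")
            for name, value in zip(headers, row):
                ET.SubElement(item, str(name)).text = "" if value is None else str(value)

        ET.ElementTree(root).write("content.xml", encoding="utf-8", xml_declaration=True)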

    Read the article

  • Facebook offline access step-by-step

    - by Ben
    After searching literally for a day on fb and Google for an up-to-date and working way to do something seemingly simple: I am looking for a step-by-step explanation of how to get offline_access for a user of a Facebook app and then use this (session key) to retrieve friends & profile data offline, outside a browser. Preferably doing this with the Facebook Java API. Thanks. And yes, I did check the Facebook wiki.

    Update: Anyone? This: http://www.facebook.com/authorize.php?api_key=<api-key>&v=1.0&ext_perm=offline_access gives me offline_access, but how do I retrieve the session_key? Why can't Facebook just write simple documentation? I mean, there are like 600 people working there. The seemingly same question, http://stackoverflow.com/questions/617043/getting-offlineaccess-to-work-with-facebook, does not answer how to retrieve the session key.

    Edit: I am still stuck with this. I guess nobody has really tried such batch access yet...

    Read the article

  • Producing a Form from an Overlay from Reporting Services rdlc Reports

    - by Mike Wills
    I am not sure what you call it in other technologies; on the IBM i (or iSeries) we call them overlays. An overlay is an image of a form that is stored on the server; a program then generates the form with fields from the database so you can eliminate preprinted forms. I had a problem last year with the method I was trying at the time. It was a rush job, to be revisited at a later point. The work-around at the time was to export to PDF.

    So now it is "later" and once again it is a rush (imagine that). This is all done through a web-based interface. So how do you generate forms from something that was once a preprinted form? What method do you recommend? This is a legal form and must be filled out a certain way, and there can be many in a batch (up to 50 or so). I would prefer not to have them print one page at a time. Any ideas would be appreciated!

    Read the article

  • Hibernate 3.5.0 causes extreme performance problems

    - by user303396
    I've recently updated from Hibernate 3.3.1.GA to Hibernate 3.5.0 and I'm having a lot of performance issues. As a test, I added around 8000 entities to my DB (which in turn causes other entities to be saved). These entities are saved in batches of 20 so that the transactions aren't too large, for performance reasons.

    When using Hibernate 3.3.1.GA, all 8000 entities get saved in about 3 minutes. When using Hibernate 3.5.0, it starts out slower than with 3.3.1, and it gets slower and slower. At around 4,000 entities, it sometimes takes 5 minutes just to save a batch of 20. If I then go to a MySQL console and manually type in an insert statement from the MySQL general query log, half of them run perfectly in 0.00 seconds, and half of them take a long time (maybe 40 seconds) or time out with "ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction" from MySQL.

    Has something changed in Hibernate's transaction management in version 3.5.0 that I should be aware of? The ONLY thing I changed to experience these unusable performance issues was to replace the following Hibernate 3.3.1.GA jar files: com.springsource.org.hibernate-3.3.1.GA.jar, com.springsource.org.hibernate.annotations-3.4.0.GA.jar, com.springsource.org.hibernate.annotations.common-3.3.0.ga.jar, com.springsource.javassist-3.3.0.ga.jar with the new Hibernate 3.5.0 release hibernate3.jar and javassist-3.9.0.GA.jar. Thanks.

    Read the article

  • Possible to capture all events in a web browser?

    - by David
    I am working on a pet project and am at the research stage.

    Quick summary: I am trying to intercept all form submits, onclick events, and every single keydown. My library of choice is either jQuery, or jQuery + Prototype. I figure I can batch up the events into a queue/stack and send them back to the server in timed interval batches to keep performance relatively stable.

    Concerns: Form submits and changes would be relatively simple, something like:

        $("form :inputs").bind("change", function() { ... record event... });

    But how do I ensure I get precedence over the application's handlers? I have a habit of putting return false on a lot of my form handlers when there is a validation issue, and as I understand it, that effectively stops the event in its tracks.

    My project: For my smaller remote clients I will put their products onto a VPS or run it in my home data center. Currently I use a basic authentication system: given a username/password they see the website and then hopefully send me somewhat sane notes on what is broken or should be tweaked. As a better solution, I've written a simple proxy web server that does the above but allows me to have one DNS entry; then, depending on credentials, it makes an internal request, relaying headers and rewriting URLs as needed. Every single html/text or application/* request is compressed and shoved into a sqlite table so I can partially replay what they've done. Now I am shifting to the frontend and would like to capture clicks, keydowns, and submits on everything on the page.

    Read the article

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to create a replicated copy of the main databases, so that the reports can point to the replicated copy instead of locking out our main databases. I have set up the 3 databases as publications and then 3 subscribers, moving the transactions over to the subscribers - instantaneously, I hope!

    What seems to be happening is that when using the "Insert Tracer" function, replication from publisher to distributor takes < 2 seconds, but replicating to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for 2 reasons:

    1. The SQL statements used to query the database are obtaining locks which stop the transactions from updating the subscribers.
    2. The subscribers are just too busy for the replication to apply the changes.

    What troubles me more is that although the Replication Monitor / Insert Tracer show these statistics, if you use "View Subscription Details" and then click Start, it will sync within seconds. My goal is to have the data syncing (ideally) continuously, or every minute; perhaps I should reduce the batch size of the transactions? What am I doing wrong? [Note that the -Continuous flag is set!]

    Read the article

  • SqlCE DB occasionally freezes on one handheld, not another

    - by Michael
    I have two types of custom handhelds which are similar but slightly different, each running the same WinForm application and a WinCE database:

    Type 1: WinCE 4.2, 400 MHz, 93244 KB
    Type 2: WinCE 5.0, 520 MHz, 84208 KB

    Type 1 will happily proceed through a large batch DB operation (initiated by the app), but Type 2 will consistently begin c-r-a-w-l-i-n-g (for several to many cycles) at around the 200-cycle mark. At several points it will begin running normally and then crawl again. The app does several DB ops (inserts, updates and selects, no deletes).

    To simplify my situation, I've built a small test app which essentially does this:

        command_s.CommandText = "select dvr from vr where vid = 2211250";
        command_u.CommandText = "update pvr set LocationID=81 where Status='OK' and vri = 27861";
        while(going)
        {
            command_s.ExecuteScalar();
            command_u.ExecuteNonQuery();
        }

    and set it off running on the two units side by side. Sure enough, the slower (400 MHz) unit is outpacing the faster (520 MHz) unit (it's about 5000 cycles ahead right now) and I can see noticeable pauses on the 520 MHz unit. What is causing this?

    Read the article

  • How to inherit the current path when invoking Maven's exec-maven-plugin?

    - by wishihadabettername
    I have an exec-maven-plugin execution which calls an external command (in this case, svnversion). The command is in the path for the current user. However, when a separate shell is spawned by the plugin, the path is not initialized. I don't want to hardcode or define a variable for each external command (there would be too much to maintain, especially as there are both Windows and *nix users). My pom.xml contains the following:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>exec-maven-plugin</artifactId>
          <version>1.1</version>
          <executions>
            <execution>
              <id>svnversion-exec</id>
              <phase>process-resources</phase>
              <goals>
                <goal>exec</goal>
              </goals>
              <configuration>
                <executable>svnversion</executable>
                <arguments>
                  <argument><![CDATA[ >version.txt ]]></argument>
                </arguments>
              </configuration>
            </execution>
          </executions>
        </plugin>

    Currently I get the following output:

        [INFO] [exec:exec {execution: svnversion-exec}]
        'svnversion' is not recognized as an internal or external command, operable program or batch file.
        [ERROR] BUILD ERROR: Result of cmd.exe /X /C "svnversion >version.txt" execution is: '1'.

    Thank you!

    Read the article

  • How to find Tomcat's PID and kill it in python?

    - by 4herpsand7derpsago
    Normally, one shuts down Apache Tomcat by running its shutdown.sh script (or batch file). In some cases, such as when Tomcat's web container is hosting a web app that does some crazy things with multi-threading, running shutdown.sh gracefully shuts down some parts of Tomcat (as I can see more available memory returning to the system), but the Tomcat process keeps running.

    I'm trying to write a simple Python script that:

    1. Calls shutdown.sh
    2. Runs ps -aef | grep tomcat to find any process with Tomcat referenced
    3. If applicable, kills the process with kill -9 <PID>

    Here's what I've got so far (as a prototype - I'm brand new to Python BTW):

        #!/usr/bin/python

        # Imports
        import sys
        import subprocess

        # Load from imported module.
        if __init__ == "__main__":
            main()

        # Main entry point.
        def main():
            # Shutdown Tomcat
            shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh"
            subprocess.call([shutdownCmd], shell=true)

            # Check for PID
            grepCmd = "ps -aef | grep tomcat"
            grepResults = subprocess.call([grepCmd], shell=true)
            if(grepResult.length > 1):
                # Get PID and kill it.
                pid = ???
                killPidCmd = "kill -9 $pid"
                subprocess.call([killPidCmd], shell=true)

            # Exit.
            sys.exit()

    I'm struggling with the middle part - obtaining the grep results, checking whether their size is greater than 1 (since grep always returns a reference to itself, at least 1 result will always be returned, methinks), and then parsing the returned PID and passing it into the killPidCmd. Thanks in advance!
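
    A minimal sketch of that missing middle part, hedged: it assumes a typical ps -aef output where the PID is the second whitespace-separated column, and kills via os.kill rather than shelling out to grep (which also avoids matching grep itself):

        import os
        import signal
        import subprocess

        def kill_tomcat():
            output = subprocess.check_output(["ps", "-aef"]).decode("utf-8", "replace")
            for line in output.splitlines():
                # Keep only lines that mention Tomcat.
                if "tomcat" not in line.lower():
                    continue
                pid = int(line.split()[1])
                # Don't kill this script's own process.
                if pid == os.getpid():
                    continue
                os.kill(pid, signal.SIGKILL)  # equivalent of kill -9 <PID>

        if __name__ == "__main__":
            kill_tomcat()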

    Read the article

  • Advice needed: cold backup for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a cold backup server for a SQL Server Express instance running a single database?

    I have a SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead.

    Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, where I use batch files to take the logs, copy them across the network, and apply them to the second server at 5-minute intervals. Does anyone have any advice on whether this is technically achievable, or whether there is a better way to do what I am trying to do?

    Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think it is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL Cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.

    Read the article

  • Reload images in a UIWebView after they have been downloaded by a background thread

    - by dantastic
    I have an application that frequently checks in with a server and downloads a batch of articles to the iPhone. The articles are in HTML and just stored using Core Data. An article has 0-n images on the page. Downloading all associated images at the same time as the text would be too slow and take too much bandwidth, and users are not likely to open every article. If they open an article once, it is likely they will open it several times. So I want to download and store the images locally when they are needed.

    These articles are listed in a UITableView. When you tap an article, a UIWebView pops open and displays it. I have a function that checks whether I have already downloaded the images associated with the article. If I have, I just pop open the UIWebView - everything works fine. If I don't have the images downloaded, I go off and download them and store them in my Documents directory. Although this is working, the app hangs while the images are downloading. Not very tidy. I want the article to open in a snap and download the images while the article is open.

    So what I've done is check whether the images are downloaded; if they aren't, I go ahead and just "touch" the files I need and load the web view. The UIWebView opens up, but the referenced images contain no data. Then, in a background thread, I download the images and overwrite the "dummy" ones. This saves the images and everything, but it won't reload the images in my current UIWebView. I have to go back out of the article and back in again to see the images. Are there any ways around this - reloading just an image in a UIWebView?

    Read the article
