Search Results

Search found 2598 results on 104 pages for 'smack my batch up'.


  • How to make a Stored Procedure that takes in XML and uses that xml as an Update + call this stored p

    - by chobo2
    Hi, I am using MS SQL Server 2005 and I want to do a mass update. I am thinking that I might be able to do it by sending an XML document to a stored procedure. I've seen many examples of how to do it for an insert:

        CREATE PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData XML)
        AS
        INSERT INTO dbo.UserTable(CreateDate)
        SELECT @UpdatedProdData.value('(/ArrayOfUserTable/UserTable/CreateDate)[1]', 'DATETIME')

    But I am not sure what it would look like for an update. I am also unsure how to pass the XML in through ADO.NET. Do I pass it as a string through a parameter, or what? I know SqlDataAdapter has a batch update method, but I am using LINQ to SQL, so I'd rather keep using that. If this works, I could grab all the records with LINQ to SQL and have them as objects, manipulate the objects, use XML serialization, and finally use plain ADO.NET to send the XML to the server. This might be slower than the SqlDataAdapter, but I am willing to take that hit if I can keep using objects.
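
    For the update counterpart, one option is a sketch along these lines, assuming the same /ArrayOfUserTable/UserTable shape and that each element carries the row's key in a hypothetical ID node: nodes() shreds the XML into one row per element, which then joins back to the table.

        -- Hedged sketch: names other than dbo.UserTable/CreateDate are assumptions
        CREATE PROCEDURE [dbo].[spTEST_UpdateXML](@UpdatedProdData XML)
        AS
        UPDATE u
        SET    u.CreateDate = x.n.value('(CreateDate)[1]', 'DATETIME')
        FROM   dbo.UserTable u
        INNER JOIN @UpdatedProdData.nodes('/ArrayOfUserTable/UserTable') AS x(n)
               ON u.ID = x.n.value('(ID)[1]', 'INT')

    On the ADO.NET side, the serialized XML can be passed as an ordinary parameter, e.g. a SqlParameter of SqlDbType.Xml whose value is the XML string.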

    Read the article

  • Redeploying an ASP.NET site in IIS7 without files in use interfering

    - by fyjham
    Hey, we've got a process which redeploys ASP.NET websites; the code doing the deployment is itself an ASP.NET application. The current method, which has worked for quite a while, is simply to loop over all the files in one folder and copy them over the top of the files in the webroot. The problem that's arisen is that occasionally files end up being in use and hence can't be copied over. In the past this was intermittent to the point it didn't matter, but on some of our higher-traffic sites it now happens the majority of the time. I'm wondering if anyone has a workaround or alternative approach that I haven't thought of. Currently my ideas are: (1) simply retry each file until it works, though that's going to cause errors for a short time, which isn't really acceptable; or (2) deploy to a new folder and update IIS's webroot to the new folder, but I'm not sure how to do this short of running the application as an administrator and running batch files, which is very untidy. Does anyone know the best way to do this, or whether it's possible to do #2 without running the publishing application as a user who has admin access (willing to grant it special privileges, but I'd prefer to stop short of administrator)?
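
    For option 2, a minimal sketch using the IIS7 managed API (Microsoft.Web.Administration) rather than batch files; the site name and folder below are made up, and the calling account still needs rights to write IIS configuration, so some privilege delegation is required even if it falls short of full administrator:

        // Repoint the site's root virtual directory at a freshly deployed folder.
        using Microsoft.Web.Administration;

        class Redeploy
        {
            static void Main()
            {
                using (ServerManager manager = new ServerManager())
                {
                    Site site = manager.Sites["MySite"];
                    VirtualDirectory root = site.Applications["/"].VirtualDirectories["/"];
                    root.PhysicalPath = @"C:\deploys\MySite_v2";
                    manager.CommitChanges(); // IIS begins serving from the new folder
                }
            }
        }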

    Read the article

  • Calling NSFetchedResultsController & CoreData experts

    - by JK
    I am having a few nagging issues with NSFetchedResultsController and Core Data, and would be very grateful for help on any of them.

    Issue 1 - Updates: I update my store on a background thread, which results in certain rows being deleted, inserted or updated. The changes are merged into the context on the main thread using the mergeChangesFromContextDidSaveNotification: method. Inserts and deletes are handled properly, but updates are not (e.g. the cell label is not updated with the change), although I have confirmed that the updates come through the contextDidSaveNotification exactly like the inserts and deletes. My current workaround is to temporarily change the staleness interval of the context to 0, but this does not seem like the ideal solution.

    Issue 2 - Deleting objects: My fetch batch size is 20. If an object in the first 20 rows is deleted by the background thread, everything works fine. But if the object is after the first 20 rows and the table is scrolled down, a "CoreData could not fulfill a fault" error is raised. I have tried resaving the context and re-performing the frc fetch, all to no avail. Note: in this scenario the frc delegate method didChangeObject:... is not called for the delete; I assume this is because the object in question had not been faulted in at that time (as it was outside the initial fetch range). But for some reason the context still thinks the object is around, although it has been deleted from the store.

    Issue 3 - Deleting sections: When the deletion of a row leads to the deletion of a section, I have gotten the "invalid number of rows in section" error. I have worked around this by removing the reloadSection line from the NSFetchedResultsChangeMove: case and replacing it with [tableView insertRowsAtIndexPaths:...]. This seems to work, but once again I am not sure it is the best solution.

    Any help would be greatly appreciated. Thank you!
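
    For Issue 1, a commonly cited workaround, sketched here untested, is to fault the updated objects into the main context before merging, so the merge refreshes them instead of leaving stale snapshots behind (self.mainContext is a placeholder for the main-thread context, and this must run on the main thread):

        // Fault in each updated object, then merge the save notification.
        - (void)backgroundContextDidSave:(NSNotification *)notification
        {
            NSSet *updated = [[notification userInfo] objectForKey:NSUpdatedObjectsKey];
            for (NSManagedObject *object in updated) {
                [[self.mainContext objectWithID:[object objectID]]
                    willAccessValueForKey:nil]; // forces the fault to fire
            }
            [self.mainContext mergeChangesFromContextDidSaveNotification:notification];
        }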

    Read the article

  • C++ STL Map vs Vector speed

    - by sub
    In the interpreter for my experimental programming language I have a symbol table. Each symbol consists of a name and a value (the value can be e.g. of type string, int, function, etc.). At first I represented the table with a vector and iterated through the symbols, checking whether the given symbol name fitted. Then I thought that using a map, in my case map<string,symbol>, would be better than iterating through the vector all the time, but: it's a bit hard to explain this part, but I'll try. When a variable is retrieved for the first time in a program in my language, its position in the symbol table has to be found (using a vector now). If I iterated through the vector every time the line gets executed (think of a loop), it would be terribly slow (as it currently is, nearly as slow as Microsoft's batch). So I could use a map to retrieve the variable: SymbolTable[ myVar.Name ]. But consider the following: if the variable, still using a vector, is found the first time, I can store its exact integer position in the vector with it. That means the next time it is needed, my interpreter knows that it has been "cached" and doesn't search the symbol table for it, but does something like SymbolTable.at( myVar.CachedPosition ). Now my (rather hard?) question: should I use a vector for the symbol table together with caching the position of the variable in the vector? Should I rather use a map? Why? How fast is the [] operator? Should I use something completely different?
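
    For reference, a small sketch of the cached-position idea (all names here are made up): the first lookup is a linear scan, every later one a constant-time index. map's [] and find() are O(log n) on every call, so the cached vector wins once a variable is executed more than a handful of times:

        #include <map>
        #include <string>
        #include <vector>

        struct Symbol { std::string name; int value; };

        struct VariableRef {
            std::string name;
            int cachedPosition;  // -1 means "not resolved yet"
            VariableRef(const std::string &n) : name(n), cachedPosition(-1) {}
        };

        int lookup(std::vector<Symbol> &table, VariableRef &ref)
        {
            if (ref.cachedPosition < 0) {                      // first use: O(n) scan
                for (std::size_t i = 0; i < table.size(); ++i) {
                    if (table[i].name == ref.name) {
                        ref.cachedPosition = static_cast<int>(i);
                        break;
                    }
                }
            }
            // an unresolved name leaves -1, so at() throws std::out_of_range
            return table.at(ref.cachedPosition).value;         // later uses: O(1)
        }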

    Read the article

  • XML Doc to JSP to TIFF

    - by SPD
    We have around 100 Word templates. Every time a user gets a business request, he or she goes into a shared folder, selects the template they want, enters the information and saves it as a TIFF; these TIFFs are later processed by some batch program. I am trying to automate this process, so I defined an XML document that holds the template information, like:

        <Template id="1">
          <Section id="1">
            <fieldName id="1">Date</fieldName>
            <fieldValue></fieldValue>
            <fieldType></fieldType>
            <fieldProperty>textField</fieldProperty>
          </Section>
          <Section id="2">
            <fieldName id="2">Claim#</fieldName>
            <fieldValue></fieldValue>
            <fieldType></fieldType>
            <fieldProperty>textField</fieldProperty>
          </Section>
        </Template>

    Based on the template values I generate the JSP on the fly. Now I would like to generate a TIFF file out of it in a specified format. I am not sure how to handle this requirement. *Edited the original question.

    Read the article

  • Obtaining command line arguments in a Qt application

    - by morpheous
    The following snippet is from a little app I wrote using the Qt framework. The idea is that the app can be run in batch mode (i.e. called by a script) or interactively. It is important, therefore, that I am able to parse command line arguments in order to know which mode to run in, etc.

    [Edit] I am debugging using Qt Creator 1.3.1 on Ubuntu Karmic. The arguments are passed in the normal way (i.e. by adding them via the 'Project' settings in the Qt Creator IDE). When I run the app, it appears that the arguments are not being passed to the application. The code below is a snippet of my main() function:

        int main(int argc, char *argv[])
        {
            //Q_INIT_RESOURCE(application);
            try {
                QApplication the_app(argc, argv);
                // trying to get the arguments into a list
                QStringList cmdline_args = QCoreApplication::arguments();
                // Code continues ...
            }
            catch (const MyCustomException &e) {
                return 1;
            }
            return 0;
        }

    [Update] I have identified the problem: for some reason, although argc is correct, the elements of argv are empty strings. I added this little code snippet to print out the argv items, and was horrified to see that they were all empty:

        for (int i = 0; i < argc; i++) {
            std::string s(argv[i]); // required so I can see the damn variable in the debugger
            std::cout << s << std::endl;
        }

    Does anyone know what on earth is going on (or have a hammer)?

    Read the article

  • DefaultStyledDocument.styleChanged(Style style) may not run in a timely manner?

    - by Paul Reiners
    I'm experiencing an intermittent problem with a class that extends javax.swing.text.DefaultStyledDocument. This document is being sent to a printer. Most of the time the formatting of the document looks correct, but once in a while it doesn't; it looks like some of the changes to the formatting have not been applied. I took a look at the DefaultStyledDocument.styleChanged(Style style) code:

        /**
         * Called when any of this document's styles have changed.
         * Subclasses may wish to be intelligent about what gets damaged.
         *
         * @param style The Style that has changed.
         */
        protected void styleChanged(Style style) {
            // Only propagate change updated if have content
            if (getLength() != 0) {
                // lazily create a ChangeUpdateRunnable
                if (updateRunnable == null) {
                    updateRunnable = new ChangeUpdateRunnable();
                }
                // We may get a whole batch of these at once, so only
                // queue the runnable if it is not already pending
                synchronized (updateRunnable) {
                    if (!updateRunnable.isPending) {
                        SwingUtilities.invokeLater(updateRunnable);
                        updateRunnable.isPending = true;
                    }
                }
            }
        }

        /**
         * When run this creates a change event for the complete document
         * and fires it.
         */
        class ChangeUpdateRunnable implements Runnable {
            boolean isPending = false;

            public void run() {
                synchronized (this) {
                    isPending = false;
                }
                try {
                    writeLock();
                    DefaultDocumentEvent dde = new DefaultDocumentEvent(
                            0, getLength(), DocumentEvent.EventType.CHANGE);
                    dde.end();
                    fireChangedUpdate(dde);
                } finally {
                    writeUnlock();
                }
            }
        }

    Does the fact that SwingUtilities.invokeLater(updateRunnable) is called, rather than invokeAndWait(updateRunnable), mean that I can't count on my formatting changes appearing in the document before it is rendered? If that is the case, is there a way to ensure that I don't proceed with rendering until the updates have occurred?
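
    If the invokeLater call is indeed the culprit, one defensive sketch (untested; printDocument and document are stand-ins for whatever starts the print job) is to queue the print itself on the event dispatch thread, behind any pending change runnables:

        // Anything queued earlier with invokeLater has already run by the time
        // this Runnable executes, so the changed-update events have been fired.
        // Call from a non-EDT thread; on the EDT itself use invokeLater instead.
        try {
            SwingUtilities.invokeAndWait(new Runnable() {
                public void run() {
                    printDocument(document); // hypothetical print call
                }
            });
        } catch (Exception e) {
            // handle InterruptedException / InvocationTargetException
        }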

    Read the article

  • Browser timing out attempting to load images

    - by notJim
    I've got a page on a webapp that has about 13 images that are generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached, so they are only generated once, but the first time the user visits the page and the images all have to be generated, about half of the images don't load in the browser. Once the page has been requested once and the images are cached, they all load successfully.

    Doing some ad-hoc testing, if I load an individual image in the browser, it takes 450-700 ms to load with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even if the image cache is empty, I have the data and some of the application's startup tasks cached, so that after the first request none of that data needs to be fetched.

    My questions are: Why are the images failing to load? It seems like the browser just decides not to download the image after a certain point, rather than waiting for them all to finish loading. What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options?

    As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache. As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so that the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.

    Read the article

  • What are your Programming Fallacies/Myths?

    - by pms1969
    I recently started a new job, and as is typical of all jobs, if you've left, you get blamed for everything. Not long after I started, a change was required for an app (web based) that we maintain, and it was quickly pointed out that the actual code for this site had been lost a long time ago, and the only changes we could make were ones that required changes to mark-up [it was a pre-compiled site]. Being new, I needed a little help finding my way around the code, and enlisted the services of one of my colleagues. I made my changes, and then re-enlisted his help to deploy it. While prepping for the deployment (getting the app on the QA server) we discovered that there were actually two different, very similarly named folders in our source repository. It transpired that for the last year or so, mark-up changes had been made to the site directly, and these were the only differences from the code in the slightly incorrectly named folder in source control. So we did have all the code, and can now properly support the site.

    This put me in mind of a trick we played on a junior programmer once in a previous job, where we told him he couldn't/shouldn't do a certain thing in code as this would likely bring the server to its knees and cost the company thousands of pounds (a gag that lasted months :-). And another one from the first programming job I took on: the batch commission run was just going to crash once a month and there was nothing to be done about it, causing a call out, and call-out compensation for the on-call guy (a bug I fixed as soon as I became the on-call guy; 2am call outs don't work for me).

    So I was wondering... what other programming fallacies/myths are out there that are worth sharing?

    Read the article

  • Preventing a child process (HandbrakeCLI) from causing the parent script to exit

    - by Chris
    I have a batch conversion script to turn .mkvs of various dimensions into iPod/iPhone sized .mp4s, cropping/scaling to suit. Determining the original dimensions, required crop and output file is all working fine. However, on successful completion of the first conversion, HandbrakeCLI causes the parent script to exit. Why would this be? And how can I stop it? The code, as it currently stands:

        #!/bin/bash
        find . -name "*.mkv" | while read FILE
        do
            # What would the output file be?
            DST=../Touch/$(dirname "$FILE")
            MKV=$(basename "$FILE")
            MP4=${MKV%%.mkv}.mp4

            # If it already exists, don't overwrite it
            if [ -e "$DST/$MP4" ]
            then
                echo "NOT overwriting $DST/$MP4"
            else
                # Stuff to determine dimensions/cropping removed for brevity
                HandbrakeCLI --preset "iPhone & iPod Touch" --vb 900 --crop $crop -i "$FILE" -o "$DST/$MP4" > /dev/null 2>&1
                if [ $? != 0 ]
                then
                    echo "$FILE had problems" >> errors.log
                fi
            fi
        done

    I have additionally tried it with traps, but that didn't change the behaviour (although the last trap did fire):

        trap "echo Handbrake SIGINT-d" SIGINT
        trap "echo Handbrake SIGTERM-d" SIGTERM
        trap "echo Handbrake EXIT-d" EXIT
        trap "echo Handbrake 0-d" 0
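
    A hunch worth testing: HandbrakeCLI reads from standard input, and inside a while read loop it inherits the pipe that feeds the loop, so it can swallow the remaining file names; the loop then ends after the first conversion because there is nothing left to read. The usual fix is to point the child's stdin elsewhere:

        # Give HandbrakeCLI its own (empty) stdin so it cannot eat the file list
        HandbrakeCLI --preset "iPhone & iPod Touch" --vb 900 --crop $crop \
            -i "$FILE" -o "$DST/$MP4" < /dev/null > /dev/null 2>&1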

    Read the article

  • Stop writing blank line at the end of CSV file (using MATLAB)

    - by Grant M.
    Hello all... I'm using MATLAB to open a batch of CSV files containing column headers and data (using the importdata function), then I manipulate the data a bit and write the headers and data to new CSV files using the dlmwrite function. I'm using the '-append' and 'newline' attributes of dlmwrite to add each line of text/data on a new line. Each of my new CSV files has a blank line at the end, whereas this blank line was not there when I read in the data, and I'm not using 'newline' on my final call of dlmwrite. Does anyone know how I can keep from writing this blank line to the end of my CSV files? Thanks for your help, Grant

    EDITED 5/18/10 1:35PM CST: Added information about code and text file per request. You'll notice after performing the procedure below that there appears to be a carriage return at the end of the last line in the new text file. Consider a text file named 'textfile.txt' that looks like this:

        Column1, Column2, Column3, Column4, Column 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5

    Here's a sample of the code I am using:

        % import data
        importedData = importdata('textfile.txt');

        % manipulate data
        importedData.data(:,1) = 100;

        % store column headers into single comma-delimited
        % character array (for easy writing later)
        columnHeaders = importedData.textdata{1};
        for counter = 2:size(importedData.textdata,2)
            columnHeaders = horzcat(columnHeaders,',',importedData.textdata{counter});
        end

        % write column headers to new file
        dlmwrite('textfile_updated.txt',columnHeaders,'Delimiter','','newline','pc')

        % append all but the last row of data to the new file
        % (note: size(...,1) counts rows; the original used dimension 2,
        % which only worked because the sample happens to be square)
        for dataCounter = 1:(size(importedData.data,1)-1)
            dlmwrite('textfile_updated.txt',importedData.data(dataCounter,:),'Delimiter',',','newline','pc','-append')
        end

        % append last row of data to the new file,
        % not creating a new line at the end
        dlmwrite('textfile_updated.txt',importedData.data(end,:),'Delimiter',',','-append')
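
    If dlmwrite keeps terminating the last row regardless, a fallback sketch (untested) is to append that final row with fprintf, which writes exactly what the format string says and nothing more; the format here assumes five numeric columns:

        % append the final row manually, with no line terminator at all
        fid = fopen('textfile_updated.txt','a');
        fprintf(fid,'%g,%g,%g,%g,%g',importedData.data(end,:));
        fclose(fid);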

    Read the article

  • Scalably processing large amounts of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas and the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. This needs to occur for all records, ideally, every three hours. Each new user to the site will be adding between 50 and 500 records that need to be processed in such a fashion, so the number will not be steady.

    The code hasn't been written yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records of this size may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has any experience or tips on similar subjects? I've never worked at this magnitude before, and for all I know, this will be trivial to the server and not pose much of an issue.

    As long as ALL records are processed before the next three-hour period occurs, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering if I should process in batches every 5 minutes, 15 minutes, hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users)?
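
    One common shape for this, sketched below with entirely made-up table, column and function names: a cron entry fires the script every few minutes, each run claims one user's batch of stale records, processes it, and exits, so no single run monopolizes the server and each user's records stay together:

        <?php
        // Claim the next user whose records are due, process them, mark them done.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

        $userId = $pdo->query(
            "SELECT user_id FROM records
             WHERE processed_at < NOW() - INTERVAL 3 HOUR
             ORDER BY processed_at ASC LIMIT 1"
        )->fetchColumn();

        if ($userId !== false) {
            $rows = $pdo->prepare("SELECT * FROM records WHERE user_id = ?");
            $rows->execute(array($userId));
            foreach ($rows->fetchAll(PDO::FETCH_ASSOC) as $row) {
                $result = applyFormulas($row);  // hypothetical formula step
                $pdo->prepare("INSERT INTO results (record_id, value) VALUES (?, ?)")
                    ->execute(array($row['id'], $result));
            }
            $pdo->prepare("UPDATE records SET processed_at = NOW() WHERE user_id = ?")
                ->execute(array($userId));
        }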

    Read the article

  • newbie hibernate first level cache confusion

    - by Bruce
    Hi all, I'm just getting to grips with Hibernate and am a little bit confused. I wanted to watch the operation of the first-level cache, which I understood to batch up queries until the end of the session. But if I create an object, Hibernate saves it immediately, so that when I later update it in the same transaction, it has to do an update too:

        Session session = factory.getCurrentSession();
        session.beginTransaction();

        Test1 test1 = new Test1();
        test1.setName("Test 1");
        test1.setValue(10);

        // Touch it
        session.save(test1);

        System.out.println("At checkpoint 1");

        test1.setValue(20);

        session.getTransaction().commit();

    I see the SQL for the save, then 'At checkpoint 1', then the SQL for the update. Do I have something set up wrong, or am I misunderstanding Hibernate's first-level cache? Is there a good document on the first-level cache? I didn't find anything in the Hibernate docs, but I could easily have missed it. Thanks!
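
    One likely explanation, offered as a guess since it depends on the id mapping: save() must return the new identifier, so with an identity/auto-increment generator Hibernate has to issue the INSERT right away; only the later change is held in the session and flushed at commit:

        Test1 test1 = new Test1();
        test1.setValue(10);
        session.save(test1);      // INSERT now: the database generates the id
        test1.setValue(20);       // tracked in the first-level cache only
        session.getTransaction().commit(); // flush: one UPDATE for the change

    With a generator Hibernate can satisfy without the row existing (e.g. sequence or hilo), the INSERT itself can also be deferred to the flush.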

    Read the article

  • Database source control with Oracle

    - by borjab
    I have been looking for hours for a way to check a database into source control. My first idea was a program for calculating database diffs and asking all the developers to implement their changes as new diff scripts. Now I find that if I can dump a database into a file, I could check it in and use it as just another type of file. The main conditions are:

    - Works with Oracle 9iR2
    - Human readable, so we can use diff to see the differences (.dmp files don't seem readable)
    - All tables in a batch; we have more than 200 tables
    - Stores BOTH STRUCTURE AND DATA
    - Supports CLOB and RAW types
    - Stores procedures, packages and their bodies, functions, tables, views, indexes, constraints, sequences and synonyms
    - Can be turned into an executable script to rebuild the database on a clean machine
    - Not limited to really small databases (supports at least 200,000 rows)

    It is not easy. I have downloaded a lot of demos that fail in one way or another.

    EDIT: I wouldn't mind alternative approaches, provided they allow us to check a working system against our release DATABASE STRUCTURE AND OBJECTS + DATA in a batch mode. By the way, our project has been developed for years; some approaches can be easily implemented when you make a fresh start but seem hard at this point.

    EDIT: To understand the problem better, let's say that some users can sometimes make changes to the config data in the production environment, or developers might create a new field or alter a view without notice in the release branch. I need to be aware of these changes or it will be complicated to merge them into production.
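
    For the structure half, one building block worth evaluating (a sketch; the object-type list is abbreviated, and quirks like PACKAGE BODY needing the PACKAGE_BODY spelling are glossed over) is DBMS_METADATA, which emits human-readable DDL that diffs well:

        -- SQL*Plus: spool object DDL as diffable text for the current schema
        SET LONG 1000000
        SET PAGESIZE 0
        SET HEADING OFF
        SELECT dbms_metadata.get_ddl(object_type, object_name)
        FROM   user_objects
        WHERE  object_type IN ('TABLE','VIEW','INDEX','SEQUENCE',
                               'PROCEDURE','FUNCTION','PACKAGE','SYNONYM');

    The data half would still need a separate extract (e.g. spooled SELECTs formatted as INSERT statements), and the CLOB/RAW columns are the hard part of that.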

    Read the article

  • Hibernate saveOrUpdate fails when I execute it on an empty table

    - by Vladimir
    I'm trying to insert or update a db record with the following code:

        Category category = new Category();
        category.setName("catName");
        category.setId(1L);
        categoryDao.saveOrUpdate(category);

    When there is a category with id=1 already in the database, everything works. But if there is no record with id=1, I get the following exception:

        org.hibernate.StaleStateException: Batch update returned unexpected row count
        from update [0]; actual row count: 0; expected: 1

    Here is my Category class, with setters, getters and constructors omitted for clarity:

        @Entity
        public class Category {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            private String name;

            @ManyToOne
            private Category parent;

            @OneToMany(fetch = FetchType.LAZY, mappedBy = "parent")
            private List<Category> categories = new ArrayList<Category>();
        }

    In the console I see this Hibernate query:

        update Category set name=?, parent_id=? where id=?

    So it looks like Hibernate tries to update the record instead of inserting a new one. What am I doing wrong here?
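
    A guess at the cause: with @GeneratedValue, a non-null id is how saveOrUpdate decides the entity already exists, so setting id=1 by hand forces the UPDATE path, and the zero-row update surfaces as the StaleStateException. A sketch of a select-then-write alternative that works either way (shown against a raw session for brevity):

        // Look the row up first, so insert-vs-update no longer hinges on the id field
        Category category = (Category) session.get(Category.class, 1L);
        if (category == null) {
            category = new Category(); // leave id null: the generator assigns it
        }
        category.setName("catName");
        session.saveOrUpdate(category);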

    Read the article

  • SharePoint Lists.asmx's UpdateListItems() returns too much data

    - by Philipp Schmid
    Is there a way to prevent the UpdateListItems() web service call in SharePoint's Lists.asmx endpoint from returning all of the fields of the newly created or updated list item? In our case an event handler attached to our custom list adds some rather large field values which are returned to the client unnecessarily. Is there a way to tell it to only return the ID of the newly created (or updated) list item? For example, the web service currently returns something like this:

        <Results xmlns="http://schemas.microsoft.com/sharepoint/soap/">
          <Result ID="1,Update">
            <ErrorCode>0x00000000</ErrorCode>
            <z:row ows_ID="4" ows_Title="Title" ows_Modified="2003-06-19 20:31:21"
                   ows_Created="2003-06-18 10:15:58" ows_Author="3;#User1_Display_Name"
                   ows_Editor="7;#User2_Display_Name" ows_owshiddenversion="3"
                   ows_Attachments="-1" ows__ModerationStatus="0"
                   ows_LinkTitleNoMenu="Title" ows_LinkTitle="Title" ows_SelectTitle="4"
                   ows_Order="400.000000000000" ows_GUID="{4962F024-BBA5-4A0B-9EC1-641B731ABFED}"
                   ows_DateColumn="2003-09-04 00:00:00" ows_NumberColumn="791.00000000000000"
                   xmlns:z="#RowsetSchema" />
          </Result>
          ...
        </Results>

    whereas I am looking for a trimmed response only containing, for example, the ows_ID attribute:

        <Results xmlns="http://schemas.microsoft.com/sharepoint/soap/">
          <Result ID="1,Update">
            <ErrorCode>0x00000000</ErrorCode>
            <z:row ows_ID="4" />
          </Result>
          ...
        </Results>

    I have unsuccessfully looked for a resource that documents all of the valid attributes for both the <Batch> and <Method> tags in the updates XmlNode parameter of UpdateListItems(), in the hope that I will find a way to specify the fields to return. A solution for WSS 3.0 would be preferable over an SP 2010-only solution.

    Read the article

  • Packaging Java apps for the Windows/Linux desktop.

    - by alexmcchessers
    I am writing an application in Java for the desktop using the Eclipse SWT library for GUI rendering. I think SWT helps Java get over the biggest hurdle to acceptance on the desktop: namely, providing a Java application with a consistent, responsive interface that looks like that belonging to any other app on your desktop. However, I feel that packaging an application is still an issue.

    OS X natively provides an easy mechanism for wrapping Java apps in native application bundles, but producing an app for Windows/Linux that doesn't require the user to run an ugly batch file or click on a .jar is still a hassle. Possibly that's not such an issue on Linux, where the user is likely to be a little more tech-savvy, but on Windows I'd like to have a regular .exe for him/her to run.

    Has anyone had any experience with any of the .exe generation tools for Java that are out there? I've tried JSmooth but had various issues with it. Is there a better solution before I crack out Visual Studio and roll my own?

    Edit: I should perhaps mention that I am unable to spend a lot of money on a commercial solution.

    Read the article

  • Security strategies for storing password on disk

    - by Mike
    I am building a suite of batch jobs that require regular access to a database, running on a Solaris 10 machine. Because of (unchangeable) design constraints, we are required to use a certain program to connect to it. Said interface requires us to pass a plain-text password over a command line to connect to the database. This is a terrible security practice, but we are stuck with it.

    I am trying to make sure things are properly secured on our end. Since the processing is automated (i.e., we can't prompt for a password), and I can't store anything outside the disk, I need a strategy for storing our password securely. Here are some basic rules:

    - The system has multiple users.
    - We can assume that our permissions are properly enforced (i.e., if a file is chmod'd to 600, it won't be publicly readable).
    - I don't mind anyone with superuser access looking at our stored password.

    Here is what I've got so far:

    - Store the password in password.txt
    - $ chmod 600 password.txt
    - The process reads from password.txt when it's needed
    - The buffer is overwritten with zeros when it's no longer needed

    Although I'm sure there is a better way.
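
    A small sketch of the wrapper side of that plan (the connection tool and paths are placeholders). One caveat worth surfacing to whoever owns the design constraint: anything passed on a command line is visible to every local user via ps -ef on Solaris, so the protected file only secures the at-rest copy:

        #!/bin/sh
        umask 077                               # any temp file we create is 0600
        PASS=`cat /secure/batch/password.txt`   # file owned by the batch user, mode 600
        /opt/vendor/bin/dbconnect -u batchuser -p "$PASS"  # hypothetical vendor tool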

    Read the article

  • Python class structure ... prep() method?

    - by Adam Nelson
    We have a metaclass, a class, and a child class for an alert system:

        class AlertMeta(type):
            """
            Metaclass for all alerts

            Reads attrs and organizes AlertMessageType data
            """
            def __new__(cls, base, name, attrs):
                new_class = super(AlertMeta, cls).__new__(cls, base, name, attrs)
                # do stuff to new_class
                return new_class


        class BaseAlert(object):
            """
            BaseAlert objects should be instantiated in order to create new
            AlertItems. Alert objects have classmethods for dequeue (to batch
            AlertItems) and register (for associating a user with an AlertType
            and AlertMessageType).

            If the __init__ function receives 'dequeue=True' as a kwarg, then
            all other arguments will be ignored and the Alert will check for
            messages to send.
            """
            __metaclass__ = AlertMeta

            def __init__(self, **kwargs):
                dequeue = kwargs.pop('dequeue', None)
                if kwargs:
                    raise ValueError('Unexpected keyword arguments: %s' % kwargs)
                if dequeue:
                    self.dequeue()
                else:
                    # Do normal init stuff
                    pass

            def dequeue(self):
                """ Pop batched AlertItems """
                # Dequeue from a custom queue


        class CustomAlert(BaseAlert):
            def __init__(self, **kwargs):
                # prepare custom init data
                super(CustomAlert, self).__init__(**kwargs)

    (Note the super() call names CustomAlert, not BaseAlert as originally posted.) We would like to be able to make child classes of BaseAlert (like CustomAlert) that can run dequeue and can run their own __init__ code. We think there are three ways to do this:

    1. Add a prep() method that returns True in BaseAlert and is called by __init__. Child classes could define their own prep() methods (see the sketch after this list).
    2. Make dequeue() a classmethod; however, a lot of what dequeue() does requires non-class methods, so we'd have to make those classmethods as well.
    3. Create a new class for dealing with the queue. Would this class extend BaseAlert?

    Is there a standard way of handling this type of situation?
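
    A minimal sketch of option 1, leaving out the metaclass and queue details: BaseAlert.__init__ keeps control of the flow, and subclasses override only the hook:

        class BaseAlert(object):
            def __init__(self, **kwargs):
                if kwargs.pop('dequeue', None):
                    self.dequeue()
                else:
                    self.prep(**kwargs)

            def prep(self, **kwargs):
                """Hook for subclass init logic; the base version just stores kwargs."""
                self.data = kwargs

            def dequeue(self):
                pass  # pop batched AlertItems from the custom queue


        class CustomAlert(BaseAlert):
            def prep(self, **kwargs):
                self.custom = kwargs.pop('custom', None)  # subclass-specific setup
                super(CustomAlert, self).prep(**kwargs)   # then the shared part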

    Read the article

  • Multithreading for loop while maintaining order

    - by David
    I started messing around with multithreading for a CPU-intensive batch process I'm running. Essentially I'm trying to condense multiple single-page TIFFs into single PDF documents. This works fine with a foreach loop or standard iteration, but can be very slow for several-hundred-page documents. I tried the following, based on some examples I found, and it has significant performance improvements; however, it obliterates the page order: instead of 1,2,3,4 it will be 1,3,4,2,6,5, depending on which thread completes first. My question is how I would utilize this technique while maintaining the page order, and if I can, will it negate the performance benefit of the multithreading? Thank you in advance.

        PdfDocument doc = new PdfDocument();
        string mail = textBox1.Text;
        string[] split = mail.Split(new string[] { Environment.NewLine }, StringSplitOptions.None);
        int counter = split.Count();

        // Source must be array or IList.
        var source = Enumerable.Range(0, 100000).ToArray();

        // Partition the entire source array.
        var rangePartitioner = Partitioner.Create(0, counter);

        double[] results = new double[counter];

        // Loop over the partitions in parallel.
        Parallel.ForEach(rangePartitioner, (range, loopState) =>
        {
            // Loop over each range element without a delegate invocation.
            for (int i = range.Item1; i < range.Item2; i++)
            {
                string f_prime = split[i].Replace(" ", "");
                PdfPage page = doc.AddPage();
                XGraphics gfx = XGraphics.FromPdfPage(page);
                XImage image = XImage.FromFile(f_prime);
                double x = 0;
                gfx.DrawImage(image, x, 0);
            }
        });
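
    Order can be kept without giving up the parallelism by splitting the work into two phases: do the CPU-heavy image loading in parallel into a slot indexed by i, then add the pages serially in index order. A rough sketch using the same PDFsharp names as above (one assumption worth verifying is that XImage.FromFile is safe to call concurrently):

        // Phase 1: load images in parallel; storing by index preserves line order
        XImage[] images = new XImage[counter];
        Parallel.ForEach(rangePartitioner, (range, loopState) =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                string path = split[i].Replace(" ", "");
                images[i] = XImage.FromFile(path);
            }
        });

        // Phase 2: page creation is cheap, so do it serially, in order
        for (int i = 0; i < counter; i++)
        {
            PdfPage page = doc.AddPage();
            XGraphics gfx = XGraphics.FromPdfPage(page);
            gfx.DrawImage(images[i], 0, 0);
        }

    Since the expensive step (decoding the TIFFs) still runs on all cores, the serial phase should cost little of the speedup.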

    Read the article

  • How can you determine the file size in JavaScript?

    - by Daniel Lew
    I help moderate an online forum, and on this forum we restrict the size of signatures. At the moment we test this via a simple Greasemonkey script I wrote; we wrap all signatures in a <div>, the script looks for them, and then measures each div's height and width. All the script does right now is make sure the signature fits within a particular height/width. I would like to start automatically measuring the file size of the images inside a signature, so that the script can flag users who are including huge images. However, I can't seem to find a way to measure the size of images loaded on the page. I've searched and found a property specific to IE (element.fileSize), but I obviously can't use that in my Greasemonkey script. Is there a way to find out the file size of an image in Firefox via JavaScript?

    Edit: People are misinterpreting the problem. The forums themselves do not host images; we host the BBCode that people enter as their signature. So, for example, people enter this:

        This is my signature, check out my [url=http://google.com]awesome website[/url]!
        This image is cool! [img]http://image.gif[/img]

    I want to be able to check on these images via Greasemonkey. I could write a batch script to scan all of these instead, but I'm just wondering if there's a way to augment my current script.
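
    One workaround sketch, assuming the image hosts send a Content-Length header (chunked responses may not): Greasemonkey's GM_xmlhttpRequest is exempt from the same-origin policy, so the script can issue a HEAD request per image and read the size from the headers. The 100 KB threshold and red border here are placeholders:

        // Flag a signature image whose reported file size exceeds the limit.
        function checkImageSize(img) {
            GM_xmlhttpRequest({
                method: "HEAD",
                url: img.src,
                onload: function (response) {
                    var m = response.responseHeaders.match(/Content-Length:\s*(\d+)/i);
                    if (m && parseInt(m[1], 10) > 100 * 1024) {
                        img.style.border = "3px solid red";
                    }
                }
            });
        }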

    Read the article

  • updating a column in a table only if after the update it won't be negative and identifying all updat

    - by Azeem
    Hello all, I need some help with a SQL query. Here is what I need to do, and I'm lost on a few aspects as outlined below. I have four relevant tables:

    - Table A has the price per unit for all resources. I can look up the price using a resource id.
    - Table B has the funds available to a given user.
    - Table C has the resource production information for a given user (including the number of units to produce each day).
    - Table D has the number of units ever produced by any given user (identified by user id and resource id).

    Having said that, I need to run a batch job on a nightly basis to do the following:

    a. For all users, identify whether they have the funds needed to produce the number of resources specified in table C, and deduct the funds from table B if they are available (calculating the cost using table A).
    b. Start the process to produce resources, and after production is complete, update table D using values from table C.

    I figured the second part can be done with an UPDATE and a subquery. However, I'm not sure how I should go about doing part a. I can only think of using a cursor to fetch each row, examine it and update. Is there a single SQL statement that will help me avoid having to process each row manually? Additionally, if any rows weren't updated, the part b SQL should not produce resources for those users.

    Basically, I'm attempting to turn the SQL used for this logic, which currently lives in a stored procedure, into something that will run a lot faster (and won't process each row separately). Please let me know any ideas and thoughts. Thanks! - Azeem
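
    A sketch of a set-based approach to part a (SQL Server syntax shown; all table and column names are hypothetical stand-ins for tables A, B and C): the WHERE clause deducts funds only where the balance covers the nightly cost, and OUTPUT captures which users were actually charged so part b can be limited to them:

        -- Charge each user who can afford tonight's production, in one statement
        UPDATE b
        SET    b.funds = b.funds - calc.total_cost
        OUTPUT inserted.user_id                -- the users part b may produce for
        FROM   TableB AS b
        JOIN  (SELECT c.user_id,
                      SUM(c.units_per_day * a.price_per_unit) AS total_cost
               FROM   TableC AS c
               JOIN   TableA AS a ON a.resource_id = c.resource_id
               GROUP  BY c.user_id) AS calc
               ON calc.user_id = b.user_id
        WHERE  b.funds >= calc.total_cost;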

    Read the article

  • Facebook / Offline Permission - Trying to perform an action on a set of offline users.

    - by blueigloo
    Hi there, we're building an app which, as part of its functionality, tries to capture the number of likes associated with a particular video owned by a user. Users of the app are asked for extended offline access and we capture the session key for each user; the format is like this:

        2.hg2QQuYeftuHx1R84J1oGg__.XXXX.1272394800-nnnnnn

    Each user's offline/infinite key is stored in a table in a DB, along with the object_id of the video we're interested in. At a later stage (offline) we try to run a batch job which reads the number of likes for each user's video (see attached code). For some reason, however, after the first iteration of the loop, which yields the likes correctly, we get a failure with the oh-so-familiar message: "Session key is invalid or no longer valid". Any insight would be most appreciated. Thanks, B

        List<DVideo> videoList = db.SelectVideos();
        foreach (DVideo video in videoList)
        {
            long userId = 0;
            ConnectSession fbSession = new ConnectSession(APPLICATION_KEY, SECRET_KEY);

            // session key is attached to the video object for now
            fbSession.SessionKey = video.UserSessionKey;
            fbSession.SessionExpires = false;

            string fbuid = video.FBUID;
            long.TryParse(fbuid, out userId);

            if (userId > 0)
            {
                fbSession.UserId = userId;
                fbSession.Login();

                Api fbApi = new Facebook.Rest.Api(fbSession);
                string xmlQueryResult = fbApi.Fql.Query(
                    "SELECT user_id FROM like WHERE object_id = " + video.FBVID);

                XmlDocument xmlDoc = new XmlDocument();
                xmlDoc.Load(new StringReader(xmlQueryResult));
                int likesCount = xmlDoc.GetElementsByTagName("user_id").Count;

                // Write entry in VideoWallLikes
                if (likesCount > 0)
                {
                    db.CountWallLikes(video.ID, likesCount);
                }

                fbSession.Logout();
            }
            fbSession = null;
        }

    Read the article

  • Perl Capture and Modify STDERR before it prints to a file

    - by MicrobicTiger
    I have a Perl script which performs multiple external commands and prints their STDERR and STDOUT output to a logfile, along with a series of my own print statements that document the process. My problem is that STDERR repeats near-identical lines, as in the example below. I'd like to capture this before it prints and replace it with the final result for each of the commands I run.

        blocks evaluated : 0
        blocks evaluated : 10000
        blocks evaluated : 20000
        blocks evaluated : 30000
        ...
        blocks evaluated : 3420000
        blocks evaluated : 3428776

    Here's how I'm redirecting STDOUT and STDERR:

        my $logfile = "Logfile.log";    # log file name

        # --- Open log file for append if specified ---
        if ( $logfile ) {
            open ( OLDOUT, ">&", STDOUT ) or die "ERROR: Can't backup STDOUT location.\n";
            close STDOUT;
            open ( STDOUT, ">", $logfile ) or die "ERROR: Logfile [$logfile] cannot be opened.\n";
        }
        if ( $logfile ) {
            open ( OLDERR, ">&", STDERR ) or die "ERROR: Can't backup STDERR location.\n";
            close STDERR;
            open ( STDERR, '>&STDOUT' ) or die "ERROR: failed to pass STDERR to STDOUT.\n";
        }

    and closing them:

        close STDERR;
        open ( STDERR, ">&", OLDERR ) or die "ERROR: Can't fix that first thing you broke!\n";
        close STDOUT;
        open ( STDOUT, ">&", OLDOUT ) or die "ERROR: Can't fix that other thing you broke!\n";

    How do I access STDERR while each print is occurring, to do the replacement? Or how do I prevent a line from printing if it isn't the last of its batch? Many thanks in advance.
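
    One alternative sketch (the command name is a placeholder): rather than letting each external command write straight to the redirected STDERR, capture its combined output in the script and collapse the progress lines before printing, so only the final count reaches the log:

        # Run the command, keep everything except intermediate progress lines.
        my $out = `some_external_cmd 2>&1`;    # hypothetical external command
        my $last_progress;
        for my $line ( split /\n/, $out ) {
            if ( $line =~ /^blocks evaluated/ ) {
                $last_progress = $line;        # remember only the latest one
                next;
            }
            print "$line\n";
        }
        print "$last_progress\n" if defined $last_progress;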

    Read the article

  • Slowing process creation under Java?

    - by oconnor0
    I have a single, large-heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, after which the data created by those executables is loaded back into the JVM. Each executable produces about half a megabyte of data (on disk), which is, of course, larger once read in after the process finishes.

    Our first implementation had each executable handle only a single object. This involved spawning twice as many executables as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily at 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows: while starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds.

    Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds.

    Basically, why is it taking so long to create these processes as execution continues?

    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.

    Read the article
