Search Results

Search found 8429 results on 338 pages for 'batch processing'.

Page 41/338

  • ec2 spot instance for daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with Amazon AWS, so I hope someone can explain in simple terms or refer me to a good guide on how to achieve the below. I have a system running on EC2 and Amazon RDS that receives data and saves it to the database. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process takes approximately an hour to run and needs to run on a high-memory instance. From what I've read so far, I guess the best way to do it is to launch a high-memory spot instance every day, set it up to execute the script on startup, and shut it down when done. Is that the right way to do it? If so, how do I do it? How do I tell the spot instance to run every day: through a cron job on the other server, or is there a better way? How do I set it up to run the script on startup: through cloud-init? Any help would be appreciated. One last thing: the job is not very time sensitive, as long as it runs every day. Thanks
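
    A minimal sketch of the cron-plus-user-data approach the question describes, assuming the AWS CLI is configured on the always-on server; the schedule, price, script path, and spec file are all placeholders to adjust:

        # crontab entry on the always-on server (placeholder schedule: 23:30 daily)
        30 23 * * * aws ec2 request-spot-instances --spot-price "0.50" --instance-count 1 --launch-specification file://spot-spec.json

        # user-data script carried (base64-encoded) inside spot-spec.json; cloud-init
        # runs it at first boot as root:
        #!/bin/bash
        /opt/reports/daily_report.sh   # placeholder for the hour-long processing job
        shutdown -h now                # power off when done, releasing the spot instance

    The launch specification would also name the AMI and a high-memory instance type; since the job isn't time sensitive, a low spot bid that occasionally waits for capacity is acceptable.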

    Read the article

  • Is there a way to save "work sessions" in linux

    - by mike
    I regularly work on different projects, using different software. For project 1 I need to open, for example: FileZilla, Gedit, and Nautilus (set to a specific folder). For project 2 I need to open, for example: GIMP and Nautilus (set to another specific folder). Etc. What I would like is a kind of session manager, where I could create entries "project 1", "project 2", etc., and with one click or command open all the software I need. Perhaps there's an easy way to write a batch file for this? Any idea is welcome :)
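
    A per-project launcher script is one simple way to approximate this; a sketch, with the folder and file paths as placeholders:

        #!/bin/sh
        # project1.sh - open everything project 1 needs, each in the background
        filezilla &
        gedit ~/projects/project1/notes.txt &
        nautilus ~/projects/project1 &

    One such script per project, each wired to a desktop launcher or alias, gives the one-click behavior described.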

    Read the article

  • Launching multiple applications with a single command/script/shortcut

    - by Bill
    I realized a few days ago that every time I sit down at work, I do a few things after unlocking my computer. First, I open up Firefox, then I open up Chrome, then I log in to Digsby. I realized I could probably avoid repeating this daily by writing a small batch script to open up Firefox and Chrome, but I couldn't figure out how to make it work, and since the whole effort is to save time I don't want to bash my head around in the Windows command prompt to do it. I also tried this in PowerShell but ran into a bunch of security nonsense. Is there a way to do this that I am missing? Bonus points if somebody has figured out how to manipulate Digsby via COM, scripting, or Python =)
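
    For the Firefox/Chrome half, a plain batch file with the built-in start command is usually enough; a sketch, with the install paths as assumptions to adjust:

        @echo off
        rem start "" <path> - the empty quotes are the window title, not the program
        start "" "C:\Program Files\Mozilla Firefox\firefox.exe"
        start "" "C:\Users\%USERNAME%\AppData\Local\Google\Chrome\Application\chrome.exe"

    Saved as a .bat file and pinned to the Start menu or Startup folder, it launches both browsers with one double-click.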

    Read the article

  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:

        metadata.csv: contains an ID, followed by vendor name, a filename, etc.
        hashes.csv: contains an ID, followed by a hash

    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv:

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))

        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if it's one of the IDs we're looking for, print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()

    This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line.

    P.S. The csv files are the popular HashKeeper format and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter

    UPDATE: working solution below. Thanks everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)
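
    For what it's worth, the likely culprit in the original version is the "in stored_ids" test on a list, which is a linear scan repeated for every line of the 1 GB file; the script slows to a crawl rather than truly halting, and the update's switch to a dict fixes exactly that. A minimal Python 3 restatement of the working solution using a set, with the vendor name as a placeholder:

        import csv

        with open("metadata.csv", newline="") as f:
            # keep only the IDs belonging to the vendor; set membership is O(1)
            stored_ids = {row[0] for row in csv.reader(f) if row[2] == "SomeVendor"}

        with open("hashes.csv", newline="") as f:
            for row in csv.reader(f):          # csv.reader streams one row at a time
                if row[0] in stored_ids:
                    print("%s,%s" % (row[2], row[4]))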

    Read the article

  • PocketPC c++ windows message processing recursion problem

    - by user197350
    Hello, I am having a problem in a large-scale application that seems related to Windows messaging on the Pocket PC. What I have is a Pocket PC application written in C++. It has only one standard message loop:

        while (GetMessage(&msg, NULL, 0, 0))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

    We also have standard dlgProcs. In the switch of the dlgProc, we call a proprietary third-party API. This API uses a socket connection to communicate with another process. The problem I am seeing is this: whenever two of the same messages come in quickly (from the user clicking the screen twice, faster than they should), it seems as though recursion is created. Windows begins processing the first message, gets the API into a thread-safe state, and then jumps to process the next (identical UI) message. Since the second message also makes the API call, the call fails because it is locked. Because of the design of this legacy system, the API will stay locked until the recursion unwinds (which is also triggered by the user, so it could be locked for the entire working day). I am struggling to figure out exactly why this is happening and what I can do about it. Is this because Windows recognizes the socket communication will take time and preempts it? Is there a way I can force this API call to complete before preemption? Is there a way I can slow down the message processing or re-queue the message to ensure the first will execute (capturing it and doing a PostMessage back to itself didn't work)? We don't want to lock the UI down while the first call completes. Any insight is greatly appreciated! Thanks!!
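
    One common workaround for this kind of double-click re-entrancy, offered as a sketch rather than a diagnosis: guard the dlgProc with a static flag so the second identical message is swallowed while the API call is still in flight. IDC_SEND and CallThirdPartyApi are hypothetical stand-ins for the real control ID and proprietary call:

        #include <windows.h>

        extern void CallThirdPartyApi();   // stand-in for the proprietary, socket-backed call
        #define IDC_SEND 1001              // hypothetical button ID

        INT_PTR CALLBACK MyDlgProc(HWND hDlg, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            static bool s_apiCallInProgress = false;   // guards against re-entry

            switch (msg)
            {
            case WM_COMMAND:
                if (LOWORD(wParam) == IDC_SEND)
                {
                    if (s_apiCallInProgress)
                        return TRUE;                   // swallow the duplicate message

                    s_apiCallInProgress = true;
                    EnableWindow(GetDlgItem(hDlg, IDC_SEND), FALSE);  // also grey out the control
                    CallThirdPartyApi();
                    EnableWindow(GetDlgItem(hDlg, IDC_SEND), TRUE);
                    s_apiCallInProgress = false;
                    return TRUE;
                }
                break;
            }
            return FALSE;
        }

    This keeps the message pump alive (the UI is not locked) while making the duplicate click a no-op instead of a second, failing API call.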

    Read the article

  • How to display custom processing message in JQuery datatables

    - by Sukhi
    I am using the DataTables API to display data in my ASP.NET 4.0 application. I have one column [ Delete ] to delete the row data. When I click on this link I send a jQuery AJAX request to delete the row from the database. I want to display a message such as [ Deleting record... ] to the end user until the data is deleted by the server-side processing. I put a div on my page and write the message [ Deleting record... ] into it when I click on the delete link, but when the delete operation completes DataTables also displays its built-in [ Processing... ] message, which looks odd since two messages are showing. What can I do better to display the message to the end user?

    JS code:

        $('#tblVideoList .delete').live('click', function (e) {
            e.preventDefault();
            var oTable = $('#tblVideoList').dataTable();
            var aPos = oTable.fnGetPosition(this.parentNode);
            var aData = oTable.fnGetData(aPos[0]);
            if (confirm('Are you sure want to delete the record.')) {
                $("#divDelete").show();
                var today = new Date();
                $.ajax({
                    type: "GET",
                    cache: false,
                    url: "samplepage.aspx",
                    success: function (msg) {
                        $("#divDelete").hide();
                        oTable.fnDraw();
                    }
                });
            }
            return false;
        });

    Thanks
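
    One option worth noting: legacy (1.x) DataTables lets you replace the built-in [ Processing... ] text through the oLanguage.sProcessing initialisation option, so a single message element can serve instead of a separate div, though it applies to every processing event on that table, not only deletes. A sketch:

        $('#tblVideoList').dataTable({
            "bProcessing": true,
            "oLanguage": {
                "sProcessing": "Deleting record..."   // replaces the default "Processing..."
            }
        });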

    Read the article

  • Use continue or Checked Exceptions when checking and processing objects

    - by Johan Pelgrim
    I'm processing, let's say, a list of "Document" objects. Before I record the processing of a document as successful, I first want to check a couple of things. Let's say the file referring to the document should be present, and something in the document should be present. Just two simple checks for the example, but imagine 8 more checks before I have successfully processed my document. What would your preference be?

        for (Document document : documents) {
            if (!fileIsPresent(document)) {
                doSomethingWithThisResult("File is not present");
                continue;
            }
            if (!isSomethingInTheDocumentPresent(document)) {
                doSomethingWithThisResult("Something is not in the document");
                continue;
            }
            doSomethingWithTheSucces();
        }

    Or:

        for (Document document : documents) {
            try {
                fileIsPresent(document);
                isSomethingInTheDocumentPresent(document);
                doSomethingWithTheSucces();
            } catch (ProcessingException e) {
                doSomethingWithTheExceptionalCase(e.getMessage());
            }
        }

        public boolean fileIsPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("File is not present");
        }

        public boolean isSomethingInTheDocumentPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("Something is not in the document");
        }

    Which is more readable? Which is best? Is there even a better approach (maybe using a design pattern of some sort)? As far as readability goes, my preference currently is the exception variant... What is yours?
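
    Since the question invites alternatives: with ten or so checks, one pattern that scales is a list of small validators iterated per document, so adding a check means adding an entry rather than another branch. A hedged sketch in current Java (8+ for the lambdas), reusing the question's own method names:

        interface Check {
            // returns an error message, or null when the document passes
            String apply(Document document);
        }

        List<Check> checks = Arrays.asList(
            doc -> fileIsPresent(doc) ? null : "File is not present",
            doc -> isSomethingInTheDocumentPresent(doc) ? null : "Something is not in the document"
            // ...eight more checks
        );

        for (Document document : documents) {
            String error = null;
            for (Check check : checks) {
                error = check.apply(document);
                if (error != null) break;      // stop at the first failed check
            }
            if (error != null) {
                doSomethingWithThisResult(error);
            } else {
                doSomethingWithTheSucces();
            }
        }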

    Read the article

  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device which is capable of sending 24 different voice streams at the same time. The device is connected via USB, using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a 4.5 Mbit transfer). Basically the application consists of 3 threads:

        Main thread - GUI, control, etc.
        Bus reader - continuously reads data from the device and saves it to a file buffer (there is no logic in this thread)
        Data interpreter - reads the data from the file buffer, converts it to samples, does simple sample processing, and saves the samples to separate wav files.

    The reason I used a file buffer is that I wanted to be sure I wouldn't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generation is pretty slow: for 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the hard drive from this process will increase the speed considerably. The second problem is that the file buffer gets really heavy for long records, and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1 GB to make it work properly.

    What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory from 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a cyclic buffer: one big array, where the Bus Reader saves the data and the Data Interpreter reads it. What do you think about it?

    [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes backed by a file.
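
    A bounded producer/consumer queue of chunks is one way to get both the big buffer and the two-thread synchronization in a few lines; a sketch using .NET's BlockingCollection (available from .NET 4), with the device-read and sample-processing calls as hypothetical stand-ins:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class BufferDemo
        {
            // 16384 chunks x 64 KB bounds the buffer at roughly 1 GB of RAM.
            static readonly BlockingCollection<byte[]> Chunks =
                new BlockingCollection<byte[]>(16384);

            static void Main()
            {
                // Bus reader: Add blocks when the buffer is full, so no samples are dropped.
                var reader = Task.Run(() =>
                {
                    while (DeviceHasData())                 // placeholder for the D2XX read loop
                        Chunks.Add(ReadChunkFromDevice());  // placeholder 64 KB device read
                    Chunks.CompleteAdding();
                });

                // Data interpreter: drains chunks in FIFO order until CompleteAdding is called.
                foreach (byte[] chunk in Chunks.GetConsumingEnumerable())
                    ProcessSamples(chunk);                  // placeholder wav-file processing

                reader.Wait();
            }

            static bool DeviceHasData() { return false; }                    // placeholder
            static byte[] ReadChunkFromDevice() { return new byte[65536]; }  // placeholder
            static void ProcessSamples(byte[] chunk) { }                     // placeholder
        }

    Compared with one giant array and manual read/write indices, the bounded collection buys the same backpressure behavior without hand-rolled locking.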

    Read the article

  • Disable MSBuild output of "Processing /ORDER options..."

    - by Jippers
    The output file from our project build has gone from 6 MB to over 75 MB of text. Diffing the last good build against the first one that blew up, the latest output contains a section like this:

        Processing /ORDER options
        External code objects not listed in the /ORDER file:
        ?onCallDisconnected@CallStateConnected@CallImpl@space@@UAEXV?$shared_ptr@VCallImpl@space@@@boost@@V?$shared_ptr@VGenericCall@space@@@5@K@Z ; framework.lib(CallStates.obj)
        ??_DBoolSetting@space@@QAEXXZ ; framework.lib(SettingValueImpl.obj)
        ...... continues for ~50MB
        ??$?0U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@?$allocator@U_Node@?$_Tree_nod@V?$_Tmap_traits@V?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@JU?$less@V?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@@2@V?$allocator@U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@2@$0A@@std@@@std@@@std@@QAE@ABV?$allocator@U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@1@@Z ; CallStatistics.obj
        Finished processing /ORDER options

    I'm not sure how this got in there, but does anyone know how to turn it off?

    Read the article

  • Assistance with CC Processing script

    - by JM4
    I am currently implementing a credit card processing script, most of it as provided by the merchant gateway. The code calls functions within a class and returns a string based on the response. The final PHP code I am using (details removed, of course) with example information is:

        <?php
        $gw = new gwapi;
        $gw->setLogin("username", "password");
        $gw->setBilling("John","Smith","Acme, Inc.","888","Suite 200", "Beverly Hills",
            "CA","77777","US","555-555-5555","555-555-5556","[email protected]",
            "www.example.com");
        // "CA","90210","US","[email protected]");
        $gw->setOrder("1234","Big Order",1, 2, "PO1234","65.192.14.10");
        $r = $gw->doSale("1.00","4111111111111111","1010");
        print $gw->responses['responsetext'];
        ?>

    Here setLogin allows me to log in, setBilling takes the sample consumer information, setOrder takes the order ID and description, and doSale takes the amount charged, the CC number, and the expiration date. When all the variables are validated and sent off for processing, a string is returned in the following format:

        response=1&responsetext=SUCCESS&authcode=123456&transactionid=23456&avsresponse=M&orderid=&type=sale&response_code=100

    where:

        response = transaction approved or declined
        responsetext = textual response
        authcode = transaction authorization code
        transactionid = payment gateway transaction ID
        avsresponse = AVS response code
        orderid = original order ID passed in the transaction request
        response_code = numeric mapping of processor response

    I am trying to solve the following:

        1. How do I take the data which is passed back and display it appropriately on the page? If the transaction failed, or the AVS code doesn't match my liking, or something else is wrong, an error is displayed to the consumer; if the transaction processed, they are taken to a completion page and the transaction ID is sent in SESSION as output to the consumer.
        2. If the response_code value matches a table of values, certain actions are taken, e.g. if code = 100, take the consumer to the success page; if code = 300, print a specific error on the original page; etc.
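
    For the first part, the response string is in standard query-string form, so PHP's built-in parse_str can split it into an array in one call; a hedged sketch of routing on the result (page names are placeholders, and session_start() is assumed to have run earlier):

        <?php
        // $r holds the raw gateway reply, e.g. "response=1&responsetext=SUCCESS&..."
        parse_str($r, $result);

        if ((int)$result['response_code'] === 100) {
            $_SESSION['transactionid'] = $result['transactionid'];
            header('Location: success.php');      // placeholder completion page
            exit;
        } elseif ((int)$result['response_code'] === 300) {
            $error = $result['responsetext'];     // display this on the original page
        }
        ?>

    The same $result array backs the code-to-action table in the second part: a switch on $result['response_code'] keeps all the mappings in one place.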

    Read the article

  • Groovy and XML: Not able to insert processing instruction

    - by rhellem
    Scenario: I need to update some attributes in an existing XML file. The file contains an XSL processing instruction, so when the XML is parsed and updated I need to add the instruction back before writing it to a file again. The problem is that, whatever I do, I'm not able to insert the processing instruction. Based on the Java example found at rgagnon.com I have created the code below.

    Example code:

        import groovy.xml.*

        def xml = '''|<something>
                     | <Settings>
                     | </Settings>
                     |</something>'''.stripMargin()

        def document = DOMBuilder.parse( new StringReader( xml ) )
        def pi = document.createProcessingInstruction('xml-stylesheet', 'type="text/xsl" href="Bp8DefaultView.xsl"');
        document.insertBefore(pi, document.documentElement)
        println document.documentElement

    This creates the output:

        <?xml version="1.0" encoding="UTF-8"?>
        <something>
         <Settings>
         </Settings>
        </something>

    What I want:

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="Bp8DefaultView.xsl"?>
        <something>
         <Settings>
         </Settings>
        </something>
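
    A likely explanation, for what it's worth: the processing instruction is inserted on the document node, before the root element, so printing document.documentElement serializes only the root element's subtree and can never show it. Serializing the whole document should include the PI; a sketch using the JAXP transformer that ships with the JDK:

        import javax.xml.transform.TransformerFactory
        import javax.xml.transform.dom.DOMSource
        import javax.xml.transform.stream.StreamResult

        def writer = new StringWriter()
        TransformerFactory.newInstance().newTransformer()
            .transform(new DOMSource(document), new StreamResult(writer))  // whole document, not just the root
        println writer.toString()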

    Read the article

  • Batch file that returns folder size

    - by Paul Wall
    Hi, I'm having space issues on my Vista machine and need to figure out what's taking up so much space. I would like to write a simple batch file that returns all folders under C:\ and the size of each folder. The dir command doesn't appear to return folder sizes. Unfortunately we don't have admin rights and can't install a third-party application, and we have other users in our group who also need this information. Thanks...
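
    A pure-batch sketch of one common approach, needing no admin rights or extra tools: dir /s prints a grand total at the end of its recursive listing, and the last "File(s)" line it emits for a folder carries that total, so a nested for /f can capture it:

        @echo off
        setlocal enabledelayedexpansion
        for /d %%D in ("C:\*") do (
            set "bytes=0"
            rem /-c strips thousands separators; the last "File(s)" line is the grand total
            for /f "tokens=3" %%S in ('dir /s /-c "%%D" 2^>nul ^| findstr /c:"File(s)"') do set "bytes=%%S"
            echo %%D  !bytes! bytes
        )

    Folders the account can't read simply report 0, which is itself a useful hint when hunting for space.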

    Read the article

  • Run Configuration Batch Script

    - by Karl
    For the purpose of a demonstration I'm looking for a way to start an Eclipse run configuration from outside, using a script (e.g. a Windows batch script). I have no idea if that's possible or how to do it. Does anybody have an idea?

    Read the article

  • batch update mysql table

    - by Yang
    I have a table with a column called time, in "HH:MM:SS" format. How can I do a batch update so that the value increments by 1 hour? Is it something like:

        update <table_name>
        set <time_column> = <time_column> + 3600
        where ...
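
    If the column is a MySQL TIME type, a bare "+ 3600" won't do what you'd expect: in numeric context 10:30:00 becomes the number 103000, not a count of seconds. ADDTIME is the usual way; a sketch with the table and column names as placeholders:

        -- adds one hour to every matching row
        UPDATE my_table SET time_col = ADDTIME(time_col, '01:00:00') WHERE ...;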

    Read the article

  • Changing hibernate batch size programatically

    - by user179056
    Hello, Is it possible to change hibernate.jdbc.batch_size programmatically? I understand hibernate.jdbc.batch_size is an application-level parameter; I wanted to know if I can use it specifically for certain HQL inserts and not others. I would change the code only for those HQL inserts. The big picture is that I need to introduce batch inserts to make the web application performant in some scenarios, but I do not want to jeopardize the normal inserts, which work right now. Any pointers would help. Thanks, Sameer
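
    Not a direct answer to the config question, but the pattern Hibernate's own documentation suggests for batch inserts is to flush and clear the session every N entities inside the loop, and that can be confined to just the code paths that need it, leaving normal inserts untouched. A sketch, assuming a plain Session and N = 50:

        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        for (int i = 0; i < items.size(); i++) {
            session.save(items.get(i));
            if (i % 50 == 0) {          // same order of magnitude as hibernate.jdbc.batch_size
                session.flush();        // push the batched inserts to JDBC
                session.clear();        // detach them so the session stays small
            }
        }
        tx.commit();
        session.close();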

    Read the article

  • Batch save in CastleProject ActiveRecord

    - by Alex
    I need to save thousands of records in a database. I am using CastleProject ActiveRecord. The loop that stores that many objects takes too long. Is it possible to run the saves in a batch using ActiveRecord? What is the recommended way to improve performance?
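
    One commonly cited lever in Castle ActiveRecord is wrapping the loop in a SessionScope, so all saves share one NHibernate session and flush together instead of paying per-Save session overhead; a hedged C# sketch, with MyRecord as a placeholder ActiveRecord type:

        using Castle.ActiveRecord;

        // One session for the whole loop; without a scope each Save manages its own.
        using (new SessionScope())
        {
            foreach (MyRecord record in records)
                record.Save();
        }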

    Read the article

  • Backup with Mercurial and Robocopy?

    - by Andrew Neely
    The Problem

    We would like to back up our critical files from several network shares to a removable hard drive. We want to automate the backup so we don't have to remember to run it, and it needs to finish overnight. Furthermore, we want to be able to preserve multiple versions of each file so we can back out of our users' mistakes more easily.

    Background Information

    I work in a large Windows-based enterprise with a centralized IT section responsible for all backups. Their backups are geared towards disaster recovery, not user error, and any non-disaster recovery requires upper-level management approval. Several times our backups have failed and we weren't notified. I do not have administrative rights to the server or to my desktop. We are trying to back up some 198,000 files spanning about 240 gigabytes. These files rarely change. Our backup drive is one terabyte.

    My Proposed Solution

    What I would like to do is write a batch file using Robocopy with the /mir option, along with Mercurial SCM to store all versions of the files. I would do an hg add followed by an hg commit before each execution of Robocopy to save the current state, and then make a mirrored copy of the file structures. The problem is that /mir will delete every folder not present in the source, and Mercurial stores the repository in a .hg folder in the destination folder. Does anybody know how I could either convince Mercurial to store the .hg folder elsewhere, or convince Robocopy not to delete it from the destination? I'm trying to avoid writing a custom program to do the copying.
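
    For the Robocopy half, the /XD switch excludes named directories from the operation, and directories excluded this way are also left alone by /MIR's purge pass, so protecting the .hg folder may be as simple as the sketch below; paths are placeholders, and it's worth verifying the /XD-with-/MIR behavior on a scratch copy first:

        rem nightly-backup.bat - commit the previous snapshot, then re-mirror the shares
        cd /d E:\backup
        hg add .
        hg commit -m "nightly snapshot %date%"
        robocopy "\\server\critical-share" "E:\backup" /MIR /XD .hg /R:2 /W:5 /LOG:E:\backup.log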

    Read the article

  • I need to take the first three letters of a filename and set it into a text file, in a certain fashion. How can I do this?

    - by JuniorD
    Okay, so I need to take the first three letters of each filename in a list of files and place them into a text file in a certain manner. I will provide examples below. Let's say that I have two files in the same directory, one called cougar.txt and the other bear.txt. These are in the animals directory. I need to take the first three letters of these names and transpose them into a text file along with the directory, in the following format:

        BEA = "animals/bear.txt"
        COU = "animals/cougar.txt"

    This should happen with whatever happens to be in the list. I'm fairly new to this sort of coding, so I'm not quite sure which language to use, and I'm learning as I go. This new challenge seems fairly daunting to me, and I would much appreciate it if you guys could help. Also, I'm using Windows 7. I've been attempting this all day, to no avail. Preferably done in batch, but if that is impossible I'm open to recommendations.
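
    Since Windows 7 ships with PowerShell, one line of it inside a batch file handles both the substring and the uppercasing (which is painful in pure batch); a sketch assuming the files sit in an animals subfolder and the result should land in list.txt:

        @echo off
        powershell -NoProfile -Command ^
          "Get-ChildItem 'animals' -Filter *.txt | ForEach-Object { '{0} = \"animals/{1}\"' -f $_.BaseName.Substring(0,3).ToUpper(), $_.Name }" > list.txt

    For bear.txt and cougar.txt this writes exactly the two lines shown in the example above.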

    Read the article

  • Command line switching

    - by Larry
    I have read through some suggestions, but I am just not technical enough to get this, I think. I am a CAD designer, and each file has 5 files associated with it. I have 3 sets of 5 files, and each set needs to go into its own zip file, placed on a separate server. For example:

        "C:\Program Files\7-zip\7z.exe" a file1.zip "O:\server2\map files\BC\BC.d*"-0
        "C:\Program Files\7-zip\7z.exe" a file2.zip "O:\server2\map files\BC\ON.d*"-0
        "C:\Program Files\7-zip\7z.exe" a file3.zip "O:\server2\map files\BC\AB.d*"-0

    I am in directory "S:\server\map files\provinces" (for example). These lines run within an existing batch file, and by the time it reaches the 3 lines above, it's in the S: directory from the sample above. So it's looking on my PC for the 7-Zip program and creating 3 zip file names, which it does. But it should place those zip files on a separate server, which it doesn't; and the first zip file also includes all the other 10 files, the second zip file the same plus the first zip file, and the third the same with the other two zip files, making me think the code isn't recognizing the part after file1.zip where I try to tell it which files to include and where to place the zip files. Ultimately, I want to either have the system create a new zip file if the old one was deleted, or copy the new files into the existing zip and overwrite any older files, and I want these zip files placed in a separate location, which is where we share our files with other personnel from within our company. The S: drive is for all originals, and O: is for sharing. Is there a list of all switch options with many different samples?
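
    A hedged sketch of how lines like these are usually written, assuming (per the last paragraph) the originals under S:\ should be zipped onto the shared O:\ location: the archive argument can point straight at the O: share, the u (update) command creates the archive when missing and overwrites older entries when present, and the wildcard arguments after the archive name select exactly what goes in. The paths reuse the question's own examples:

        set SEVENZIP="C:\Program Files\7-zip\7z.exe"
        rem u = add new files and update changed ones; creates the .zip if it doesn't exist
        %SEVENZIP% u -tzip "O:\server2\map files\BC\file1.zip" "S:\server\map files\provinces\BC.d*"
        %SEVENZIP% u -tzip "O:\server2\map files\BC\file2.zip" "S:\server\map files\provinces\ON.d*"
        %SEVENZIP% u -tzip "O:\server2\map files\BC\file3.zip" "S:\server\map files\provinces\AB.d*"

    Running 7z.exe with no arguments prints the full list of commands and switches with short descriptions, which answers the closing question.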

    Read the article
