Search Results

Search found 13550 results on 542 pages for 'processing js'.


  • File Sync Solution for Batch Processing (ETL)

    - by KenFar
    I'm looking for a slightly different kind of sync utility - not one designed to keep two directories identical, but rather one intended to keep files flowing from one host to another. The context is a data warehouse that currently has a custom-developed solution that moves 10,000 files a day, some of which are 1 GB+ gzipped files, between Linux servers via SSH. Files are produced by the extract process, then moved to the transform server where a transform daemon is waiting to pick them up. The same process happens between transform and load. Once the files are moved they are typically archived on the source for a week, and the downstream process likewise moves them to temp and then to archive as it consumes them. So, my requirements and desires:
    - It is never used to refresh updated files - only to deliver new files.
    - Because it's delivering files to downstream processes, it needs to rename the file once done so that a partial file doesn't get picked up.
    - In order to simplify recovery, it should keep a copy of the source files - but rename them or move them to another directory.
    - If the transfer fails (network down, file system full, permissions, file locked, etc.), it should retry periodically - and never fail in a non-recoverable way, or in a way that sends the file twice or never sends it.
    - It should be able to copy files to 2+ destinations.
    - It should have a consolidated log so that it's easy to find problems.
    - It should have an optional checksum feature.
    Any recommendations? Can Unison do this well?
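    For illustration, here is a minimal Python sketch of the hand-off pattern described above - push the file under a temporary name, rename it so the downstream daemon never picks up a partial file, and keep a renamed copy on the source for recovery. The paths, host names and the use of scp/ssh via subprocess are assumptions made for the example, not a recommendation of a particular tool:

        import os
        import shutil
        import subprocess

        def deliver(src_path, remote_host, remote_dir, archive_dir):
            """Copy one file downstream, then atomically publish it."""
            name = os.path.basename(src_path)
            tmp_name = name + ".part"            # downstream daemon ignores *.part

            # 1. push to a temporary name on the destination host
            subprocess.check_call(
                ["scp", src_path, "%s:%s/%s" % (remote_host, remote_dir, tmp_name)])

            # 2. rename on the destination so the transform daemon only ever
            #    sees complete files (rename is atomic within one filesystem)
            subprocess.check_call(
                ["ssh", remote_host,
                 "mv %s/%s %s/%s" % (remote_dir, tmp_name, remote_dir, name)])

            # 3. keep a copy on the source for recovery instead of deleting it
            shutil.move(src_path, os.path.join(archive_dir, name))

    Retries, multiple destinations, checksums and consolidated logging would wrap around this core; the sketch only shows the temporary-name-then-rename step, which is the requirement that rules out plain directory-mirroring tools.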


  • SCCM not processing hardware Inventory

    - by Sreekumar
    We have some workstations that will not import hardware inventory into the SCCM 2007 database. What I've done on the client:
    - verified the workstation object is discovered in SCCM
    - installed the CCM client on the workstation
    - manually ran the hardware inventory action
    - verified the inventory report was sent successfully ("Successfully sent report. Destination: mp:MP_HinvEndpoint, ID .....")
    - watched the file enter and exit the temp folder on the client
    - successfully ran MP Spy to verify the client communicates with the server
    - uninstalled the client, deleted the ccm and ccmsetup folders and reinstalled
    On the server:
    - no entries in the MP_Hinv.log file that correspond with the time stamp from the workstation
    - no entries in the dataldr.log file that correspond with the time stamp from the workstation
    Where are these files going? All the troubleshooting blogs expect entries in these logs. This is driving me crazy.


  • Red Hat 5.4 slow processing

    - by yucefrizk
    I'm running Red Hat Linux 5.4 on an HP DL580 server with 16 processors and 64 GB of RAM. I'm connecting to the server remotely through SSH. After entering the password, it takes a long time to return the command line; if I press Ctrl+C during this time, I get a command prompt but not the correct bash prompt (I have to run bash to get to my correct prompt). I tried to install Apache on the server: ./configure took 4 hours to finish instead of one or two minutes, and an Oracle installation showed the same behaviour. The server disks are mirrored using a RAID controller. Any idea what could be the reason for this slowness?


  • Bad HD video deinterlacing processing

    - by Guy Fawkes
    I have Ubuntu 12.04 32-bit with Unity. My system configuration is: CPU: Core 2 Quad Q6600 (2.4 GHz); RAM: 8192 MB DDR2 Kingston; Video: Palit GeForce GTX 260 216 SP; and my screen resolution is 1680x1050. I also have Windows 7 Ultimate installed, and there I can watch the same files in Media Player Classic without any horizontal lines. I've installed the VDPAU driver, NVIDIA drivers 304.51, and MPlayer 2 (within SMPlayer). I've disabled the "Sync to VBlank" option in CCSM (because otherwise, by default, the MPlayer process uses about 50-60 percent of my CPU), tried to switch between different deinterlace options in SMPlayer, used the "-vc ffh264vdpau,ffmpeg12vdpau" (without quotes) parameters for MPlayer, and switched to "Ubuntu 2D", but still have no results. Any suggestions? How should I set up MPlayer?


  • Rails 2 and Nginx: https pages can't load css or js (but will load graphics)

    - by Max Williams
    ADMISSION: I've posted this same question on Stack Overflow, before realising it's probably better suited to Super User, but it kind of depends on the answer: if it turns out to be a problem in my nginx config, it's definitely Super User; if it turns out to be a problem in my Rails config (or code) then it's arguably Stack Overflow. I'm adding some https pages to my Rails site. In order to test it locally, I'm running my site under one mongrel_rails instance (on 3000) and nginx. I've managed to get my nginx config to the point where I can actually go to the https pages, and they load. Except the JavaScript and CSS files all fail to load: looking in the Network tab in the Chrome web tools, I can see that it is trying to load them via an https URL. E.g. one of the non-working file URLs is https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216 - I have these set up (or at least think I do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for CSS and JS files. If I click on this in the Network tab, it takes me to the above URL, which redirects to the http version. So the redirect seems to be working in some sense, but not when they're loaded by an https page. Like I say, I thought I had this covered in the second try_files directive in my config below, but maybe not. Can anyone see what I'm doing wrong? Thanks, Max. Here's my nginx config - sorry it's a bit lengthy! I think the error is likely to be in the first (SSL) server block:
    server {
        listen 443 ssl;
        keepalive_timeout 70;
        ssl_certificate /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt;
        ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;
        ssl_protocols SSLv3 TLSv1;
        ssl_ciphers RC4:HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;
        server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
        root /home/max/work/charanga/elearn_container/elearn;

        # ensure that we serve css, js, other statics when requested
        # as SSL, but if the files don't exist (i.e. any non /basket controller)
        # then redirect to the non-https version
        location / {
            try_files $uri @non-ssl-redirect;
        }

        # securely serve everything under /basket (/basket/checkout etc)
        # we need general too, because of the email/username checking
        location ~ ^/(basket|general|cmw/account/check_username_availability) {
            # make sure cached copies are revalidated once they're stale
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            # this serves Rails static files that exist without running
            # other rewrite tests
            try_files $uri @rails-ssl;
            expires 1h;
        }

        location @non-ssl-redirect {
            return 301 http://$host$request_uri;
        }

        location @rails-ssl {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_read_timeout 180;
            proxy_next_upstream off;
            proxy_pass http://127.0.0.1:3000;
            expires 0d;
        }
    }

    #upstream elrs {
    #    server 127.0.0.1:3000;
    #}

    server {
        listen 80;
        server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
        root /home/max/work/charanga/elearn_container/elearn;
        access_log /home/max/work/charanga/elearn_container/elearn/log/access.log;
        error_log /home/max/work/charanga/elearn_container/elearn/log/error.log debug;
        client_max_body_size 50M;
        index index.html index.htm;

        # gzip html, css & javascript, but don't gzip javascript for pre-SP2 MSIE6
        # (i.e. those *without* SV1 in their user-agent string)
        gzip on;
        gzip_http_version 1.1;
        gzip_vary on;
        gzip_comp_level 6;
        gzip_proxied any;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #text/html

        # make sure gzip does not lose large gzipped js or css files
        # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl
        gzip_buffers 16 8k;

        # Disable gzip for certain browsers.
        #gzip_disable "MSIE [1-6].(?!.*SV1)";
        gzip_disable "MSIE [1-6]";

        # blank gif like it's 1995
        location = /images/blank.gif { empty_gif; }

        # don't serve files beginning with dots
        location ~ /\. { access_log off; log_not_found off; deny all; }

        # we don't care if these are missing
        location = /robots.txt { log_not_found off; }
        location = /favicon.ico { log_not_found off; }
        location ~ affiliate.xml { log_not_found off; }
        location ~ copyright.xml { log_not_found off; }

        # convert urls with multiple slashes to a single /
        if ($request ~ /+ ) {
            rewrite ^(/)+(.*) /$2 break;
        }

        # X-Accel-Redirect
        # Don't tie up mongrels with serving the lesson zips or exes, let Nginx do it instead
        location /zips {
            internal;
            root /var/www/apps/e_learning_resource/shared/assets;
        }
        location /tmp {
            internal;
            root /;
        }
        location /mnt {
            root /;
        }

        # resource library thumbnails should be served as usual
        location ~ ^/resource_library/.*/*thumbnail.jpg$ {
            if (!-f $request_filename) {
                rewrite ^(.*)$ /images/no-thumb.png break;
            }
            expires 1m;
        }

        # don't make Rails generate the dynamic routes to the dcr and swf, we'll do it here
        location ~ "lesson viewer.dcr" {
            rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break;
        }

        # we need this rule so we don't serve the older lessonviewer when the rule below is matched
        location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf {
            rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break;
        }
        location ~ v6lessonViewer.swf {
            rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break;
        }
        location ~ lessonViewer.swf {
            rewrite ^(.*)$ /assets/players/lessonViewer.swf break;
        }
        location ~ lgn111.dat { empty_gif; }

        # try to get autocomplete school names from memcache first, then
        # fallback to rails when we can't
        location /schools/autocomplete {
            set $memcached_key $uri?q=$arg_q;
            memcached_pass 127.0.0.1:11211;
            default_type text/html;
            error_page 404 =200 @rails; # 404 not really! Hand off to rails
        }

        location / {
            # make sure cached copies are revalidated once they're stale
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            # this serves Rails static files that exist without running other rewrite tests
            try_files $uri @rails;
            expires 1h;
        }

        location @rails {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_read_timeout 180;
            proxy_next_upstream off;
            proxy_pass http://127.0.0.1:3000;
            expires 0d;
        }
    }


  • Software that can identify and remove foreground objects from image

    - by Antonio2011a
    I am interested in removing extraneous objects from an image. More specifically, the situation is that there is a series of images of a particular background. In each image there are objects in the foreground; however, these objects differ across the series of images in terms of their location. Note that some objects always exist in the foreground. The background is static. An example might be a busy tourist spot where you want to remove the people or tourist buses from the image. I'd like the software to take the series of images and, as much as possible, reconstruct the background. Is there software available that has this capability? If so, what are the steps necessary to use that functionality? Similarly, if anyone knows a lot about image processing, are there any image processing algorithms that could handle this? Thanks.
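    For reference, the standard algorithmic trick for this situation is a per-pixel temporal median across the series: because the background is static and the foreground objects sit in different places in different shots, the median of each pixel over the stack tends to recover the background. A rough Python sketch, assuming the images are already loaded as equally sized numpy arrays (loading and any alignment are left out):

        import numpy as np

        def reconstruct_background(frames):
            """frames: list of HxWx3 uint8 arrays of the same scene, camera fixed."""
            stack = np.stack(frames, axis=0)      # shape (N, H, W, 3)
            # per-pixel median over the N images; a moving object rarely covers
            # the same pixel in most frames, so the median falls back to background
            background = np.median(stack, axis=0)
            return background.astype(np.uint8)

    This only recovers pixels that are unobstructed in at least half of the frames, which matches the caveat above: objects that sit in the same place in every image cannot be reconstructed from the series alone.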


  • Minimize HTML, CSS and JS files

    - by karmic
    How do you automatically pack/minimize the HTML, CSS and JS files served on a webpage? More specifically, I wish to have this for a WordPress website. Should it be done at the web server level (lighttpd), at the application level (WordPress), at the PHP level, or somewhere else?
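    As background, minification itself is mostly stripping comments and redundant whitespace (real minifiers also rewrite identifiers, inline small resources, and so on). A toy Python illustration of the idea - not something to use in production, and not tied to any of the layers mentioned above:

        import re

        def naive_minify_css(css):
            """Toy CSS minifier: strip /* */ comments and collapse whitespace."""
            css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)   # drop comments
            css = re.sub(r"\s+", " ", css)                    # collapse whitespace
            css = re.sub(r"\s*([{};:,])\s*", r"\1", css)      # trim around punctuation
            return css.strip()

        print(naive_minify_css("body {  color: red;  /* brand colour */ }"))
        # -> body{color:red;}

    Whichever layer ends up doing this, the important part is caching the minified output so it is not recomputed on every request.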


  • Prevent Linux from processing incoming ICMP Host unreachable packets

    - by bbc
    I have a test setup with one host on a network (10.1.0.0/16) talking via TCP to another one on another network (10.2.0.0/16) and a gateway in the middle. Sometimes the TCP connection is lost, and while scanning the trace (pcap), it looks like it's because of just one ICMP Host unreachable message sent by the gateway to 10.1.0.1 at some point. 10.1.0.1 then sends a TCP RST to 10.2.0.1. In my opinion, the gateway (pfSense) is broken or not configured correctly, but anyway, for testing purposes, I'd like to block this kind of ICMP on the host (10.1.0.1) before it has an influence on my TCP connection (or does it? I'm not even sure). I've tried iptables: iptables -I INPUT -i eth0 -p icmp --icmp-type host-unreachable -j DROP but while it does a good job at preventing userspace applications like ping from receiving these ICMP messages, my TCP connection still comes to an end when the alleged "killer ICMP packet" is sent by the gateway. Am I right about how it is processed? If yes, then what can I do to achieve my goal?


  • Receiving and processing SMS messages through a script?

    - by ShankarG
    I am attempting to setup a system to receive and process SMS messages automatically. The system is intended for use in a context (an unfunded migrant workers' union in India) where both finances and sysadmin skills are extremely constrained (I would be the only person, in the near future, who would be administering the system). The intention is to make some functions - registration of members, generation of ID cards, communication of alerts and other information - easier. However, for receiving and sending SMS, I have not been able to find any email to SMS or other kind of gateway that functions in India. Perhaps there is one (edit: apparently Clickatell does have an India service, but the prices appear astronomical). If not, can one rely on a USB mobile modem (such as those provided by many mobile providers in India)? It seems like, with utilities such as gammu or bitpim, SMS operations on such a modem could be scripted. Is this actually feasible, though? Thanks in advance for your thoughts and suggestions. edit: Original first question removed since the two questions had little to do with each other. The original first question has been asked separately here
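    On the scripting side, gammu can be driven from a script once the USB modem shows up as a serial device. A rough Python sketch of the kind of polling loop described above, shelling out to the gammu command-line tool; the exact sub-commands and their output format are from memory and should be checked against the gammu documentation for the installed version:

        import subprocess
        import time

        def send_sms(number, text):
            # send one message through the modem via the gammu CLI
            subprocess.check_call(["gammu", "sendsms", "TEXT", number, "-text", text])

        def dump_inbox():
            # return the raw listing of stored messages; parsing is up to the caller
            return subprocess.check_output(["gammu", "getallsms"])

        while True:                     # crude polling loop for incoming messages
            raw = dump_inbox()
            # ... parse raw, register members, queue confirmations, etc. ...
            time.sleep(60)

    gammu's companion daemon (gammu-smsd) can handle the receiving side more robustly by handing each incoming message to a script or database, which may be a better fit than polling for a single-admin setup.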


  • ec2 spot instance for daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with Amazon AWS, so I hope someone can explain in simple terms or refer me to a good guide on how to achieve the below. I have a system running on EC2 and Amazon RDS, getting data in and saving it to the DB. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process will take approximately an hour to run, and it needs to run on a high-memory instance. From what I've read so far, I guess the best way to do it is to have a high-memory spot instance run every day, set it up to execute the script on startup, and shut down when done. Is that the right way to do it? If so, how do I do it? How do I tell the spot instance to run every day - through a cron job on the other server, or is there a better way? How do I set it up to run the script on startup - through cloud-init? Any help would be appreciated. One last thing: the job is not very time sensitive as long as it runs every day. Thanks.
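    A rough sketch of the cron-driven approach, using boto3's spot-request call with a user-data script that runs the report and then powers the instance off. The AMI ID, instance type, region, bid price and report command are placeholders, and the parameter details should be double-checked against the boto3 documentation:

        import base64
        import boto3

        USER_DATA = """#!/bin/bash
        /home/ubuntu/run_daily_report.sh   # placeholder for the processing script
        shutdown -h now                    # stop the instance when the job is done
        """

        def launch_daily_worker():
            ec2 = boto3.client("ec2", region_name="us-east-1")      # placeholder region
            ec2.request_spot_instances(
                SpotPrice="0.10",                                   # placeholder bid
                InstanceCount=1,
                LaunchSpecification={
                    "ImageId": "ami-xxxxxxxx",                      # placeholder AMI
                    "InstanceType": "r3.large",                     # any high-memory type
                    "UserData": base64.b64encode(USER_DATA.encode()).decode(),
                },
            )

        if __name__ == "__main__":
            launch_daily_worker()   # run once a day from cron on the existing server

    The cron entry on the existing server answers the "how do I tell it to run every day" part, and cloud-init executes the user data at boot, so nothing ever has to log in to the spot instance interactively.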


  • Can't call animated png script from external js file

    - by Tomas
    Hello, I'm trying to add an animated PNG dynamically from an external .js file. First I found a simple animated PNG solution, which draws an animation wherever you put the code within script tags, but now it looks like I don't know how to call the function properly from an external file. The script is from www.squaregoldfish.co.uk/software/animatedpng, and it looks something like this: <script type="text/javascript" src="animatedpng.js"></script> <div id="pnganim" align="center"> <script> fishAnim = new AnimatedPNG('fish', 't01.png', 3, 100); fishAnim.draw(false); </script> </div> Now, I'm trying to call this from an external .js file with jQuery: function addFish(){ $('#pnganim').html('<script type="text/javascript" src="animatedpng.js" />'); fishAnim = new AnimatedPNG('fish', 'fish01.png', 3, 100); myFish = fishAnim.draw(false); $('#pnganim').append(myFish); } ... and it's not working. After I click a button that calls the addFish function, it opens only the first frame on a blank page. What am I doing wrong here? Thanks


  • Dynamically load Jquery into .js page

    - by RussP
    Please excuse me if I'm being simple here. I want to create a simple widget that people can access from their websites - e.g. by copying/pasting something like <script language="javascript" src="test2.js"></script> <div id="test"></div> anywhere in their web pages, where the div is dynamically filled via jQuery and the functions in/on test2.js. I can do it if jQuery is actually "printed" on the page from test2.js, but I cannot get any jQuery functions to work if I try to include/call jQuery dynamically. How do you call jQuery via JavaScript and then get it to work with the functions on the page? And/or is there an easy way to add the <div id="test"></div> dynamically as well? Sure, I can body.append etc., but that only adds at the bottom of the page. Is there a way to .append in the position where the script include <script language="javascript" src="test2.js"></script> is actually placed? Hope I make sense.


  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:
    - metadata.csv: contains an ID, followed by vendor name, a filename, etc.
    - hashes.csv: contains an ID, followed by a hash
    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract out all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv:
        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))

        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if its one of the IDs we're looking for, print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()
    This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line. PS: the csv files are the popular HashKeeper format and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter UPDATE: working solution below. Thanks everyone who commented!
        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0],1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)
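    The reason the dict-based rewrite in the update fixes the halt is that a membership test ("x in stored_ids") on a list is a linear scan, so the inner loop was doing up to len(stored_ids) string comparisons for every line of the 1 GB file; against a dict or set it is a single hash lookup. A set is the most direct fit when the values are never used - a sketch, reusing the options object from the script above:

        import csv

        entries = csv.reader(open(options.entries, "rb"))
        # build the set once: membership tests become O(1) hash lookups
        stored_ids = set(row[0] for row in entries if row[2] == options.vendor)

        for line in open(options.hashes, "rb"):
            fields = line.split(",")
            if fields[0] in stored_ids:          # hash lookup, not a list scan
                print "%s,%s" % (fields[2], fields[4])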


  • PocketPC c++ windows message processing recursion problem

    - by user197350
    Hello, I am having a problem in a large-scale application that seems related to Windows messaging on the Pocket PC. What I have is a Pocket PC application written in C++. It has only one standard message loop: while (GetMessage (&msg, NULL, 0, 0)) { { TranslateMessage (&msg); DispatchMessage (&msg); } } We also have standard dlgProcs. In the switch of the dlgProc, we call a proprietary third-party API. This API uses a socket connection to communicate with another process. The problem I am seeing is this: whenever two of the same messages come in quickly (from the user clicking the screen twice when they shouldn't), it seems as though recursion is created. Windows begins processing the first message, gets the API into a thread-safe state, and then jumps to process the next (identical UI) message. Since the second message also makes the API call, that call fails because the API is locked. Because of the design of this legacy system, the API will stay locked until the recursion comes back out (which is also triggered by the user, so it could be locked for the entire working day). I am struggling to figure out exactly why this is happening and what I can do about it. Is this because Windows recognizes that the socket communication will take time and preempts it? Is there a way I can force this API call to complete before preemption? Is there a way I can slow down the message processing or re-queue the message to ensure the first will execute (capturing it and doing a PostMessage back to itself didn't work)? We don't want to lock the UI down while the first call completes. Any insight is greatly appreciated! Thanks!!


  • How to display custom processing message in JQuery datatables

    - by Sukhi
    I am using the DataTables API to display data in my ASP.NET 4.0 application. I have one column [ Delete ] to delete the row data. When I click on this link I send a jQuery AJAX request to delete the row from the database. I want to display a message such as [ Deleting record... ] to the end user until the data is deleted by the server-side processing. I put a div on my page and write the message [ Deleting record... ] in it; when I click on the delete link I display that message, but when the delete operation completes the table also displays the message [ Processing... ] (which is the built-in message of DataTables), which looks odd as two messages are shown. What can I do better to display the message to the end user? JS code: $('#tblVideoList .delete').live('click', function (e) { e.preventDefault(); var oTable = $('#tblVideoList').dataTable(); var aPos = oTable.fnGetPosition(this.parentNode); var aData = oTable.fnGetData(aPos[0]); if (confirm('Are you sure want to delete the record.')) { $("#divDelete").show(); var today = new Date(); $.ajax({ type: "GET", cache: false, url: "samplepage.aspx", success: function (msg) { $("#divDelete").hide(); oTable.fnDraw(); } }); } return false; }); Thanks


  • Use continue or Checked Exceptions when checking and processing objects

    - by Johan Pelgrim
    I'm processing, let's say, a list of "Document" objects. Before I record the processing of a document as successful I first want to check a couple of things. Let's say the file referred to by the document should be present, and something in the document should be present. Just two simple checks for the example, but think about 8 more checks before I have successfully processed my document. What would your preference be? for (Document document : List<Document> documents) { if (!fileIsPresent(document)) { doSomethingWithThisResult("File is not present"); continue; } if (!isSomethingInTheDocumentPresent(document)) { doSomethingWithThisResult("Something is not in the document"); continue; } doSomethingWithTheSucces(); } Or for (Document document : List<Document> documents) { try { fileIsPresent(document); isSomethingInTheDocumentPresent(document); doSomethingWithTheSucces(); } catch (ProcessingException e) { doSomethingWithTheExceptionalCase(e.getMessage()); } } public boolean fileIsPresent(Document document) throws ProcessingException { ... throw new ProcessingException("File is not present"); } public boolean isSomethingInTheDocumentPresent(Document document) throws ProcessingException { ... throw new ProcessingException("Something is not in the document"); } Which is more readable? Which is best? Is there even a better approach to doing this (maybe using a design pattern of some sort)? As far as readability goes, my preference currently is the Exception variant... What is yours?


  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device, which is capable of sending 24 different voice streams at the same time. The device is connected via USB, using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a transfer of 4.5 Mbit). Basically the application consists of 3 threads:
    - Main thread - GUI, control, etc.
    - Bus reader - in this thread data is continuously read from the device and saved to a file buffer (there is no logic in this thread)
    - Data interpreter - this thread reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files
    The reason I used a file buffer is that I wanted to be sure I wouldn't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generator is pretty slow: for 24 parallel recordings of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the use of the hard drive in this process will increase the speed considerably. The second problem is that the file buffer is really heavy for long recordings and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1 GB to make it work properly. What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory in 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a circular buffer: one big array, the bus reader saves the data, the data interpreter reads it. What do you think about it? [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes backed by a file.
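    Language aside (the sketch below is Python purely to illustrate the shape; the same structure maps onto .NET with a preallocated byte[] plus lock/Monitor, or a BlockingCollection), the usual answer to "big buffer shared by one producer and one consumer" is a bounded circular/blocking buffer: the bus reader blocks when it gets too far ahead instead of growing memory without limit, and the blocking itself is the synchronisation:

        import threading
        from collections import deque

        class BoundedBuffer(object):
            """Bounded FIFO of data blocks shared by one producer and one consumer."""

            def __init__(self, max_blocks):
                self._blocks = deque()
                self._max = max_blocks
                lock = threading.Lock()
                self._not_full = threading.Condition(lock)
                self._not_empty = threading.Condition(lock)

            def put(self, block):                 # called by the bus-reader thread
                with self._not_full:
                    while len(self._blocks) >= self._max:
                        self._not_full.wait()     # back-pressure instead of unbounded growth
                    self._blocks.append(block)
                    self._not_empty.notify()

            def get(self):                        # called by the data-interpreter thread
                with self._not_empty:
                    while not self._blocks:
                        self._not_empty.wait()
                    block = self._blocks.popleft()
                    self._not_full.notify()
                    return block

    In Python this is exactly what queue.Queue(maxsize=...) provides; it is spelled out here only to show the two wait conditions that a .NET implementation over one big preallocated array would need.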


  • Sorcery/Capybara: Cannot log in with :js => true

    - by PlankTon
    I've been using Capybara for a while, but I'm new to Sorcery. I have a very odd problem whereby if I run the specs without Capybara's :js => true functionality I can log in fine, but if I specify :js => true on a spec, the username/password cannot be found. Here's the authentication macro: module AuthenticationMacros def sign_in user = FactoryGirl.create(:user) user.activate! visit new_sessions_path fill_in 'Email Address', :with => user.email fill_in 'Password', :with => 'foobar' click_button 'Sign In' user end end Which is called in specs like this: feature "project setup" do include AuthenticationMacros background do sign_in end scenario "creating a project" do "my spec here" end The above code works fine. However, if I change the scenario spec from (in this case) scenario "adding questions to a project" do to scenario "adding questions to a project", :js => true do login fails with an 'incorrect username/password' combination. Literally the only change is the :js => true. I'm using the default Capybara JavaScript driver (it loads up Firefox). Any ideas what could be going on here? I'm completely stumped. I'm using Capybara 2.0.1 and Sorcery 0.7.13. There is no JavaScript on the sign-in page, and save_and_open_page before clicking 'Sign In' confirms that the correct details are entered into the username/password fields. Any suggestions really appreciated - I'm at a loss.


  • Disable MSBuild output of "Processing /ORDER options..."

    - by Jippers
    The output file from our project build has gone from 6MB to over 75MB in text. Diff'ing the last good build and the first time it blew up, there's a section in the output file like this in the latest: Processing /ORDER options External code objects not listed in the /ORDER file: ?onCallDisconnected@CallStateConnected@CallImpl@space@@UAEXV?$shared_ptr@VCallImpl@space@@@boost@@V?$shared_ptr@VGenericCall@space@@@5@K@Z ; framework.lib(CallStates.obj) ??_DBoolSetting@space@@QAEXXZ ; framework.lib(SettingValueImpl.obj) ...... continues for ~50MB ??$?0U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@?$allocator@U_Node@?$_Tree_nod@V?$_Tmap_traits@V?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@JU?$less@V?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@@2@V?$allocator@U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@2@$0A@@std@@@std@@@std@@QAE@ABV?$allocator@U?$pair@$$CBV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@J@std@@@1@@Z ; CallStatistics.obj Finished processing /ORDER options I'm not sure how this got in there, but anyone know how to turn it off?


  • Batch processing JDBC

    - by Wai Hein
    I am practicing JDBC batch processing and having errors:
    - error 1: Unsupported feature
    - error 2: Execute cannot be empty or null
    The property file includes:
    itemsdao.updateBookName = Update Books set bookname = ? where books.id = ?
    itemsdao.updateAuthorName = Update books set authorname = ? where books.id = ?
    I know I can execute both DML statements in one update, but I am practicing batch processing in JDBC. Below is my method:
        public void update(Item item) {
            String query = null;
            try {
                connection = DbConnector.getConnection();
                property = SqlPropertiesLoader.getProperties("dml.properties");
                connection.setAutoCommit(false);

                if ( property == null ) {
                    Logging.log.debug("dml.properties does not exist. Check property loader or file name is spelled right");
                    return;
                }

                query = property.getProperty("itemsdao.updateBookName");
                statement = connection.prepareStatement(query);
                statement.setString(1, item.getBookName());
                statement.setInt(2, item.getId());
                statement.addBatch(query);

                query = property.getProperty("itemsdao.updateAuthorName");
                statement = connection.prepareStatement(query);
                statement.setString(1, item.getAuthorName());
                statement.setInt(2, item.getId());
                statement.addBatch(query);

                statement.executeBatch();
                connection.commit();
            } catch (ClassNotFoundException e) {
                Logging.log.error("Connection class does not exist", e);
            } catch (SQLException e) {
                Logging.log.error("Violating PK constraint", e);
            }
            //helper class th
            finally {
                DbUtil.close(connection);
                DbUtil.closePreparedStatement(statement);
            }
        }


  • Groovy and XML: Not able to insert processing instruction

    - by rhellem
    Scenario: I need to update some attributes in an existing XML file. The file contains an XSL processing instruction, so when the XML is parsed and updated I need to add the instruction back before writing it to a file again. The problem is that, whatever I do, I'm not able to insert the processing instruction. Based on the Java example found at rgagnon.com I have created the code below.
    Example code:
        import groovy.xml.*

        def xml = '''|<something>
                     | <Settings>
                     | </Settings>
                     |</something>'''.stripMargin()

        def document = DOMBuilder.parse( new StringReader( xml ) )
        def pi = document.createProcessingInstruction('xml-stylesheet', 'type="text/xsl" href="Bp8DefaultView.xsl"');
        document.insertBefore(pi, document.documentElement)
        println document.documentElement
    This creates the output:
        <?xml version="1.0" encoding="UTF-8"?>
        <something>
          <Settings>
          </Settings>
        </something>
    What I want:
        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="Bp8DefaultView.xsl"?>
        <something>
          <Settings>
          </Settings>
        </something>


  • Assistance with CC Processing script

    - by JM4
    I am currently implementing a credit card processing script, mostly as provided by the merchant gateway. The code calls functions within a class and returns a string based on the response. The end PHP code I am using (details removed, of course) with example information is:
        <?php
        $gw = new gwapi;
        $gw->setLogin("username", "password");
        $gw->setBilling("John","Smith","Acme, Inc.","888","Suite 200", "Beverly Hills",
                        "CA","77777","US","555-555-5555","555-555-5556","[email protected]",
                        "www.example.com"); // "CA","90210","US","[email protected]");
        $gw->setOrder("1234","Big Order",1, 2, "PO1234","65.192.14.10");
        $r = $gw->doSale("1.00","4111111111111111","1010");
        print $gw->responses['responsetext'];
        ?>
    where setLogin allows me to log in, setBilling takes the sample consumer information, setOrder takes the order ID and description, and doSale takes the amount charged, CC number and expiration date. When all the variables are validated and sent off for processing, a string is returned in the following format:
        response=1&responsetext=SUCCESS&authcode=123456&transactionid=23456&avsresponse=M&orderid=&type=sale&response_code=100
    where:
    - response = transaction approved or declined
    - responsetext = textual response
    - authcode = transaction authorization code
    - transactionid = payment gateway transaction id
    - avsresponse = AVS response code
    - orderid = original order id passed in the transaction request
    - response_code = numeric mapping of the processor response
    I am trying to solve for the following:
    1. How do I take the data which is passed back and display it appropriately on the page? If the transaction failed, the AVS code doesn't match my liking, or something else is wrong, an error is displayed to the consumer; if the transaction processed, they are taken to a completion page and the transaction ID is sent in SESSION as output to the consumer.
    2. If the response_code value matches a table of values, certain actions are taken, i.e. if code = 100, take the customer to the success page; if code = 300, print a specific error on the original page for the customer, etc.
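    On the first point, the returned string is in standard key=value&key=value form, so it can be parsed like a query string and the response_code mapped to an action. A language-agnostic illustration in Python (the field names come from the example string above; the 100/300 code-to-action mapping is the one described in point 2 and would need to be completed from the gateway's documentation):

        import urlparse   # Python 2; use urllib.parse in Python 3

        raw = ("response=1&responsetext=SUCCESS&authcode=123456&transactionid=23456"
               "&avsresponse=M&orderid=&type=sale&response_code=100")

        fields = dict(urlparse.parse_qsl(raw, keep_blank_values=True))

        code = fields["response_code"]
        if code == "100":
            # approved: keep the transaction id for the session, show the completion page
            print "approved, transaction %s" % fields["transactionid"]
        elif code == "300":
            # declined/error: redisplay the payment page with a specific message
            print "declined: %s" % fields["responsetext"]
        else:
            print "unexpected response code %s" % code

    The same split-on-& / split-on-= parsing, the AVS check and the session write would live in the PHP page that receives $gw->responses; the sketch only shows the branching idea.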

