Search Results

Search found 24347 results on 974 pages for 'cross process'.

  • Process results of conditional split in SSIS

    - by Robert
    I have a Data Flow Task and am connecting to a database via an OLE DB Source component to extract data. This data feeds into a Conditional Split component to separate the data based on a simple expression. After the evaluation of this expression, the data will end up in either of two locations: LocationA or LocationB. Alright, I have that all set up and working properly. Once the data is separated into these two locations, additional processing is to be done on the records. Here's where I am stuck: I need the processing of records in LocationA to occur before the processing of records in LocationB. Is there a way to set the precedence of which tasks occur before others? If not, what is the best way to handle this? I was thinking I may need to write the data in LocationA and LocationB back out to the database and create a new data flow task in the control flow to control the order in which these records are dealt with. Any help is greatly appreciated!

    Read the article

  • sharing build artifacts between jobs in hudson

    - by programming panda
    Hi, I'm trying to set up our build process in Hudson. Job 1 will be a super fast (hopefully) continuous integration build job that will be built frequently. Job 2 will be responsible for running a comprehensive test suite, at a regular interval or triggered manually. Job 3 will be responsible for running analysis tools across the codebase (much like Job 2). I tried using the "Advanced Project Options > Use custom workspace" feature so that code compiled in Job 1 can be used in Jobs 2 and 3. However, it seems that all build artifacts remain inside that Job 1 workspace. Am I doing this right? Is there a better way of doing this? I guess I'm looking for something similar to a build pipeline setup, so that things can be shared and the appropriate jobs can be executed in stages. (I also considered using 'batch tasks', but it seems like those can't be scheduled, only triggered manually?) Any suggestions are welcome. Thanks!

    Read the article

  • Does anyone else get worn out using Scrum, finishing sprint after sprint?

    - by Simucal
    I'm with a pretty small startup and we started using a form of a Scrum/Agile development cycle. In many ways I enjoy Scrum. We have relatively short sprints (2 weeks) and I like the Burndown Chart to track the team's progress. I also like the Feature Board, so I always know what I should be doing next. It feels good taking a feature's card down from the board, completing it, and then putting it in the burn-down pile. However, we are now entering our 18th sprint release cycle and I'm starting to feel a little burnt out. It isn't that I don't like my job or my co-workers; it is just that these sprints are... well, sprints. From start to finish I literally feel like I'm racing against the clock to maintain our development velocity. When we are done with the sprint we spend one day planning the next sprint's feature set and estimates, and then off we go again. For people who work in a mature Agile/Scrum development process, is this normal? Or are we missing something? Is there normally time in a Scrum environment that is unassigned/untracked, to get some minor things done and to clear your head?

    Read the article

  • Java: Using Command line arguments to process the names of files

    - by Kat
    I'm writing a program that will determine the number of lines, characters, and average word length for a text file. For the program, the specifications say that the file or files will be entered as command line arguments and that we should make a TestStatistic object for each file entered. I don't understand how to write the code for making the TestStatistic objects if the user enters more than one file. (A minimal sketch follows this entry.)

    Read the article
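
    One way to handle any number of files is to loop over the args array, creating one object per entry. Below is a minimal sketch: the class name TestStatistic comes from the question, but its constructor, fields, and report() method, and the simplified counting rules, are assumptions made up here for illustration.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        // Hypothetical stand-in for the assignment's TestStatistic class: one instance per file.
        class TestStatistic {
            private final String fileName;
            private int lines, chars, words;

            TestStatistic(String fileName) throws IOException {
                this.fileName = fileName;
                // Counting details are simplified; adapt to the assignment's exact rules.
                for (String line : Files.readAllLines(Paths.get(fileName))) {
                    lines++;
                    chars += line.length();
                    for (String word : line.trim().split("\\s+")) {
                        if (!word.isEmpty()) words++;
                    }
                }
            }

            void report() {
                double avgWordLength = (words == 0) ? 0 : (double) chars / words;
                System.out.printf("%s: %d lines, %d characters, average word length %.2f%n",
                        fileName, lines, chars, avgWordLength);
            }
        }

        public class TextStats {
            public static void main(String[] args) throws IOException {
                // Each command line argument is one file name, so one TestStatistic per argument.
                for (String fileName : args) {
                    new TestStatistic(fileName).report();
                }
            }
        }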

  • Learning HTML - The Process.

    - by Gabe
    So, as recommended, I did the W3Schools HTML and XML tutorials this weekend. I understand the basics. Now should I look to get more depth in HTML, or go straight into learning CSS (and try to keep learning HTML at the same time)? If the first, where should I go for more advanced HTML tutorials?

    Read the article

  • STAThread and Process output capture in c#

    - by alex
    Hi: This is a strange problem I encountered. I have a Windows application written in C# to do testing. It has an MDI parent form that hosts a few child forms. One of the forms launches test scripts by creating processes and captures the scripts' output to a text box. Another form opens a serial port and monitors the status of the device I am working on (like a shell). If I run both of them together, the output of the script only appears in the text box after the test is done. However, if I don't open the serial port form, the output of the script is captured in real time. Does anyone know what's causing the problem? I notice the onDataReceived event handler for the serial port form has a [STAThread] header on it. Will this cause the serial port thread to have higher priority than other processes? Thanks in advance.

    Read the article

  • Far jump in ntdll.dll's internal ZwCreateUserProcess

    - by user49164
    I'm trying to understand how the Windows API creates processes so I can create a program to determine where invalid exes fail. I have a program that calls kernel32.CreateProcessA. Following along in OllyDbg, this calls kernel32.CreateProcessInternalA, which calls kernel32.CreateProcessInternalW, which calls ntdll.ZwCreateUserProcess. This function goes:

        mov  eax, 0xAA
        xor  ecx, ecx
        lea  edx, dword ptr [esp+4]
        call dword ptr fs:[0xC0]
        add  esp, 4
        retn 0x2C

    So I follow the call to fs:[0xC0], which contains a single instruction:

        jmp far 0x33:0x74BE271E

    But when I step this instruction, Olly just comes back to ntdll.ZwCreateUserProcess at the add esp, 4 right after the call (which is not at 0x74BE271E). I put a breakpoint at retn 0x2C, and I find that the new process was somehow created during the execution of add esp, 4. So I'm assuming there's some magic involved in the far jump. I tried to change the CS register to 0x33 and EIP to 0x74BE271E instead of actually executing the far jump, but that just gave me an access violation after a few instructions. What's going on here? I need to be able to delve deeper, beyond the abstraction of ZwCreateUserProcess, to figure out how exactly Windows creates processes.

    Read the article

  • In Java, send commands to another command-line program

    - by bradvido
    I am using Java on Windows XP and want to be able to send commands to another program such as telnet. I do not want to simply execute another program. I want to execute it, and then send it a sequence of commands once it's running. Here's my code of what I want to do, but it does not work. (If you uncomment and change the command to "cmd" it works as expected. Please help.)

        try {
            Runtime rt = Runtime.getRuntime();
            String command = "telnet";
            //command = "cmd";
            Process pr = rt.exec(command);
            BufferedReader processOutput =
                new BufferedReader(new InputStreamReader(pr.getInputStream()));
            BufferedWriter processInput =
                new BufferedWriter(new OutputStreamWriter(pr.getOutputStream()));
            String commandToSend = "open localhost\n";
            //commandToSend = "dir\n" + "exit\n";
            processInput.write(commandToSend);
            processInput.flush();
            int lineCounter = 0;
            while (true) {
                String line = processOutput.readLine();
                if (line == null) break;
                System.out.println(++lineCounter + ": " + line);
            }
            processInput.close();
            processOutput.close();
            pr.waitFor();
        } catch (Exception x) {
            x.printStackTrace();
        }

    (A variation using a separate reader thread is sketched after this entry.)

    Read the article
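
    One pattern worth trying (a sketch under assumptions, not a guaranteed fix): read the child's output on a separate thread so that writing to its stdin can never block on a full output pipe, and merge stderr into stdout with ProcessBuilder. Note that some console programs, telnet possibly among them, read the keyboard through the console API rather than stdin, in which case no amount of pipe redirection will reach them. The commands below are the same placeholders used in the question.

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.InputStreamReader;
        import java.io.OutputStreamWriter;

        public class ProcessDriver {
            public static void main(String[] args) throws Exception {
                ProcessBuilder pb = new ProcessBuilder("cmd");
                pb.redirectErrorStream(true);          // merge stderr into stdout
                Process pr = pb.start();

                // Drain output on its own thread so the writer below cannot deadlock.
                Thread reader = new Thread(() -> {
                    try (BufferedReader out = new BufferedReader(
                            new InputStreamReader(pr.getInputStream()))) {
                        String line;
                        int lineCounter = 0;
                        while ((line = out.readLine()) != null) {
                            System.out.println(++lineCounter + ": " + line);
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
                reader.start();

                try (BufferedWriter in = new BufferedWriter(
                        new OutputStreamWriter(pr.getOutputStream()))) {
                    in.write("dir\n");
                    in.write("exit\n");
                    in.flush();
                }   // closing stdin signals the child that no more commands are coming

                pr.waitFor();
                reader.join();
            }
        }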

  • Tools to help process Akamai data logs?

    - by dsldsl
    I'm digging through Akamai logs, downloading Excel sheets, and then manually joining them so that I can sort the data to find top videos and referrers. Are there any tools you know of to help with this kind of processing? I'm looking for something like Urchin used to be for Apache logs, but for Akamai logs. Thanks!

    Read the article

  • WOW64: get x64 %CommonProgramFiles% from 32 bit process

    - by peterchen
    Queries I tried: ExpandEnvironmentStrings("%COMMONPROGRAMFILES%"), GetSpecialPath(CSIDL_PROGRAM_FILES_COMMON). All resolve to (typically) C:\Program Files (x86)\Common Files from my 32 bit app. I need to check the version of a file installed (typically) under C:\Program Files\Common Files by a 64 bit application.

    Read the article

  • What is the best way to process XML sent to WCF 3.5

    - by CRM Junkie
    I have to develop a WCF application in 3.5. The input will be sent in the form of XML and the response will be sent in the form of XML as well. An ASP.NET application will be consuming the WCF service and sending/receiving data in XML format. Now, as per my understanding, when consuming WCF from an ASP.NET application, we just add a reference to the service, create an object of the service, pack all the necessary data (Data Members in WCF) into the input object (an object of the Data Contract), and call the necessary function. It happens that the ASP.NET application is being developed by a separate party and they are hell-bent on receiving and sending data in XML format. What I can perceive from this is that the WCF will take the XML string (a single string Data Member) as input and send out an XML string (again a single string Data Member) as output. I have created WCF applications earlier where requests and responses were sent out in XML/JSON format when consumed by jQuery ajax calls. In those cases, the XML tags were automatically mapped to the different Data Members defined. What approach should I take in this case? Should I just take a string as input (basically the XML string), or is there any way WCF/.NET 3.5 will automatically map the XML tags to the Data Members for requests and responses, so that I would not need to parse the XML string separately?

    Read the article

  • What is the general process of web hosting?

    - by ggfan
    I want to make my site public so people can use it. I am currently using a free PHP web hosting company that supports up to a certain amount. When sites say they offer unlimited uploads, data, etc. for around $10/month, is that all you need to run a big site? And how do I host a big site if it gets popular?

    Read the article

  • How do I process a nested list?

    - by ddbeck
    Suppose I have a bulleted list like this:

        * list item 1
        * list item 2 (a parent)
        ** list item 3 (a child of list item 2)
        ** list item 4 (a child of list item 2 as well)
        *** list item 5 (a child of list item 4 and a grand-child of list item 2)
        * list item 6

    I'd like to parse that into a nested list or some other data structure which makes the parent-child relationship between elements explicit (rather than depending on their contents and relative position). For example, here's a list of tuples containing each item and a list of its children (and so forth):

        [('list item 1',),
         ('list item 2', [('list item 3',),
                          ('list item 4', [('list item 5',)])]),
         ('list item 6',)]

    I've attempted to do this with plain Python and some experimentation with Pyparsing, but I'm not making progress. I'm left with two major questions: What's the strategy I need to employ to make this work? I know recursion is part of the solution, but I'm having a hard time making the connection between this and, say, a Fibonacci sequence. I'm certain I'm not the first person to have done this, but I don't know the terminology of the problem to make fruitful searches for more information on this topic. What problems are related to this, so that I can learn more about solving these kinds of problems in general? (One stack-based approach is sketched after this entry.)

    Read the article
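
    One common strategy is to read the list line by line, take the number of leading asterisks as the nesting depth, and keep a stack of the most recent item seen at each depth: each new item becomes a child of the item one level shallower. Below is a minimal sketch of that idea; it is written in Java (the question mentions Python, but the stack-based approach carries over directly) and assumes the depth only ever increases one level at a time.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        // One node per bullet: its text plus a list of child nodes.
        class Node {
            final String text;
            final List<Node> children = new ArrayList<>();
            Node(String text) { this.text = text; }
        }

        public class BulletParser {
            // Builds a forest from lines such as "** list item 3".
            static List<Node> parse(List<String> lines) {
                List<Node> roots = new ArrayList<>();
                List<Node> stack = new ArrayList<>();   // stack.get(d) = last node seen at depth d+1
                for (String line : lines) {
                    int depth = 0;
                    while (depth < line.length() && line.charAt(depth) == '*') depth++;
                    if (depth == 0) continue;           // not a bullet line
                    Node node = new Node(line.substring(depth).trim());
                    while (stack.size() >= depth) {     // pop anything at this depth or deeper
                        stack.remove(stack.size() - 1);
                    }
                    if (depth == 1) {
                        roots.add(node);
                    } else {
                        stack.get(depth - 2).children.add(node);   // attach to the parent one level up
                    }
                    stack.add(node);
                }
                return roots;
            }

            public static void main(String[] args) {
                List<Node> roots = parse(Arrays.asList(
                        "* list item 1",
                        "* list item 2 (a parent)",
                        "** list item 3",
                        "** list item 4",
                        "*** list item 5",
                        "* list item 6"));
                System.out.println(roots.get(1).children.size());   // prints 2: items 3 and 4
            }
        }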

  • Need an ASP.NET MVC long running process with user feedback

    - by Jason
    I've been trying to create a controller in my project for delivering what could turn out to be quite complex reports. As a result they can take a relatively long time and a progress bar would certainly help users to know that things are progressing. The report will be kicked off via an AJAX request, with the idea being that periodic JSON requests will get the status and update the progress bar. I've been experimenting with the AsyncController as that seems to be a nice way of running long processes without tying up resources, but it doesn't appear to give me any way of checking on the progress (and seems to block further JSON requests and I haven't discovered why yet). After that I've tried resorting to storing progress in a static variable on the controller and reading the status from that - but to be honest that all seems a bit hacky! All suggestions gratefully accepted!

    Read the article

  • Dropping all user tables/sequences in Oracle

    - by Ambience
    As part of our build process and evolving database, I'm trying to create a script which will remove all of the tables and sequences for a user. I don't want to recreate the user, as this would require more permissions than allowed. My script creates a procedure to drop the tables/sequences, executes the procedure, and then drops the procedure. I'm executing the file from sqlplus. drop.sql:

        create or replace procedure drop_all_cdi_tables is
          cur integer;
        begin
          cur := dbms_sql.OPEN_CURSOR();
          for t in (select table_name from user_tables) loop
            execute immediate 'drop table ' || t.table_name || ' cascade constraints';
          end loop;
          dbms_sql.close_cursor(cur);

          cur := dbms_sql.OPEN_CURSOR();
          for t in (select sequence_name from user_sequences) loop
            execute immediate 'drop sequence ' || t.sequence_name;
          end loop;
          dbms_sql.close_cursor(cur);
        end;
        /
        execute drop_all_cdi_tables;
        /
        drop procedure drop_all_cdi_tables;
        /

    Unfortunately, dropping the procedure causes a problem. There seems to be a race condition, and the procedure is dropped before it executes. E.g.:

        SQL*Plus: Release 11.1.0.7.0 - Production on Tue Mar 30 18:45:42 2010
        Copyright (c) 1982, 2008, Oracle. All rights reserved.
        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        Procedure created.
        PL/SQL procedure successfully completed.
        Procedure created.
        Procedure dropped.
        drop procedure drop_all_user_tables
        *
        ERROR at line 1:
        ORA-04043: object DROP_ALL_USER_TABLES does not exist

        SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Any ideas on how to get this working?

    Read the article

  • valgrind on server process

    - by Pigol
    Hi, I am new to valgrind. I know how to run valgrind on executable files from the command line. But how do you run valgrind on server processes like apache/mysqld/traffic server etc.? I want to run valgrind on Traffic Server (http://incubator.apache.org/projects/trafficserver.html) to detect some memory leaks taking place in the plugin I have written. Any suggestions? Thanks, pigol

    Read the article

  • Total stack sizes of threads in one process

    - by David
    I use pthread_attr_getstacksize() to get the default stack size of one thread, 8MB on my machine. But when I create 8 threads and allocate a very large stack size to them, say hundreds of MB each, the program crashes. So I guess ("number of threads" x "stack size per thread") must be less than some value (the virtual memory size)?

    Read the article

  • fastest (low latency) method for Inter Process Communication between Java and C/C++

    - by Bastien
    Hello, I have a Java app connecting through a TCP socket to a "server" developed in C/C++. Both the app and the server are running on the same machine, a Solaris box (but we're considering migrating to Linux eventually). The type of data exchanged is simple messages (login, login ACK, then the client asks for something, the server replies). Each message is around 300 bytes long. Currently we're using sockets, and all is OK; however, I'm looking for a faster way to exchange data (lower latency), using IPC methods. I've been researching the net and came up with references to the following technologies: shared memory, pipes, and queues. But I couldn't find a proper analysis of their respective performance, nor how to implement them in both Java and C/C++ (so that they can talk to each other), except maybe pipes, which I could imagine how to do. Can anyone comment on the performance and feasibility of each method in this context? Any pointer/link to useful implementation information? Thanks for your help. (A shared-memory sketch for the Java side follows this entry.)

    Read the article
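
    Of the options listed, shared memory usually gives the lowest latency, because once the region is mapped no kernel round trip is needed per message. Below is a minimal sketch of the Java side using a memory-mapped file; the file name, message layout, and polling scheme are assumptions made up for illustration (a real exchange also needs a proper signalling and memory-ordering story), and the C/C++ peer would mmap() the same path.

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.charset.StandardCharsets;

        // Java side of a shared-memory message slot; a C/C++ process mmap()s the same file.
        public class SharedMemoryWriter {
            public static void main(String[] args) throws Exception {
                final int SIZE = 4096;   // one page easily holds ~300-byte messages
                try (RandomAccessFile file = new RandomAccessFile("/tmp/ipc-demo", "rw");
                     FileChannel channel = file.getChannel()) {
                    MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, SIZE);

                    byte[] msg = "login".getBytes(StandardCharsets.US_ASCII);
                    // Assumed layout: payload at offset 4, then the length at offset 0.
                    // Writing the length last lets the reader poll offset 0 for a
                    // non-zero value as a crude "message ready" flag.
                    buf.position(4);
                    buf.put(msg);
                    buf.putInt(0, msg.length);
                }
            }
        }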

  • Mathematica - Import CSV and process columns?

    - by Casey
    I have a CSV file that is formatted like: 0.0023709,8.5752e-007,4.847e-008 and I would like to import it into Mathematica and then have each column separated into a list so I can do some math on the selected column. I know I can import the data with: Import["data.csv"] then I can separate the columns with this: StringSplit[data[[1, 1]], ","] which gives: {"0.0023709", "8.5752e-007", "4.847e-008"} The problem now is that I don't know how to get the data into individual lists and also Mathematica does not accept scientific notation in the form 8.5e-007. Any help in how to break the data into columns and format the scientific notation would be great. Thanks in advance.

    Read the article

  • Streaming local file from PHP while it's been written to by a CURL process

    - by Fahim
    I am creating a simple Proxy server for my website. Why I am not using mod_proxy and mod_cache is a different discussion. Here's the code:

        shell_exec("nohup curl --create-dirs -o {$write_path} {$source_url} > /dev/null 2> /dev/null & echo $!");
        sleep(1);

        $read_speed = 65.5; # 65.5 kb/s download rate
        $handle = fopen($write_path, "rb");

        $content_type = select_meta_item($headers, 'Content-Type');
        $file_size = select_meta_item($headers, 'Content-Length');
        send_headers($content_type, $file_size);
        flush();

        while (!feof($handle)) {
            echo fread($handle, round($read_speed * 1024));
            flush();
            sleep(1);
        }
        fclose($handle);

    Streaming an MP3 doesn't work using this method. Plays in Chrome, but not in Firefox. Initially I'll be using this to stream MP3 files through Long Tail's JW Player. If it all works out, I'll also be using this to send ZIP files.

    Read the article

  • How to tell which process is hogging my CPU when they don't add up to 100%?

    - by endolith
    Ubuntu's System Monitor applet shows 100% CPU usage continuously. If I click it, the Resources tab shows it at 100% continuously too. If I go to Processes, though, to find out which process is the culprit, there is nothing above 10%. If I run top there is nothing above 10%. The individual processes do not add up to 100%. I've tried killing lots of processes, but the overall usage continues to be 100%. How can I find out what's hogging the CPU? This is an unusual situation on a computer I use daily, which is never anywhere near 100% CPU unless I'm doing something that requires it (like loading 32 Firefox tabs), after which it goes back to a normal idle level. It's not a new install or anything. There is no reason the processor should be maxed out. I'm not sure when it started or whether I changed something that caused it to happen. Normally I would use top or System Monitor and find the process that had gone out of control, but I can't find anything with those tools this time. It persists after reboots and everything. And the processor is obviously hot, so it's not an erroneous reading.

    Update: I tried killing every process, one at a time, until the problem went away, and killing vino-server finally fixed it, even though that process never went above 5%. I had enabled Remote Desktop a few days ago (and have obviously now disabled it). But the question remains: how did a single process manage to use 100% CPU while top only showed that process at 5%? How do I identify culprits like this in the future? It looks like I'm not the only one who's had this problem: still a problem in both jaunty & karmic. Interestingly, both System Monitor and htop do not show the sum of individual processes being anywhere near 100% CPU.

    Read the article
