Search Results

Search found 7538 results on 302 pages for 'parallel processing'.


  • Dynamically refresh JTextArea as processing occurs?

    - by digiarnie
    I am trying to create a very simple Swing UI that logs information onto the screen via a JTextArea as processing occurs in the background. When the user clicks a button, I want each call to: textArea.append(someString + "\n"); to immediately show up in the UI. At the moment, the JTextArea does not show all log information until the processing has completed after clicking the button. How can I get it to refresh dynamically?
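    The usual cause is that the processing runs on the Swing Event Dispatch Thread, so no repaints happen until it returns. A minimal sketch of the standard fix, assuming the work can be moved into a SwingWorker (the class name and the fake work loop are illustrative):

    ```java
    import java.util.List;
    import javax.swing.JTextArea;
    import javax.swing.SwingWorker;

    // Background worker that publishes log lines as the work progresses.
    class LoggingWorker extends SwingWorker<Void, String> {
        private final JTextArea textArea;

        LoggingWorker(JTextArea textArea) {
            this.textArea = textArea;
        }

        @Override
        protected Void doInBackground() throws Exception {
            for (int i = 0; i < 10; i++) {
                Thread.sleep(500);               // stand-in for the real processing
                publish("step " + i + " done");  // hands the line to process()
            }
            return null;
        }

        @Override
        protected void process(List<String> lines) {
            // Runs on the Event Dispatch Thread, so the UI updates immediately.
            for (String line : lines) {
                textArea.append(line + "\n");
            }
        }
    }
    ```

    Kicking it off from the button's ActionListener with new LoggingWorker(textArea).execute() lets each appended line show up while the background work is still running.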

    Read the article

  • cmake and parallel building with "make -jN"

    - by Roman D
    Hi all, I'm trying to set up a parallel CMake-based build for my source tree, but when I issue

        $ cmake .
        $ make -j2

    I get a "jobserver unavailable: using -j1. Add '+' to parent make rule" warning. Does anyone have an idea if it is possible to fix it somehow?
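    That warning usually means some rule starts a sub-make without access to GNU make's jobserver: only recipes invoked via $(MAKE), or prefixed with '+', inherit the -jN job slots. A minimal illustrative Makefile showing the distinction (not taken from the CMake-generated makefiles; recipe lines need hard tabs):

    ```make
    # BAD: a literal 'make' is not recognized as a sub-make, so it cannot
    # reach the jobserver and falls back to -j1 with the warning above.
    subdir-bad:
    	make -C subdir

    # GOOD: $(MAKE) (or a '+' prefix on the recipe line) passes the
    # jobserver along, so the sub-make inherits the parent's -j2 slots.
    subdir-good:
    	$(MAKE) -C subdir
    ```

    In CMake projects this often comes from a custom target or external-project step that shells out to make directly.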

    Read the article

  • Extending Eclipse product (parallel developments on a set of codes)

    - by zeroin23
    A spin-off of an existing Eclipse product is required for customization for a client (hence the parallel product development). The intention was to use an Eclipse fragment, but "Fragments are additive, they cannot override content found in the host." How can we maintain a single codebase in SVN, yet allow customization by overriding some classes? The current solution is a global flag to indicate which product it is, with "if" / "else" littered everywhere in the code...

    Read the article

  • Definition of Connect, Processing, Waiting in apache bench.

    - by rpatel
    When I run apache bench I get results like:

        Command: abs.exe -v 3 -n 10 -c 1 https://mysite

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      203  213    8.1    219    219
        Processing:    78  177   88.1    172    359
        Waiting:       78  169   84.6    156    344
        Total:        281  389   86.7    391    563

    I can't seem to find the definitions of Connect, Processing and Waiting. What do those numbers mean?

    Read the article

  • parallel sorting methods

    - by davit-datuashvili
    In the book Algorithms in C++ by Robert Sedgewick there is this kind of problem: how many parallel steps would be required to sort n records that are distributed across some k disks (say k = 1000, or any value), using some m processors (m can be 100, or an arbitrary number)? My questions: what should we do in such a case? What are the methods for solving this kind of problem? And what is the answer in this case?

    Read the article

  • Look up values in a BDB for several files in parallel

    - by biznez
    What is the most efficient way to look up values in a BDB for several files in parallel? If I had a Perl script which did this for one file at a time, would forking/running the process in background with the ampersand in Linux work? How might Hadoop be used to solve this problem? Would threading be another solution?
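    Backgrounding with the ampersand would work, but doing the same thing inside the script with fork() gives control over how many lookups run and when they finish. A minimal sketch, one child per file, each opening its own BDB handle (the per-file worker script lookup_one.pl is hypothetical):

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    # One child process per input file; BDB handles are never shared
    # across fork(), which sidesteps most concurrency pitfalls.
    my @files = @ARGV;
    my @pids;
    for my $file (@files) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: run the single-file lookup script with its own BDB handle.
            exec($^X, 'lookup_one.pl', $file) or die "exec failed: $!";
        }
        push @pids, $pid;
    }
    waitpid($_, 0) for @pids;   # reap every child before exiting
    ```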

    Read the article

  • How to get parallel behavior in JavaScript?

    - by Biswanath
    More or less, I want to execute two functions in parallel. One way, as I see it, is doing it through the setTimeout function. I have not completely gone through the Reactive Extensions; although they look promising, they may be overkill for my needs. Is there any framework which supports parallelism? My use case is trivial, but I would like to know if anybody has heavily needed parallelism in JavaScript. Thanks, Biswanath.
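    A point worth noting: browser JavaScript is single-threaded, so setTimeout only interleaves work on the event loop rather than running it in parallel; Web Workers are the route to actual parallel execution. A minimal sketch of the setTimeout interleaving (function bodies illustrative):

    ```javascript
    // Neither call blocks the other: each is queued as its own event-loop task.
    function taskA() { console.log('A done'); }
    function taskB() { console.log('B done'); }

    setTimeout(taskA, 0);
    setTimeout(taskB, 0);
    console.log('both scheduled');  // prints first; A and B run afterwards
    ```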

    Read the article

  • twisted DeferredList parallel Image Download

    - by bell007
    The code is here: http://www.dpaste.de/Ij0S/
    1. If there is an error (a networking error, or "Unhandled error in Deferred"), the code stops. I need this code to keep running until all URL requests have finished. (Maybe the parallel function does not work?)
    2. When I write an image to the local filesystem, if an error occurs, the image may be incomplete.
    Thanks!
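    A minimal sketch of the keep-going behavior, assuming the older twisted.web.client.getPage API of that era (URLs and filenames illustrative): consumeErrors=True plus a per-download errback makes the DeferredList wait for every request instead of dying on the first failure, and writing the body only after it has fully arrived avoids half-written images:

    ```python
    # Python 2 era sketch, matching older Twisted's getPage.
    from twisted.internet import defer, reactor
    from twisted.web.client import getPage

    urls = ["http://example.com/a.jpg", "http://example.com/b.jpg"]  # illustrative

    def save_image(data, path):
        # Write the whole body at once, so a failed download never
        # leaves a half-written file behind.
        f = open(path, "wb")
        f.write(data)
        f.close()

    def report_failure(failure, url):
        print "download failed:", url, failure.getErrorMessage()

    ds = []
    for i, url in enumerate(urls):
        d = getPage(url)
        d.addCallback(save_image, "image%d.jpg" % i)
        d.addErrback(report_failure, url)
        ds.append(d)

    # consumeErrors=True: failures are collected in the result list
    # instead of aborting the whole DeferredList.
    defer.DeferredList(ds, consumeErrors=True).addCallback(lambda _: reactor.stop())
    reactor.run()
    ```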

    Read the article

  • Very long (>300s) request processing time on Apache Server serving static content from particular IP

    - by Ron Bieber
    We are running an Apache 2.2 server for a very large web site. Over the past few months we have been having some users reporting slow response times, while others (including our resources, both on the internal network and our home networks) do not see any degradation in performance. After a ton of investigation, we finally found a "Deny from none" statement in our configuration that was causing reverse DNS lookups (which were timing out) that solved the bulk of our issues, but we still have some customers that we are seeing in the Apache logs (using %D in the log format) with request processing times of 300s for images, css, javascript and other static content. We've checked all Deny / Allow statements for reoccurrence of "none", as well as all other things we know of that would cause reverse DNS lookups (such as using "REMOTE_HOST" in rewrite rules, using %a instead of %h in our log format configuration) as well as verified that HostnameLookups is set to "Off". As an aside, we've also validated that reverse DNS lookups for folks having this problem do not time out - so I'm fairly certain DNS is not an issue in this case. I've run out of ideas. Are there any Apache configuration scenarios that someone can point me to that I might be missing that would cause request times for static content to take so long only for certain users? Thank you in advance.
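    For reference, a minimal sketch of the misconfiguration described above (paths illustrative; Apache 2.2 Order/Allow/Deny syntax). "none" is not a recognized keyword, so Apache treats it as a partial hostname, and matching against a hostname forces a double reverse DNS lookup on every request:

    ```apache
    <Directory "/var/www/static">
        Order allow,deny
        Allow from all
        # Deny from none   <- not a keyword; treated as a hostname, forcing
        #                      reverse DNS per request. Remove it.
    </Directory>

    # Belt and braces: never resolve client hostnames for logging.
    HostnameLookups Off
    ```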

    Read the article

  • .NET not processing an XML file in IIS

    - by Stuart McIntosh
    We have 2 servers: one already configured with .NET which works fine, and a new one which appears to be configured the same, but when I open an XML page in Internet Explorer it complains about the <% tag. We have IIS on Win Server 2003 SP2. The website is configured with .NET 1.1.4322. In ISAPI extensions we have set the .XML extension to use c:\windows\microsoft.net\framework\v1.1.4322\aspnet_isapi.dll. But the page:

        <property name="documentmaxage" value="0"/>
        <property name="documentmaxstale" value="0"/>
        <var name="m_Prompt_Path" />
        <form id="InitVoiceXmlDoc">
            <block>
                <assign name="m_Prompt_Path" expr="&quot;<% Response.Write(Request.QueryString["m_Prompt_Path"]); %>&quot;"/>
            </block>
        </form>

    gives the error:

        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error and
        then click the Refresh button, or try again later.
        The character '<' cannot be used in an attribute value. Error processing
        resource 'http://localhost:11119/fails.xml'. Lin... &quo...

    We have the same config on another server which works fine. So are there other options, apart from the ISAPI extensions, that I need to look at? If I suffix the page .aspx, of course, it works fine.

    Read the article

  • Node.js vs PHP processing speed

    - by Cody Craven
    I've been looking into Node.js recently and wanted to see a true comparison of processing speed for PHP vs Node.js. In most of the comparisons I had seen, Node trounced Apache/PHP setups handily. However, all of the tests were small 'hello worlds' that would not accurately reflect any webpage's markup. So I decided to create a basic HTML page with 10,000 hello world paragraph elements. In these tests Node with Cluster was beaten to a pulp by PHP on Nginx utilizing PHP-FPM. So I'm curious if I am misusing Node somehow or if Node is really just this bad at processing power. Note that my results were equivalent outputting "Hello world\n" with text/plain as the HTML, but I only included the HTML as it's closer to the use case I was investigating.

    My testing box:
    - Core i7-2600 Intel CPU (4 cores, 8 threads)
    - 8GB DDR3 RAM
    - Fedora 16 64bit
    - Node.js v0.6.13
    - Nginx v1.0.13
    - PHP v5.3.10 (with PHP-FPM)

    Node.js script (adapted from Node.js's documentation at http://nodejs.org/docs/latest/api/cluster.html):

        var cluster = require('cluster');
        var http = require('http');
        var numCPUs = require('os').cpus().length;

        if (cluster.isMaster) {
            // Fork workers.
            for (var i = 0; i < numCPUs; i++) {
                cluster.fork();
            }
            cluster.on('death', function (worker) {
                console.log('worker ' + worker.pid + ' died');
            });
        } else {
            // Worker processes have an HTTP server.
            http.Server(function (req, res) {
                res.writeHead(200, {'Content-Type': 'text/html'});
                res.write('<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n');
                for (var i = 0; i < 10000; i++) {
                    res.write('<p>Hello world</p>\n');
                }
                res.end('</body>\n</html>');
            }).listen(80);
        }

    PHP script:

        <?php
        echo "<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n";
        for ($i = 0; $i < 10000; $i++) {
            echo "<p>Hello world</p>\n";
        }
        echo "</body>\n</html>";

    My results

    Node.js ($ ab -n 500 -c 20 http://speedtest.dev/):

        Document Path:          /
        Document Length:        190070 bytes
        Concurrency Level:      20
        Time taken for tests:   14.603 seconds
        Complete requests:      500
        Failed requests:        0
        Write errors:           0
        Total transferred:      95066500 bytes
        HTML transferred:       95035000 bytes
        Requests per second:    34.24 [#/sec] (mean)
        Time per request:       584.123 [ms] (mean)
        Time per request:       29.206 [ms] (mean, across all concurrent requests)
        Transfer rate:          6357.45 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.2      0       2
        Processing:    94  547 405.4    424    2516
        Waiting:        0  331 399.3    216    2284
        Total:         95  547 405.4    424    2516

        Percentage of the requests served within a certain time (ms)
          50%    424
          66%    607
          75%    733
          80%    813
          90%   1084
          95%   1325
          98%   1843
          99%   2062
         100%   2516 (longest request)

    PHP/Nginx ($ ab -n 500 -c 20 http://speedtest.dev/test.php):

        Server Software:        nginx/1.0.13
        Document Path:          /test.php
        Document Length:        190070 bytes
        Concurrency Level:      20
        Time taken for tests:   0.130 seconds
        Complete requests:      500
        Failed requests:        0
        Write errors:           0
        Total transferred:      95109000 bytes
        HTML transferred:       95035000 bytes
        Requests per second:    3849.11 [#/sec] (mean)
        Time per request:       5.196 [ms] (mean)
        Time per request:       0.260 [ms] (mean, across all concurrent requests)
        Transfer rate:          715010.65 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.2      0       1
        Processing:     3    5   0.7      5       7
        Waiting:        1    4   0.7      4       7
        Total:          3    5   0.7      5       7

        Percentage of the requests served within a certain time (ms)
          50%      5
          66%      5
          75%      5
          80%      6
          90%      6
          95%      6
          98%      6
          99%      6
         100%      7 (longest request)

    Additional details: again, what I'm looking for is to find out if I'm doing something wrong with Node.js or if it is really just that slow compared to PHP on Nginx with FPM. I certainly think Node has a real niche that it could fit well; however, these test results (which I really hope I made a mistake with, as I like the idea of Node) lead me to believe that it is a horrible choice for even a modest processing load when compared to PHP (let alone the JVM or various other fast solutions). As a final note, I also tried running an Apache Bench test against Node with $ ab -n 20 -c 20 http://speedtest.dev/ and consistently received a total test time of greater than 0.900 seconds.

    Read the article

  • Reminder: For a Complete View Of Your Concurrent Processing Take A Look At The CP Analyzer!

    - by LuciaC
    For a complete view of your Concurrent Processing, take a look at the CP Analyzer! Doc ID 1411723.1 has the script to download and a 9-minute video. The Concurrent Processing Analyzer is a self-service health-check script which reviews the overall Concurrent Processing footprint, analyzes the current configurations and settings for the environment, and provides feedback and recommendations on best practices. This is a non-invasive script which provides recommended actions to be performed on the instance it was run on. For production instances, always apply any changes to a recent clone first to ensure an expected outcome.
    - E-Business Applications Concurrent Processing Analyzer Overview
    - E-Business Applications Concurrent Request Analysis
    - E-Business Applications Concurrent Manager Analysis
    - Identifies Concurrent System setup and configurations
    - Identifies and recommends Concurrent best practices
    - An easy-to-add tool for regular Concurrent maintenance
    - Execute the analysis anytime to compare trends with past outputs
    Feedback welcome!

    Read the article

  • Asynchronous daemon processing / ORM interaction with Django

    - by perrierism
    I'm looking for a way to do asynchronous data processing with a daemon that uses the Django ORM. However, the ORM isn't thread-safe; it's not safe to try to retrieve / modify Django objects from within threads. So I'm wondering what the correct way to achieve asynchrony is. Basically what I need to accomplish is taking a list of users in the db, querying a third-party API, and then making updates to user-profile rows for those users, as a daemon or background process. Doing this in series per user is easy, but it takes too long to be at all scalable. If the daemon is retrieving and updating the users through the ORM, how do I achieve processing 10-20 users at a time? I would use a standard threading / queue system for this, but you can't thread interactions like models.User.objects.get(id=foo)... Django itself is an asynchronous processing system which makes asynchronous ORM calls(?) for each request, so there should be a way to do it? I haven't found anything in the documentation so far. Cheers
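    A minimal sketch of one way around this, using a process pool instead of threads so no ORM objects are shared (the settings module, model, and profile update are illustrative; depending on Django version you may also need django.setup() in each worker and to avoid inheriting the parent's open DB connection across fork):

    ```python
    import multiprocessing
    import os

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # hypothetical project

    def update_user(user_id):
        # Import inside the worker so each process binds its own DB connection.
        from django.contrib.auth.models import User
        user = User.objects.get(id=user_id)
        # ... query the third-party API and update the user's profile row here ...
        return user_id

    if __name__ == "__main__":
        from django.contrib.auth.models import User
        ids = list(User.objects.values_list("id", flat=True))
        pool = multiprocessing.Pool(processes=10)   # 10-20 users in flight at a time
        for done in pool.imap_unordered(update_user, ids):
            print("updated user", done)
        pool.close()
        pool.join()
    ```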

    Read the article

  • C# Process Binary File, Multi-Thread Processing

    - by washtik
    I have the following code that processes a binary file. I want to split the processing workload by using threads, assigning each line of the binary file to threads in the ThreadPool. Processing time for each line is only small, but when dealing with files that might contain hundreds of lines, it makes sense to split the workload. My question is regarding the BinaryReader and thread safety. First of all, is what I am doing below acceptable? I have a feeling it would be better to pass only the binary data for each line to the PROCESS_Binary_Return_lineData method. Please note the code below is conceptual; I'm looking for a bit of guidance on this, as my knowledge of multi-threading is in its infancy. Perhaps there is a better way to achieve the same result, i.e. to split the processing of each binary line.

        var dic = new Dictionary<DateTime, Data>();
        var resetEvent = new ManualResetEvent(false);
        using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
        {
            var lByte = b.BaseStream.Length;
            var toProcess = 0;
            while (lByte >= DATALENGTH)
            {
                b.BaseStream.Position = lByte;
                lByte = lByte - AB_DATALENGTH;
                ThreadPool.QueueUserWorkItem(delegate
                {
                    Interlocked.Increment(ref toProcess);
                    var lineData = PROCESS_Binary_Return_lineData(b);
                    lock (dic)
                    {
                        if (!dic.ContainsKey(lineData.DateTime))
                        {
                            dic.Add(lineData.DateTime, lineData);
                        }
                    }
                    if (Interlocked.Decrement(ref toProcess) == 0)
                        resetEvent.Set();
                }, null);
            }
        }
        resetEvent.WaitOne();
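    A sketch of the variant the question itself suggests: keep the single BinaryReader on the reading thread and queue only each line's bytes, assuming a hypothetical overload of PROCESS_Binary_Return_lineData that accepts a byte[] (the other names are taken from the question):

    ```csharp
    // Inside the while loop: read the line's bytes sequentially, then hand
    // the buffer to the pool. No thread but this one ever touches the reader.
    b.BaseStream.Position = lByte;
    byte[] lineBytes = b.ReadBytes(DATALENGTH);   // DATALENGTH as in the question
    Interlocked.Increment(ref toProcess);
    ThreadPool.QueueUserWorkItem(state =>
    {
        var lineData = PROCESS_Binary_Return_lineData((byte[])state); // hypothetical byte[] overload
        lock (dic)
        {
            if (!dic.ContainsKey(lineData.DateTime))
                dic.Add(lineData.DateTime, lineData);
        }
        if (Interlocked.Decrement(ref toProcess) == 0)
            resetEvent.Set();
    }, lineBytes);
    ```

    Incrementing toProcess before queuing, rather than inside the delegate, also keeps the counter from reaching zero while items are still waiting to start.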

    Read the article

  • Send files using Content-Disposition: attachment in parallel

    - by Manos Dilaverakis
    I have a PHP page that sends a file to the browser depending on the request data it receives: getfile.php?get=something sends file A, getfile.php?get=somethingelse sends file B, and so on and so forth. This is done like so:

        header('Content-Disposition: attachment; filename=' . urlencode($filename));
        readfile($fileURL);

    It works, except it can only send one file at a time. Any other files requested are sent in linear fashion: one starts as soon as another finishes. How can I get this to send files in parallel if the user requests another file while one is downloading?
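    One common cause of serialized downloads is PHP's session locking: two requests sharing the same session block each other until the first script finishes. A minimal sketch of the usual remedy, assuming getfile.php uses sessions at all (if it never calls session_start(), the bottleneck is elsewhere):

    ```php
    <?php
    session_start();                       // whatever session work is needed
    $filename = basename($_GET['get']);    // resolve as in the original; validate in practice
    $fileURL  = '/path/to/files/' . $filename;   // illustrative

    session_write_close();                 // release the session lock *before* streaming

    header('Content-Disposition: attachment; filename=' . urlencode($filename));
    readfile($fileURL);                    // a second request can now run in parallel
    ```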

    Read the article

  • Machine Learning Algorithm for Parallel Nodes

    - by FreshCode
    I want to apply machine learning to a classification problem in a parallel environment. Several independent nodes, each with multiple on/off sensors, can communicate their sensor data with the goal of classifying an event defined by a heuristic, training data or both. Each peer will be measuring the same data from their unique perspective and will attempt to classify the result while taking into account that any neighbouring node (or its sensors or just the connection to the node) could be faulty. Nodes should function as equal peers and determine the most likely classification by communicating their results. Ultimately each node should make a decision based on their own sensor data and their peers' data. If it matters, false positives are OK (albeit undesirable) but false negatives are totally unacceptable. Given that each final classification will receive good or bad feedback, what would be an appropriate machine learning algorithm to approach this problem with if the nodes could communicate with each other to determine the most likely classification?

    Read the article

  • DB2 load partitioned data in parallel

    - by Erik Paulson
    I have a 10-node DB2 9.5 database, with raw data on each machine, i.e.:

        node1:/scratch/data/dataset.1
        node2:/scratch/data/dataset.2
        ...
        node10:/scratch/data/dataset.10

    There is no shared NFS mount; none of my machines could handle all of the datasets combined. Each line of a dataset file is a long string of text, column delimited. The first column is the key. I don't know the hash function that DB2 will use, so the dataset is not pre-partitioned. Short of renaming all of my files, is there any way to get DB2 to load them all in parallel? I'm trying to do something like:

        load from '/scratch/data/dataset' of del
            modified by coldel| fastparse
            messages /dev/null
            replace into TESTDB.data_table
            part_file_location '/scratch/data';

    but I have no idea how to suggest to DB2 that it should look for dataset.1 on the first node, etc.

    Read the article

  • Parallel CURL function Help .. php

    - by Webby
    Hello. Firstly, let me explain: the code below is just a tiny snippet of the code I'm using on the working site. Basically I'm hoping someone can help me rewrite just the function below to enable parallel cURL calls, so that it will fit nicely into the existing code without me having to rewrite the whole thing from the ground up like some of the samples I've been finding today. Any ideas?

        function get_data($url) {
            $ch = curl_init();
            $timeout = 5;
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
            $data = curl_exec($ch);
            curl_close($ch);
            return $data;
        }

    P.S. $url goes through a huge bunch of URLs in a loop already, so I'd hope to keep that intact. Help always appreciated and rewarded.
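    A minimal sketch of a curl_multi-based replacement, assuming the surrounding loop can hand over its URLs as an array (the function name is new; the per-handle options mirror get_data()):

    ```php
    <?php
    // Fetch many URLs concurrently; returns an array of url => body.
    function get_data_parallel(array $urls, $timeout = 5) {
        $mh = curl_multi_init();
        $handles = array();
        foreach ($urls as $url) {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
            curl_multi_add_handle($mh, $ch);
            $handles[$url] = $ch;
        }
        $running = null;
        do {
            curl_multi_exec($mh, $running);
            curl_multi_select($mh);   // wait for activity instead of busy-looping
        } while ($running > 0);
        $results = array();
        foreach ($handles as $url => $ch) {
            $results[$url] = curl_multi_getcontent($ch);
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);
        return $results;
    }
    ```

    Calling $pages = get_data_parallel($urls) returns an array keyed by URL, so the existing per-$url processing can loop over the result unchanged.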

    Read the article

  • class/lib for parallel downloads on iPhone

    - by lope
    Hi there! I need a class or lib that would allow me to run multiple parallel downloads. I haven't had luck finding any; do you guys know about something useful? I need to add new files to download dynamically, so something that only gets initialized with URLs at the beginning won't help me much. Any help would be greatly appreciated. I tried to create my own, but I had some problems with it, so I figured I would try to get something that is already done.

    Read the article

  • How to fix "Sub-process /usr/bin/dpkg returned an error code (1)" when installing and upgrading packages?

    - by soum
    I am getting this error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". I need help, as I cannot install or upgrade any packages on my Ubuntu 11.10 system. Here is the rest of the error:

        postinst called with unknown argument `triggered'
        dpkg: error processing mtools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager-pptp-gnome ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-pptp ...
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-gnome ...
        /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager ...
        No apport report written because MaxReports is reached already
        /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mscompress ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mscompress (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks
         mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp
         network-manager-gnome network-manager mscompress
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Unreadable and uneditable file (for further processing) after processing it through C++ (why?)

    - by mgj
    Hi. :) This might look like a very long question to you, I understand, but trust me, it's not that long. I am not able to identify why, after processing, this text cannot be read or edited. I tried using the ord() function in Python to check if the text contains any Unicode (non-ASCII) characters apart from the ASCII ones; I found quite a number of them. I have a strong feeling that this could be due to the original text itself (the input). Input file: just copy-paste it into a file "acle5v1.txt". The objective of the code below is to check for uppercase characters and convert them to lowercase, and also to remove all punctuation, so that these words can be taken for further processing (word alignment):

        #include <iostream>
        #include <fstream>
        #include <ctype.h>
        #include <cstring>
        using namespace std;

        ifstream fin2("acle5v1.txt");
        ofstream fin3("acle5v1_op.txt");
        ofstream fin4("chkcharadded.txt");
        ofstream fin5("chkcharntadded.txt");
        ofstream fin6("chkprintchar.txt");
        ofstream fin7("chknonasci.txt");
        ofstream fin8("nonprinchar.txt");

        int main()
        {
            char ch, ch1;
            fin2.seekg(0);
            fin3.seekp(0);
            int flag = 0;
            while (!fin2.eof())
            {
                ch1 = ch;
                fin2.get(ch);
                if (isprint(ch))   // if the character is printable
                    flag = 1;
                if (flag)
                {
                    fin6 << "Printable character:\t" << ch << "\t" << (int)ch << endl;
                    flag = 0;
                }
                else
                {
                    fin8 << "Non printable character caught:\t" << ch << "\t" << int(ch) << endl;
                }
                if (isalnum(ch) || ch == '@' || ch == ' ')   // checks for alphanumeric characters
                {
                    fin4 << "char added: " << ch << "\tits ascii value: " << int(ch) << endl;
                    if (isupper(ch))
                    {
                        fin3 << (char)tolower(ch);
                    }
                    else
                    {
                        fin3 << ch;
                    }
                }
                else if ((ch == '\t' || ch == '.' || ch == ',' || ch == '#' || ch == '?' || ch == '!' || ch == '"' || ch != ';' || ch != ':') && ch1 != ' ')
                {
                    fin3 << ' ';
                }
                else if ((ch == '\t' || ch == '.' || ch == ',' || ch == '#' || ch == '?' || ch == '!' || ch == '"' || ch != ';' || ch != ':') && ch1 == ' ')
                {
                    // fin3 << ' ';  (skipped: previous character already emitted a space)
                }
                else if (!(int(ch) >= 0 && int(ch) <= 127))
                {
                    fin5 << "Char of ascii within range not added: " << ch << "\tits ascii value: " << int(ch) << endl;
                }
                else
                {
                    fin7 << "Non ascii character caught (could be a -ve value also)\t" << ch << int(ch) << endl;
                }
            }
            return 0;
        }

    I have similar code to the above written in Python, which again gives me output that is not readable and not editable:

        #!/usr/bin/python
        # -*- coding: UTF-8 -*-
        import sys

        input_file = sys.argv[1]
        output_file = sys.argv[2]

        list1 = []
        f = open(input_file)
        for line in f:
            line = line.strip()
            line = line.replace('.', '')
            line = line.replace(',', '')
            line = line.replace('#', '')
            line = line.replace('?', '')
            line = line.replace('!', '')
            line = line.replace('"', '')
            line = line.replace('|', '')
            line = line.lower()
            list1.append(line)
        f.close()

        f1 = open(output_file, 'w')
        f1.write(' '.join(list1))
        f1.close()

    The file takes the input and output at runtime, as: python punc_remover.py acle5v1.txt acle5v1_op.txt. The output of this script is in "acle5v1_op.txt", and this output is needed for further processing. This particular file "acle5v1_op.txt" is the UNREADABLE and UNEDITABLE file that I am not able to use for further processing. I need this for word alignment in NLP. I tried reading this output with the following program:

        #include <iostream>
        #include <fstream>
        using namespace std;

        ifstream fin1("acle5v1_op.txt");
        ofstream fout1("chckread_acle5v1_op.txt");
        ofstream fout2("chcknotread_acle5v1_op.txt");

        int main()
        {
            char ch;
            int flag = 0;
            long int r = 0;
            long int nr = 0;
            while (!(fin1))   // note: this condition is only true while the stream is in a failed state
            {
                fin1.get(ch);
                if (ch)
                {
                    flag = 1;
                }
                if (flag)
                {
                    fout1 << ch;
                    flag = 0;
                    r++;
                }
                else
                {
                    fout2 << "Char not been able to be read from source file\n";
                    nr++;
                }
            }
            cout << "Number of characters able to be read: " << r;
            cout << endl << "Number of characters not been able to be read: " << nr;
            return 0;
        }

    which prints the character if it is readable, and if not, doesn't print it. I observed that the output of both files is blank, so I drew the conclusion that this file "acle5v1_op.txt" is UNREADABLE and UNEDITABLE. Could you please help me with how to deal with this problem? To tell you a bit about the statistics of the original input file "acle5v1.txt": it has around 3,441 lines and around 3 million characters in it. Keeping in mind the number of characters in the file, your editor might or might not manage to open it; I was able to open the file in the gedit of Fedora 10, which I am currently using. This is just to note that opening it with a particular editor was not actually an issue, at least in my case. Can I use scripting languages like Python and Perl to deal with this problem? If yes, how? Could you please be specific in that regard, as I am a novice at Perl and Python. Or could you please tell me how to solve this problem using C++ itself? Thank you. :) I am really looking forward to some help or guidance on how to go about this problem.

    Read the article

  • SQL SERVER – Speed Up! – Parallel Processes and Unparalleled Performance – TechEd 2012 India

    - by pinaldave
    TechEd India 2012 is just around the corner and I will be presenting two different sessions there. SQL Server performance tuning is a very challenging subject that requires expertise in database administration and database development. I have always enjoyed talking about the subject of SQL Server performance tuning. Just like doctors, I like to call every attempt of mine to improve the performance of SQL Server queries and database servers a "practice" too. I have been working with SQL Server for more than 8 years and I believe I have mastered many performance tuning concepts. However, performance tuning is not a simple subject: there are occasions when I feel stumped, occasions when I am not sure what the next step should be. When I face a situation where I cannot figure things out easily, it makes me most happy, because I clearly see it as a learning opportunity. I have been presenting at TechEd India for the last three years; this is my fourth opportunity to present a technical session on SQL Server. Just like every other year, I decided to present something different, something I have spent years learning. This time, I am going to present on parallel processes. It is widely believed that more CPUs will improve the performance of the server. That is true in many cases. However, there are cases when limiting CPU usage has improved the overall health of the server. I will be presenting on the subject of parallel processes and their effects. I have spent more than a year working on this subject alone. After working on various queries on multi-CPU systems, I have personally learned a few things. In the coming TechEd session, I am going to share my experience with parallel processes and performance tuning.
    Session Details
    Title: Speed Up! – Parallel Processes and Unparalleled Performance
    Abstract: "More CPU, More Performance" – a very common understanding is that usage of multiple CPUs can improve the performance of the query. To get maximum performance out of any query, one has to master various aspects of the parallel processes. In this deep-dive session, we will explore this complex subject with a very simple interactive demo. An attendee will walk away with a proper understanding of CX_PACKET wait types, MAXDOP, the parallelism threshold and various other concepts.
    Date and Time: March 23, 2012, 12:15 to 13:15
    Location: Hotel Lalit Ashok, Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India.
    Please submit your questions in the comments area and I will for sure be discussing them during my session. If I pick your question to discuss during my session, here is the gift I commit to right now: the SQL Server Interview Questions and Answers book.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
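    For readers who want to try out the concepts the abstract names before the session, a hedged sketch of the two most common knobs (the table is hypothetical; standard SQL Server syntax, not material from the session itself):

    ```sql
    -- Cap parallelism for one query with a MAXDOP hint.
    SELECT COUNT(*)
    FROM Sales.Orders          -- hypothetical table
    OPTION (MAXDOP 4);

    -- Raise the server-wide cost threshold so only expensive plans go parallel.
    EXEC sp_configure 'cost threshold for parallelism', 25;
    RECONFIGURE;
    ```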

    Read the article

  • Tellago is still hiring….

    - by gsusx
    Tellago's SOA practice is rapidly growing and we are still hiring. In that sense, we are looking for Connected Systems (WCF, BizTalk, WF) experts who are passionate about building game-changing solutions with the latest Microsoft technologies. You will be working alongside technology gurus like DonXml, Pablo Cibraro or Dwight Goins. If you are interested and not afraid of working with a bunch of crazy people ;) please drop me a line at jesus dot rodriguez at tellago dot com. Hope to hear from you!

    Read the article
