Search Results

Search found 81493 results on 3260 pages for 'file size'.

  • Trying to write a loop that uses an OutputStream to write to a text file.

    - by Steve McLain
    I'm not a Java programmer, I'm a VB programmer. I am doing this as part of an assignment; however, I'm not asking for help on something assignment-related. I'd like to figure out how to get the OutputStreamWriter to work properly in this instance. I just want to capture the values I'm generating and place them into a text document. The file is generated, but only one entry exists, not the 40 I'm expecting. I could do this in a heartbeat with VB, but Java feels very strange to me right now. Your help is appreciated. Thanks, Steve. Here's the code:

        public static void main(String[] args) {
            long start, end;
            double result, difference;
            try { // OutputStream code assistance from http://tutorials.jenkov.com/java-io/outputstreamwriter.html
                OutputStream outputStream = new FileOutputStream("c:\\Temp\\output1.txt");
                Writer out = new OutputStreamWriter(outputStream);
                for (int n = 1; n <= 20; n++) {
                    // Calculate the time for n^2.
                    start = System.nanoTime();
                    // Add code to call method to calculate n^2
                    result = mN2(n);
                    end = System.nanoTime();
                    difference = (end - start);
                    // Output results to a file
                    out.write("N^2 End time: " + end + " Difference: " + difference + "\n");
                    out.close();
                }
            } catch (IOException e) {
            }
            try {
                OutputStream outputStream = new FileOutputStream("c:\\Temp\\output1.txt");
                Writer out = new OutputStreamWriter(outputStream);
                for (int n = 1; n <= 20; n++) {
                    // Calculate the time for 2^n.
                    start = System.nanoTime();
                    // Add code to call method to calculate 2^n
                    result = m2N(n);
                    end = System.nanoTime();
                    difference = (end - start);
                    // Output results to a file
                    out.write("N^2 End time: " + end + " Difference: " + difference + "\n");
                    out.close();
                }
            } catch (IOException e) {
            }
        }

        // Calculate N^2
        public static double mN2(double n) {
            return n * n;
        }

        // Calculate 2N
        public static double m2N(double n) {
            return 2 * n;
        }
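
    A likely culprit, for what it's worth: out.close() sits inside the for loop, so the stream is closed after the first write and every later write throws an IOException that the empty catch block silently swallows; the second new FileOutputStream on the same path then truncates the file again. A minimal sketch of one way to restructure it (assuming Java 7+ for try-with-resources):

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;

        public class TimingDemo {
            public static void main(String[] args) {
                // Open the file once; try-with-resources closes it after BOTH loops.
                try (PrintWriter out = new PrintWriter(new FileWriter("c:\\Temp\\output1.txt"))) {
                    for (int n = 1; n <= 20; n++) {
                        long start = System.nanoTime();
                        double result = mN2(n);
                        long end = System.nanoTime();
                        out.println("N^2 End time: " + end + " Difference: " + (end - start));
                    }
                    for (int n = 1; n <= 20; n++) {
                        long start = System.nanoTime();
                        double result = m2N(n);   // note: this computes 2*n, not 2^n
                        long end = System.nanoTime();
                        out.println("2N  End time: " + end + " Difference: " + (end - start));
                    }
                } catch (IOException e) {
                    e.printStackTrace(); // never swallow exceptions silently
                }
            }

            public static double mN2(double n) { return n * n; }
            public static double m2N(double n) { return 2 * n; }
        }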

  • Perl: Edit hyperlinks in nested tags that aren't on separate lines

    - by user305801
    I have an interesting problem. I wrote the following Perl script to recursively loop through a directory and, in all HTML files, do the following for img/script/a tags: convert the entire URL to lowercase, and replace spaces and %20 with underscores. The script works great except when an image tag is wrapped with an anchor tag. Is there a way to modify the current script to also be able to manipulate the links for nested tags that are not on separate lines? Basically, if I have <a href="..."><img src="..."></a>, the script will only change the link in the anchor tag but skip the img tag.

        #!/usr/bin/perl
        use File::Find;

        $input = "/var/www/tecnew/";

        sub process {
            if (-T and m/.+\.(htm|html)/i) {
                #print "htm/html: $_\n";
                open(FILE, "+<$_") or die "couldn't open file $!\n";
                $out = '';
                while (<FILE>) {
                    $cur_line = $_;
                    if ($cur_line =~ m/<a.*>/i) {
                        print "cur_line (unaltered) $cur_line\n";
                        $cur_line =~ /(^.* href=\")(.+?)(\".*$)/i;
                        $beg  = $1;
                        $link = html_clean($2);
                        $end  = $3;
                        $cur_line = $beg.$link.$end;
                        print "cur_line (altered) $cur_line\n";
                    }
                    if ($cur_line =~ m/(<img.*>|<script.*>)/i) {
                        print "cur_line (unaltered) $cur_line\n";
                        $cur_line =~ /(^.* src=\")(.+?)(\".*$)/i;
                        $beg  = $1;
                        $link = html_clean($2);
                        $end  = $3;
                        $cur_line = $beg.$link.$end;
                        print "cur_line (altered) $cur_line\n";
                    }
                    $out .= $cur_line;
                }
                seek(FILE, 0, 0) or die "can't seek to start of file: $!";
                print FILE $out or die "can't print to file: $!";
                truncate(FILE, tell(FILE)) or die "can't truncate file: $!";
                close(FILE) or die "can't close file: $!";
            }
        }

        find(\&process, $input);

        sub html_clean {
            my ($input_string) = @_;
            $input_string = lc($input_string);
            $input_string =~ s/%20|\s/_/g;
            return $input_string;
        }
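
    One way around the one-match-per-line limit, sketched here: instead of capturing a single href/src and rebuilding the line, rewrite every attribute occurrence in place with a global substitution (the /g and /e flags), which handles an anchor and an image on the same line:

        # A sketch: /g rewrites every href/src on the line, /e evaluates
        # the replacement as Perl code so html_clean() can run per match.
        $cur_line =~ s/\b(href|src)="([^"]+)"/$1 . '="' . html_clean($2) . '"'/gie;

    This single substitution replaces both if-blocks in the loop.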

  • How do I pipe the Java console output to a file?

    - by Ced
    I found a bug in an application that completely freezes the JVM. The produced stack trace would provide valuable information for the developers, and I would like to retrieve it from the Java console. When the JVM crashes, the console is frozen and I cannot copy the contained text anymore. Is there a way to pipe the Java console directly to a file, or some other means of accessing the console output of a Java application? Update: I forgot to mention: without changing the code. I am a manual tester. Update 2: This is under Windows XP and it's actually a Web Start application. Piping the output of javaws jnlp-url does not work (empty file).
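
    For Web Start specifically, one code-free approach (assuming a Sun/Oracle JRE; exact key names and the location of deployment.properties vary between JRE versions) is to enable the built-in trace and log files, which capture console output even when the console window itself freezes:

        # deployment.properties (in the per-user Java Deployment directory,
        # e.g. %USERPROFILE%\Application Data\Sun\Java\Deployment on Windows XP)
        deployment.trace=true
        deployment.log=true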

  • How can I load a file into a DataBag from within a Yahoo PigLatin UDF?

    - by Cervo
    I have a Pig program where I am trying to compute the minimum center between two bags. In order for it to work, I found I need to COGROUP the bags into a single dataset. The entire operation takes a long time. I want to either open one of the bags from disk within the UDF, or to be able to pass another relation into the UDF without needing to COGROUP. Code:

        -- **** Load files for iteration ****
        register myudfs.jar;
        wordcounts = LOAD 'input/wordcounts.txt' USING PigStorage('\t')
            AS (PatentNumber:chararray, word:chararray, frequency:double);
        centerassignments = LOAD 'input/centerassignments/part-*' USING PigStorage('\t')
            AS (PatentNumber:chararray, oldCenter:chararray, newCenter:chararray);
        kcenters = LOAD 'input/kcenters/part-*' USING PigStorage('\t')
            AS (CenterID:chararray, word:chararray, frequency:double);
        kcentersa1 = CROSS centerassignments, kcenters;
        kcentersa = FOREACH kcentersa1 GENERATE
            centerassignments::PatentNumber AS PatentNumber,
            kcenters::CenterID AS CenterID,
            kcenters::word AS word,
            kcenters::frequency AS frequency;

        -- ***** Assign to nearest k-mean *******
        assignpre1 = COGROUP wordcounts BY PatentNumber, kcentersa BY PatentNumber;
        assignwork2 = FOREACH assignpre1 GENERATE
            group AS PatentNumber,
            myudfs.kmeans(wordcounts, kcentersa) AS CenterID;

    Basically, my issue is that for each patent I need to pass the sub-relations (wordcounts, kcenters). In order to do this, I do a CROSS and then a COGROUP by PatentNumber in order to get the set PatentNumber, {wordcounts}, {kcenters}. If I could figure out a way to pass a relation, or to open up the centers from within the UDF, then I could just GROUP wordcounts by PatentNumber and run myudfs.kmeans(wordcount), which is hopefully much faster without the CROSS/COGROUP. This is an expensive operation: currently it takes about 20 minutes and appears to tax the CPU/RAM. I was thinking it might be more efficient without the CROSS. I'm not sure it will be faster, so I'd like to experiment. Anyway, it looks like calling the loading functions from within Pig needs a PigContext object, which I don't get from an EvalFunc. And to use the Hadoop file system, I need some initial objects as well, which I don't see how to get. So my question is: how can I open a file from the Hadoop file system from within a Pig UDF? I also run the UDF via main for debugging, so I need to load from the normal filesystem when in debug mode. Another, better idea would be if there was a way to pass a relation into a UDF without needing to CROSS/COGROUP. This would be ideal, particularly if the relation resides in memory, i.e. being able to do myudfs.kmeans(wordcounts, kcenters) without needing the CROSS/COGROUP with kcenters. But the basic idea is to trade IO for RAM/CPU cycles. Anyway, any help will be much appreciated; the Pig UDFs aren't super well documented beyond the most simple ones, even in the UDF manual.
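
    One sketch of the side-file approach (this is not an official Pig facility for passing relations; it assumes the task nodes resolve the same HDFS paths as the client). A convenient property of FileSystem.get() is that with no cluster configuration on the classpath it falls back to the local filesystem, which also covers the "debug via main()" case:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class CenterLoader {
            // Opens a side file through the Hadoop FileSystem API: HDFS when
            // running on the cluster, local FS when running standalone.
            public static BufferedReader open(String path) throws IOException {
                FileSystem fs = FileSystem.get(new Configuration());
                return new BufferedReader(new InputStreamReader(fs.open(new Path(path))));
            }
        }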

  • How to generate C# documentation to a CHM or HTML file?

    - by BrunoLM
    Is there a way to generate a readable document file from the documentation in the code, directly from Visual Studio (also considering 2010)? If there isn't, what should I use to create a CHM or HTML file? Code example:

        /// <summary>
        /// Convert a number to string
        /// </summary>
        /// <param name="number">An integer number to be converted to string</param>
        /// <returns>Number as string</returns>
        /// <example>
        /// <code>
        /// var s = MyMethod(5);
        /// </code>
        /// </example>
        /// <exception cref="Exception">In case it can't convert</exception>
        /// <remarks>
        /// Whatever
        /// </remarks>
        public string MyMethod(int number)
        {
            return number.ToString();
        }
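
    Visual Studio itself only emits the raw XML documentation file (enable it under project Properties > Build > "XML documentation file", or directly in the project file); a separate tool such as Sandcastle or Sandcastle Help File Builder then compiles that XML into CHM/HTML. A sketch of the MSBuild setting (the output file name here is hypothetical):

        <!-- in the .csproj -->
        <PropertyGroup>
          <DocumentationFile>bin\Debug\MyLibrary.xml</DocumentationFile>
        </PropertyGroup>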

  • VSS: How do I recover from "File <foo> has been destroyed, and cannot be rebuilt."?

    - by Eniac
    We're running Visual SourceSafe 6.0 (build 8163). In one project, there's an old label I want to do a Get on, but a few files have been added and destroyed since that label was created. Now, every time I try to do a Get on the label, I get the warning "File has been destroyed and cannot be rebuilt, do you want to continue?" for each destroyed file (which seems completely stupid, since the destroyed files never existed before the label was set). I've tried adding files with the same name, but that didn't help. I also tried deleting (not destroying) those added files, but that didn't help either. I really want to be rid of the warning, since the home-cooked build app we use to build all the projects doesn't handle this error/warning, and hence can't Get the requested label and build that project. Help! (And no, running VSS is not by choice, trust me. I was hoping never to see it again after the first time I was forced to use it, which was ten years ago.)

  • Can't read query string if default index file name is omitted?

    - by Mike
    Is there an issue with IIS or ASP Classic where Request.ServerVariables("QUERY_STRING") returns blank if no default file name is given in the URL? On my local developer machine, I can do http://localhost/xslt/?opcs/abc, which returns "opcs/abc". However, on our ancient web server, it returns nothing; I have to explicitly give it the default file name in the URL, like so: http://localhost/xslt/default.asp?opcs/abc. While nothing too major, it is a bit of an annoyance. One way I can think of maybe remedying the problem is to have JavaScript read the URL and return everything after the ?. Unfortunately, I do not know what version of IIS or ASP we are using. Thank you.
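
    If the server-side variable really is empty, the client-side workaround the question hints at could look like this sketch (it re-requests the page with the default document named explicitly; "default.asp" is assumed to be the configured default document):

        <script type="text/javascript">
        // Grab everything after the "?" and retry with an explicit file name.
        // After the redirect the path no longer ends in "/", so this won't loop.
        var qs = window.location.search.substring(1);   // e.g. "opcs/abc"
        var path = window.location.pathname;
        if (qs.length > 0 && path.charAt(path.length - 1) == "/") {
            window.location.replace(path + "default.asp?" + qs);
        }
        </script>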

  • git hooks - regenerate a file and add it to each commit?

    - by egarcia
    I'd like to automatically generate a file and add it to a commit if it has changed. Is it possible, and if so, what hooks should I use? Context: I'm programming a CSS library. It has several CSS files, and at the end I want to produce a compacted and minimized version. Right now my workflow is:

    1. Modify the css files x.css and y.css
    2. git add x.css y.css
    3. Execute minimize.sh, which parses all the css files in my lib, minimizes them, and produces a min.css file
    4. git add min.css
    5. git commit -m 'modified x and y doing foo and bar'

    I would like to have steps 3 and 4 done automatically via a git hook (see the sketch below). Is that possible? I've never used git hooks before. After reading the man page, I think I need to use the pre-commit hook. But can I invoke git add min.css, or will I break the internet?
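
    A pre-commit hook can indeed do this: git add in a pre-commit hook updates the index before the commit object is written, so the regenerated file rides along in the same commit. A minimal sketch (one caveat: with partial staging, e.g. git add -p, this can sweep unstaged CSS changes into min.css):

        #!/bin/sh
        # .git/hooks/pre-commit -- rebuild the minimized file and stage it.
        ./minimize.sh || exit 1   # a non-zero exit aborts the commit
        git add min.css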

  • How to utilize the network for p2p file sharing on the Android platform?

    - by CSharperWithJava
    I'm working on some apps for the Android platform, and I have two problems that I'm not quite sure how to approach; both are closely related:

    1. How can I send a relatively small data file from one Android device to another (preferably over the internet or directly through the wireless network)?
    2. Is it possible to create a temporary p2p live data stream from one Android device to another? An example application would be to stream low-res video from phone A's camera to phone B, or audio.

    I would much appreciate being pointed in the right direction on either issue (file transfer or real-time data transfer; a sketch for the first is below).
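
    For the first problem, the simplest case is both devices on the same Wi-Fi network: one phone listens on a ServerSocket, the other pushes the file over a plain TCP socket. A sketch of the sending side (host, port, and path are placeholders; going across the internet through NAT would additionally need a relay server or hole-punching, which this does not cover):

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.Socket;

        public class FileSender {
            // Streams a small file to a peer listening on host:port.
            public static void send(String path, String host, int port) throws IOException {
                Socket socket = new Socket(host, port);
                FileInputStream in = new FileInputStream(path);
                try {
                    OutputStream out = socket.getOutputStream();
                    byte[] buf = new byte[4096];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);  // copy file bytes onto the wire
                    }
                    out.flush();
                } finally {
                    in.close();
                    socket.close();
                }
            }
        }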

  • Unable to execute fetch(PDO::FETCH_ASSOC), and CSV file is not updated

    - by Rachel
        // First, prepare the statement, using placeholders
        $query = "SELECT * FROM tableName";
        $stmt = $this->connection->prepare($query);

        // Execute the statement
        $stmt->execute();

        var_dump($stmt->fetch(PDO::FETCH_ASSOC));

        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    Is this the correct way to do it? If yes, then why do I get false for the var_dump, after which it does not go into the while loop and does not write into the CSV file? Any suggestions?
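
    Two things stand out. First, $data is never opened with fopen(), so fputcsv() has no valid file handle to write to. Second, if var_dump() prints false, the query itself returned no rows (worth checking the table name and $stmt->errorInfo()); note that when rows are present, that var_dump() call also silently consumes the first one. A sketch with both fixed (the output path is hypothetical):

        <?php
        $query = "SELECT * FROM tableName";
        $stmt  = $this->connection->prepare($query);
        $stmt->execute();

        $data = fopen('/tmp/export.csv', 'w');           // a real file handle
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            fputcsv($data, $row);                        // one CSV line per row
        }
        fclose($data);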

  • At what point in the test cycle are the test results written to a file?

    - by jcollum
    I'd like to take the entirety of the test results and publish them to a database. Got the database, got the table, got the script to publish it. The question is: at what point in the MSTest cycle are the results fully written to the file? And how can I get the path to that file? I'd especially like to grab the "TextMessages" node and put it into my database. I assumed AssemblyCleanup, but the TestContext doesn't seem to be available then.

  • What's the best way to handle web.config file versions in ASP.Net?

    - by MusiGenesis
    I have an ASP.Net web site (ASPX and ASMX pages) with a single web.config file. We have a development version and a production version. Over time, the web.config files for development and production have diverged substantially. What is the best practice for keeping both versions of web.config in source control (we use TortoiseSVN, but I don't think that matters)? It seems like I could add the production web.config file with a name like "web.config.prod", and then when we turn over all the files we would just add the step of deleting the existing web.config and renaming web.config.prod to web.config. This seems hackish, although I'm sure it would work. Is there not some mechanism for dealing with this built into Visual Studio? It seems like this would be a common issue, but I haven't found any questions (with answers) about this.
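
    Visual Studio 2010 added exactly this mechanism: web.config transforms. You keep one base web.config plus a Web.Debug.config / Web.Release.config per build configuration, and the xdt transform is applied at publish time. A sketch (the connection-string name and values here are hypothetical):

        <!-- Web.Release.config -->
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="MainDb"
                 connectionString="Server=prod;Database=App;Integrated Security=true"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
          </connectionStrings>
        </configuration>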

  • How to search SVN repository for a file when I'm not sure where I put it?

    - by Chris Thornton
    A co-worker is sure he checked in a file, foo_oustanding.dpr, but isn't sure when or where (we have lots of "tools" and "utility" ancillary branches, lots of project branches, etc.). I need a way to search the entire repository for this file. I could check the whole source tree out to my HD, but that would take several hours. Is there a faster way? I tried the Repo Browser (Tortoise) and it didn't seem to have a search. I also thought about dumping the log from the beginning of time, but that seemed silly. I have at my disposal: TortoiseSVN 1.6; Subversion 1.5.6 running on Apache, on a Windows 2003 server; and Remote Desktop access to the server, with admin rights. Thanks for any ideas.
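
    Dumping the log is actually less silly than it sounds: with the verbose flag, svn log lists every changed path in every revision, and it's one pass over the repository instead of a multi-hour checkout. A sketch (the repository URL is a placeholder):

        rem Run once, then search the output (findstr ships with Windows 2003)
        svn log -v https://svnserver/repo > repo-log.txt
        findstr /i "foo_oustanding.dpr" repo-log.txt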

  • Is there a .def file equivalent on Linux for controlling exported function names in a shared library?

    - by morpheous
    I am building a shared library on Ubuntu 9.10. I want to export only a subset of my functions from the library. On the Windows platform, this would be done using a module definition (.def) file, which contains a list of the external and internal names of the functions exported from the library. I have the following questions:

    1. How can I restrict the exported functions of a shared library to those I want (i.e. a .def file equivalent)?
    2. Using .def files as an example, you can give a function an external name that is different from its internal name (useful for preventing name collisions and also for redecorating mangled names, etc.). Can the same be done on Linux?
    3. On Windows I can use the EXPORT command (IIRC) to check the list of exported functions and addresses; what is the equivalent way to do this on Linux?
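
    For the first and third questions, a sketch using a GNU ld version script, which is the closest .def analogue (the function names here are hypothetical). For the second question, giving a function a different external name is also possible on Linux, e.g. via __asm__ labels or the .symver directive, though it's more involved and mostly used for symbol versioning.

        /* exports.map -- everything not listed under "global:" stays hidden */
        {
          global:
            foo_init;
            foo_run;
          local:
            *;
        };

    Link with the script, then list the dynamic symbols to verify:

        gcc -shared -Wl,--version-script=exports.map -o libfoo.so foo.o
        nm -D libfoo.so      # or: objdump -T libfoo.so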

  • Is it OK to open a DB4o file for query, insert, update multiple times?

    - by Khnle
    This is the way I am thinking of using DB4o. When I need to query, I would open the file, read, and close:

        using (IObjectContainer db = Db4oFactory.OpenFile(Db4oFactory.NewConfiguration(), YapFileName))
        {
            try
            {
                List<Pilot> pilots = db.Query<Pilot>().ToList<Pilot>();
            }
            finally
            {
                try { db.Close(); } catch (Exception) { };
            }
        }

    At some later time, when I need to insert, then:

        using (IObjectContainer db = Db4oFactory.OpenFile(Db4oFactory.NewConfiguration(), YapFileName))
        {
            try
            {
                Pilot pilot1 = new Pilot("Michael Schumacher", 100);
                db.Store(pilot1);
            }
            finally
            {
                try { db.Close(); } catch (Exception) { };
            }
        }

    In this way, I thought I would keep the file more tidy by only having it open when needed, and have it closed most of the time. But I keep getting an InvalidCastException: Unable to cast object of type 'Db4objects.Db4o.Reflect.Generic.GenericObject' to type 'Pilot'. What's the correct way to use DB4o?
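
    Opening and closing per operation is legal in itself; the GenericObject cast usually means db4o could not resolve the Pilot class when the file was opened (for instance, the objects were stored from a different assembly or namespace than the one querying). Even so, the more common pattern, sketched below, is a single container held open for the application's lifetime, which also avoids the repeated open/close overhead (the file name here is hypothetical):

        using System;
        using Db4objects.Db4o;

        public static class Db
        {
            private static IObjectContainer container;

            // One container for the whole application's lifetime.
            public static IObjectContainer Instance
            {
                get
                {
                    if (container == null)
                        container = Db4oFactory.OpenFile(
                            Db4oFactory.NewConfiguration(), "pilots.yap");
                    return container;
                }
            }

            // Call once at shutdown.
            public static void Shutdown()
            {
                if (container != null) { container.Close(); container = null; }
            }
        }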

  • How do I detect OS X in my .vimrc file, so certain configurations will only apply to OS X?

    - by Brandon
    I use my .vimrc file on my laptop (OS X) and several servers (Solaris & Linux), and could hypothetically someday use it on a Windows box. I know how to detect Unix generally, and Windows, but how do I detect OS X? (And for that matter, is there a way to distinguish between Linux and Solaris, etc.? And is there a list somewhere of all the strings that 'has' can take? My Google-fu turned up nothing.) For instance, I'd use something like this:

        if has("mac")
            " open a file in TextMate from vi:
            nmap mate :w<CR>:!mate %<CR>
        elseif has("unix")
            " do stuff under linux and
        elseif has("win32")
            " do stuff under windows
        endif

    But clearly "mac" is not the right string, nor are any of the others I tried.
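
    A sketch of one common workaround: not every OS X vim build is compiled with the mac/macunix features set, but the OS itself can be asked via uname, which also distinguishes Linux ("Linux") from Solaris ("SunOS"):

        if has("mac") || has("macunix") || system('uname') =~? 'Darwin'
            " OS X-only settings
            nmap mate :w<CR>:!mate %<CR>
        elseif has("unix")
            if system('uname') =~? 'SunOS'
                " Solaris-only settings
            else
                " Linux (and other unix) settings
            endif
        elseif has("win32")
            " Windows settings
        endif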

  • Can a WAR file be deployed on any server?

    - by Roshan
    Please pardon me if this question is silly. Suppose I develop a J2EE web application using the Spring framework and a MS SQL Server database, on a WebSphere application server, and later create a WAR file for this application. Can I deploy this WAR file on a Tomcat server without any change in code? In other words: can this application be hosted by a web host which provides only Tomcat servers, and if yes, is any change in code required? If it cannot be deployed, can you please suggest what to do, because I haven't developed any application on a Tomcat server. All the applications that I have developed have been on WebSphere Application Server using RAD.

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine. About 2 weeks ago, I had one of the disks (disk #5; disk #2 has gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to a data recovery company that I can't afford. After some digging through old log files, resetting the disks to factory default, etc., I found a few errors that were made during this re-create. I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order! I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but am a little confused on what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also: does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them in such a way that a native 7-disk RAID-6 would have?
    Thanks! Some more mdadm output that might be helpful:

        [~] # mdadm --version
        mdadm - v2.6.3 - 20th August 2007

    mdadm details from one of the disks in the array:

        [~] # mdadm --examine /dev/sda3
        /dev/sda3:
                  Magic : a92b4efc
                Version : 1.0
            Feature Map : 0x0
             Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                   Name : 0
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
           Raid Devices : 7
          Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
             Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
              Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
           Super Offset : 5857395368 sectors
                  State : clean
            Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af
            Update Time : Tue Jun 10 13:01:06 2014
               Checksum : d275c82d - correct
                 Events : 7036
             Chunk Size : 64K
             Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
            Array State : Uu_u_uu 2 failed

    mdadm details for the array in the current disk order (based on my best guess, reconstructed from old log files):

        [~] # mdadm --detail /dev/md0
        /dev/md0:
                Version : 01.00.03
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
             Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
          Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
           Raid Devices : 7
          Total Devices : 5
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Tue Jun 10 13:01:06 2014
                  State : clean, degraded
         Active Devices : 5
        Working Devices : 5
         Failed Devices : 0
          Spare Devices : 0
             Chunk Size : 64K
                   Name : 0
                   UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                 Events : 7036

            Number   Major   Minor   RaidDevice State
               0       8        3        0      active sync   /dev/sda3
               1       8       19        1      active sync   /dev/sdb3
               2       0        0        2      removed
               3       8       51        3      active sync   /dev/sdd3
               4       0        0        4      removed
               5       8       99        5      active sync   /dev/sdg3
               6       8       83        6      active sync   /dev/sdf3

    Output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap etc.; the one I'm after is md0):

        [~] # more /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
              14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]

        md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
              530048 blocks [2/2] [UU]

        md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [8/7] [UUUUUUU_]
              bitmap: 21/57 pages [84KB], 4KB chunk

        md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
              530048 blocks [8/7] [UUUUUUU_]
              bitmap: 37/65 pages [148KB], 4KB chunk

        unused devices: <none>
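
    On the layout question: mdadm's default RAID-6 layout is left-symmetric ("algorithm 2" in the mdstat line above), and an md reshape from 5 to 7 devices redistributes the stripes, so the result should match a natively created 7-disk array with the same layout algorithm; the remaining unknown really is just the device order. One approach sometimes used for this, sketched below with strong caveats: only ever run it against image copies or copy-on-write overlays of the disks, never the originals. Device names follow the --detail output above, with "missing" standing in for the two absent members:

        # try one candidate order, then check the filesystem read-only
        mdadm --stop /dev/md0
        mdadm --create /dev/md0 --assume-clean --metadata=1.0 \
              --level=6 --chunk=64 --raid-devices=7 \
              /dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3
        fsck.ext4 -n /dev/md0    # -n: report only, never modify
        # repeat with permuted device orders until fsck finds a sane filesystem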
