Search Results

Search found 1657 results on 67 pages for 'writes on'.

Page 49/67

  • Android: writing a file to sdcard

    - by Sumit M Asok
    I'm trying to write a file from an Http post reply to a file on the sdcard. Everything works fine until the byte array of data is retrieved. I've tried setting the WRITE_EXTERNAL_STORAGE permission in the manifest and tried many different combinations of tutorials I found on the net. All I could find was using the activity's openFileOutput("", MODE_WORLD_READABLE) method, but my app writes the file from a thread - specifically, a thread invoked from another thread when a file has to be written - so passing an activity object didn't work even though I tried it. The app has come a long way and I cannot change how it is currently written. Can someone help? Code:

        File file = new File(bgdmanip.savLocation); // bgdmanip.savLocation holds the whole file path
        FileOutputStream filecon = null;
        filecon = new FileOutputStream(file);
        byte[] myByte;
        myByte = Base64Coder.decode(seReply);
        Log.d("myBytes", String.valueOf(myByte));
        bos.write(myByte);
        filecon.write(myByte);
        myvals = x * 11024;

    seReply is a string reply from the HttpPost response. The second block of code is looped with reference to x. The file is created but remains 0 bytes.
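    The 0-byte symptom is consistent with the data never being flushed out to the file: if bos is a BufferedOutputStream wrapping filecon, nothing reaches disk until it is flushed or closed, and neither stream in the snippet is ever closed. A minimal sketch of the write path, assuming the reply is plain Base64 and using java.util.Base64 in place of the question's Base64Coder helper (on older Android API levels, android.util.Base64.decode(seReply, Base64.DEFAULT) is the equivalent):

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.Base64;

        public final class ReplyWriter {
            // Decode the HTTP reply and write it to the target file, closing the
            // stream so the bytes actually land on the sdcard.
            public static void writeReply(String seReply, File target) throws IOException {
                byte[] payload = Base64.getDecoder().decode(seReply);
                FileOutputStream out = new FileOutputStream(target);
                try {
                    out.write(payload);
                    out.flush();
                } finally {
                    out.close();
                }
            }
        }

    The thread-versus-activity issue goes away once the method only needs a File; only the decode-and-write step has to run on the worker thread.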

    Read the article

  • Write asynchronously to file in Perl

    - by Stefhen
    Basically I would like to: read a large amount of data from the network into an array in memory; asynchronously write this array data, running it through bzip2 before it hits the disk; repeat. Is this possible? If it is, I know that I will have to somehow read the next pass of data into a different array, as the AIO docs say that the array must not be altered before the async write is complete. I would like to background all of my writes to disk, in order, since the bzip2 pass is going to take much longer than the network read. Is this doable? Below is a simple example of what I think is needed, but it just reads a file into array @a for testing.

        use warnings;
        use strict;
        use EV;
        use IO::AIO;
        use Compress::Bzip2;
        use FileHandle;
        use Fcntl;

        my @a;
        print "loading to array...\n";
        while (<>) {
            $a[$. - 1] = $_;
        }
        print "array loaded...\n";

        my $aio_w = EV::io IO::AIO::poll_fileno, EV::WRITE, \&IO::AIO::poll_cb;

        aio_open "./out", O_WRONLY || O_NONBLOCK, 0, sub {
            my $fh = shift or die "error while opening: $!\n";
            aio_write $fh, undef, undef, $a, -1, sub {
                $_[0] > 0 or die "error: $!\n";
                EV::unloop;
            };
        };

        EV::loop EV::LOOP_NONBLOCK;
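    The shape of the problem - keep reading from the network while a background worker compresses and writes the previous batch - is a classic producer-consumer pipeline, and the "different array per pass" requirement falls out naturally because each batch is handed off whole. As a language-neutral illustration only (a Java sketch rather than the IO::AIO approach asked about, with GZIP standing in for bzip2 since it ships with the JDK):

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.zip.GZIPOutputStream;

        public final class BackgroundCompressor {
            // Bounded queue: the network reader blocks if the compressing writer falls behind.
            private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>(4);
            private final Thread writer;

            public BackgroundCompressor(String path) throws IOException {
                final GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(path));
                writer = new Thread(() -> {
                    try {
                        while (true) {
                            byte[] batch = queue.take();   // next batch handed off by the reader
                            if (batch.length == 0) break;  // empty batch marks end of stream
                            out.write(batch);              // compress-and-write runs off the reader thread
                        }
                        out.close();
                    } catch (IOException | InterruptedException e) {
                        e.printStackTrace();
                    }
                });
                writer.start();
            }

            public void submit(byte[] batch) throws InterruptedException { queue.put(batch); }

            public void finish() throws InterruptedException {
                queue.put(new byte[0]);  // signal the writer to drain and stop
                writer.join();
            }
        }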

    Read the article

  • Ruby - Immutable Objects

    - by Chris Bunch
    I've got a highly multithreaded app written in Ruby that shares a few instance variables. Writes to these variables are rare (1%) while reads are very common (99%). What is the best way (either in your opinion or in the idiomatic Ruby fashion) to ensure that these threads always see the most up-to-date values involved? Here are some ideas I have so far (although I'd like your input before I overhaul this): 1) Have a lock that must be used before reading or writing any of these variables (from Java Concurrency in Practice). The downside is that it puts a lot of synchronize blocks in my code and I don't see an easy way to avoid that. 2) Use Ruby's freeze method (see here), although it looks equally cumbersome and doesn't give me any of the synchronization benefits that the first option gives. These options both seem pretty similar, but hopefully someone out there will have a better idea (or can argue well for one of them). I'd also be fine with making the objects immutable so they aren't corrupted or altered in the middle of an operation, but I don't know Ruby well enough to make that call on my own, and this question seems to argue that objects are highly mutable.
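    Since Java Concurrency in Practice is already the reference point, a minimal Java sketch of the read-write-lock idea behind option 1 - many concurrent readers, exclusive writers - may help weigh it; in Ruby, a Mutex or the ReentrantReadWriteLock from the concurrent-ruby gem plays the same role, so this is an illustration of the pattern rather than Ruby code:

        import java.util.concurrent.locks.ReentrantReadWriteLock;

        public final class SharedValue {
            private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
            private String value = "initial";

            // The 99% case: any number of threads may hold the read lock at once.
            public String get() {
                lock.readLock().lock();
                try {
                    return value;
                } finally {
                    lock.readLock().unlock();
                }
            }

            // The 1% case: the write lock is exclusive, so readers never see a half-written update.
            public void set(String newValue) {
                lock.writeLock().lock();
                try {
                    value = newValue;
                } finally {
                    lock.writeLock().unlock();
                }
            }
        }

    Because reads vastly outnumber writes here, a read-write lock keeps the common path cheap while still guaranteeing visibility of the latest write.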

    Read the article

  • Post Loading ads - using appendChild to move an IFRAME with text link ads

    - by Prem
    I have changed code around to basically load an ad at the bottom of the page in a hidden div, and attached an onload event handler that calls document.getElementById(xxx).appendChild() to take the hidden ad and move it into the right spot in my page. This works GREAT. However, when the ad is a text ad, after I move the ad there is nothing in the rendered iframe. I did tests to see what it looks like before I move it, and sure enough the text links load in the iframe, but the second I do the appendChild call to move the div that contains the ad I seem to lose the contents of the iframe. Any ideas what's going on?

        <div id="myad" style="display: none;">
        GA_googleFillSlot("MyADSlotName");
        </div>
        <script>
        window.onload = function() {
            // leader board
            document.getElementById('adplaceholder').appendChild(document.getElementById('myAd'));
            document.getElementById('myAd').style.display = '';
        };
        </script>

    UPDATE: I think the problem is that for text ads Google writes to the iframe directly, inserting the relevant text links, whereas for other ads it uses the iframe to just point at some src. It seems that when I do the appendChild, if there is no src set for the iframe, then after the move the iframe in the new location contains nothing... I guess it does a reload on the src? Any way around this??

    Read the article

  • How to Increase the time till a read timeout error occurs?

    - by Alex
    Hi all. I've written a PHP script that takes a long time to execute [image processing for thousands of pictures]. It's a matter of hours - maybe 5. After 15 minutes of processing, I get the error: ERROR The requested URL could not be retrieved. The following error was encountered while trying to retrieve the URL: (the URL I clicked) Read Timeout. The system returned: [No Error] A Timeout occurred while waiting to read data from the network. The network or server may be down or congested. Please retry your request. Your cache administrator is webmaster. What I need is to enable that script to run for much longer. Here is all the technical info: I'm writing in PHP and using the Zend Framework, and I'm using Firefox. The long script runs after clicking a link, so since the script is not done I still see the page the link was on, and the browser says "waiting for ...". After 15 minutes the error occurs. I tried to make changes in Firefox through about:config, but without any success. I don't know - the changes might be needed somewhere else. So, any ideas? Thanks ahead.

    Read the article

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook "Permission Denied"

    - by Andy R
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running via HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed. Here are the configuration settings: users access the repo via HTTP with the apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

        <Location /svn/repos>
            DAV svn
            SVNPath /home/svn/repos
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dav_svn.passwd
            Require valid-user
        </Location>

    I created a post-commit hook file that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the apache user) and assigned 755 permissions to the post-commit script. When I test the post-commit script using the www-data user in an empty environment, it works:

        sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit on a client machine, the commit is successful, yet the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error even with an empty pre-commit script: "Commit failed (details follow): Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied". I did a few searches on Google for this error, and I presume that this is an issue with the apache user (www-data) not having adequate permissions, specifically to open /dev/null. I also read that the reason post-commit fails silently is that it doesn't report via stdout. Anyway, I've also tried giving the apache user (www-data) ownership of the entire repository and edited the apache virtualhost to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks

    Read the article

  • iText PDFReader Extremely Slow To Open

    - by Wbmstrmjb
    I have some code that combines a few pages of acro forms (with acrofields intact) and then, at the end, writes some JS to the entire document. It is the PdfReader in the function adding the JS that is taking extremely long to instantiate (about 12 seconds for a 1MB file). Here is the code (pretty simple):

        public static byte[] AddJavascript(byte[] document, string js)
        {
            PdfReader reader = new PdfReader(new RandomAccessFileOrArray(document), null);
            MemoryStream msOutput = new MemoryStream();
            PdfStamper stamper = new PdfStamper(reader, msOutput);
            PdfWriter writer = stamper.Writer;
            writer.AddJavaScript(js);
            stamper.Close();
            reader.Close();
            byte[] withJS = msOutput.GetBuffer();
            return withJS;
        }

    I have benchmarked the above, and the line that is slow is the first one. I have tried reading it from a file instead of memory and tried using a MemoryStream instead of the RandomAccessFileOrArray. Nothing makes it any faster. If I add JS to a single-page document, it is very fast. So my thought is that the code that combines the pages is somehow making the PDF slow for the PdfReader to read. Here is the combine code:

        public static byte[] CombineFiles(List<byte[]> sourceFiles)
        {
            MemoryStream output = new MemoryStream();
            PdfCopyFields copier = new PdfCopyFields(output);
            try
            {
                output.Position = 0;
                foreach (var fileBytes in sourceFiles)
                {
                    PdfReader fileReader = new PdfReader(fileBytes);
                    copier.AddDocument(fileReader);
                }
            }
            catch (Exception exception)
            {
                //throw
            }
            finally
            {
                copier.Close();
            }
            byte[] concat = output.GetBuffer();
            return concat;
        }

    I am using PdfCopyFields because I need to preserve the form fields and so cannot use PdfCopy or PdfSmartCopy. This combine code is very fast (a few ms) and produces working documents. The AddJS code above is called after it, and the PdfReader open is the slow piece. Any ideas?
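    One thing worth ruling out first: MemoryStream.GetBuffer() returns the whole underlying buffer, capacity padding included, so the combined PDF handed back by CombineFiles can carry a tail of zero bytes after %%EOF, whereas ToArray() returns only the bytes actually written. For reference, iText's original API is Java; a minimal sketch of the same add-JavaScript step operating on an exact-length byte array (assuming iText 5.x and its com.itextpdf.text.pdf package - the calls mirror the C# ones above):

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import com.itextpdf.text.DocumentException;
        import com.itextpdf.text.pdf.PdfReader;
        import com.itextpdf.text.pdf.PdfStamper;

        public final class JsStamper {
            // Adds document-level JavaScript to an in-memory PDF and returns the new bytes.
            public static byte[] addJavascript(byte[] pdf, String js)
                    throws IOException, DocumentException {
                PdfReader reader = new PdfReader(pdf);          // exact-length input, no trailing padding
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                PdfStamper stamper = new PdfStamper(reader, out);
                stamper.getWriter().addJavaScript(js);          // same call as stamper.Writer.AddJavaScript
                stamper.close();
                reader.close();
                return out.toByteArray();                        // toByteArray, like ToArray, trims to length
            }
        }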

    Read the article

  • configure batch to send minute info instead of entire stdout

    - by Daniel
    Hi all, I am working on a RedHat server along with several other users. We use the batch utility to set up a job queue. Some of the programs that I use write stuff to stdout during the run, with info on how much data has been processed so far, estimated time until completion, etc.

        batch -q z
        at> myScript -i somefile -o someotherfile

    By default, the batch utility sends an email to me (since I configured it using .forward) with the entire output from stdout. Since the script writes something to stdout a few times each second, the amount of log output I get from a two-day script can be ~20 MB. Clearly not what I want. I can of course pipe stdout to a file like so:

        batch -q z
        at> myScript -i somefile -o someotherfile > myscript.stdout.log

    but then I get a blank e-mail from the utility. So to my question: is it possible to configure batch so that it sends the time the job started and ended, or the run time, or some other valuable information to me, instead of a 20 MB mail or a blank mail? Note that the scripts I use are binaries and I cannot customize the code to output less info in the first place (which would be the optimal solution, I guess). Thanks /Daniel

    Read the article

  • IIS7 ASP.NET Session drops in seconds

    - by shxo
    For testing I have 1 isolated page - no masters, controls, …. My sessions are lost after about 30 seconds. I've tried setting the timeout on the page itself, in web.config, both, and neither. Tried forms authentication with timeout and windows authentication. I recycle the AppPool after changes. I can Response.Write from Session_Start, but I never get any Response.Write calls from Session_End. Some things I've tried:

        <sessionState mode="InProc" stateConnectionString="tcpip=127.0.0.1:42424" sqlConnectionString="data source=127.0.0.1;" cookieless="false" timeout="20" />
        <sessionState mode="InProc" cookieless="false" timeout="20"/>
        <sessionState mode="InProc" timeout="20"/>
        <sessionState timeout="20"/>

    No luck. My runtime is set to:

        <httpRuntime useFullyQualifiedRedirectUrl="true" maxRequestLength="204800" requestLengthDiskThreshold="204800" executionTimeout="600" />

    I don't know whether this is relevant, but I can't think of anything else to post! Thanks!

    Read the article

  • How can I simulate this application hang scenario?

    - by Pwninstein
    I have a Windows Forms app that itself launches different threads to do different kinds of work. Occasionally, ALL threads (including the UI thread) become frozen, and my app becomes unresponsive. I've decided it may be a Garbage Collector-related issue, as the GC will freeze all managed threads temporarily. To verify that just managed threads are frozen, I spin up an unmanaged one that writes to a "heartbeat" file with a timestamp every second, and it is not affected (i.e. it still runs):

        public delegate void ThreadProc();

        [DllImport("UnmanagedTest.dll", EntryPoint = "MyUnmanagedFunction")]
        public static extern void MyUnmanagedFunction();

        [DllImport("kernel32")]
        public static extern IntPtr CreateThread(
            IntPtr lpThreadAttributes,
            uint dwStackSize,
            IntPtr lpStartAddress,
            IntPtr lpParameter,
            uint dwCreationFlags,
            out uint dwThreadId);

        uint threadId;
        ThreadProc proc = new ThreadProc(MyUnmanagedFunction);
        IntPtr functionPointer = Marshal.GetFunctionPointerForDelegate(proc);
        IntPtr threadHandle = CreateThread(IntPtr.Zero, 0, functionPointer, IntPtr.Zero, 0, out threadId);

    My question is: how can I simulate this situation, where all managed threads are suspended but unmanaged ones keep on spinning? My first stab:

        private void button1_Click(object sender, EventArgs e)
        {
            Thread t = new Thread(new ThreadStart(delegate
            {
                new Hanger();
                GC.Collect(2, GCCollectionMode.Forced);
            }));
            t.Start();
        }

        class Hanger
        {
            private int[] m_Integers = new int[10000000];

            public Hanger() { }

            ~Hanger()
            {
                Console.WriteLine("About to hang...");
                //This doesn't reproduce the desired behavior
                //while (true) ;
                //Neither does this
                //Thread.Sleep(System.Threading.Timeout.Infinite);
            }
        }

    Thanks in advance!!

    Read the article

  • JSP or .ascx equivalent for Scala?

    - by Daniel Worthington
    I'm working on a small MVC "framework" (it's really very small) in Scala. I'd like to be able to write my view files as Scala code so I can get lots of help from the compiler. Pre-compiling is great, but what I really want is a way to have the servlet container automatically compile certain files (my view files) on request so I don't have to shut down Jetty and compile all my source files at once, then start it up again just to see small changes to my HTML. I do this a lot with .ascx files in .NET (the file will contain just one scriptlet tag with a bunch of C# code inside which writes out markup using an XmlWriter) and I love this workflow. You just make changes and then refresh your browser, but it's still getting compiled! I don't have a lot of experience with Java, but it seems possible to do this with JSP as well. I'm wondering if this sort of thing is possible in Scala. I have looked into building this myself (see more info here: http://www.nabble.com/Compiler-API-td12050645.html) but I would rather use something else if it's out there.

    Read the article

  • How to test IO code in JUnit?

    - by add
    I want to test two services: 1) a service which builds a file name, and 2) a service which writes some data into the file provided by the 1st service. In the first I'm building some complex file structure (just for example {user}/{date}/{time}/{generatedId}.bin). In the second I'm writing data to the file passed by the first service (the 1st service calls the 2nd service). How can I test both services using mocks, without making any real IO interactions? Just for example, the 1st service:

        public class DefaultLogService implements LogService {
            public void log(SomeComplexData data) {
                serializer.write(new FileOutputStream(buildComplexFileStructure()), data);
                or
                serializer.write(buildComplexFileStructure(), data);
                or
                serializer.write(new GenericInputEntity(buildComplexFileStructure()), data);
            }
            private ComplexDataSerializer serializer; // mocked in tests
        }

    The 2nd service:

        public class DefaultComplexDataSerializer implements ComplexDataSerializer {
            void write(InputStream stream, SomeComplexData data) {...}
            or
            void write(File file, SomeComplexData data) {...}
            or
            void write(GenericInputEntity entity, SomeComplexData data) {...}
        }

    In the first case I need to pass a FileOutputStream, which will create a file (i.e. I can't test the 1st service). In the second case I need to pass a File; what can I do in the 2nd service test if I need to inspect the data that will be written to the specified file? (I can't test the 2nd service.) In the third case I think I need some generic IO object which will wrap File. Maybe there is some ready-to-use solution for this purpose?
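    For the second service, a stream-based overload makes an in-memory test straightforward: hand it a ByteArrayOutputStream and assert on the resulting bytes, no disk involved (assuming the overload takes an OutputStream - the snippet says InputStream, but a serializer that writes needs an output stream). The first service can then be tested by injecting a mocked ComplexDataSerializer and verifying what it was called with. A minimal JUnit 4 sketch of the second-service test, where expectedBytes() is a hypothetical helper for whatever the serialized form should be and SomeComplexData is assumed to be constructible directly:

        import static org.junit.Assert.assertArrayEquals;

        import java.io.ByteArrayOutputStream;
        import org.junit.Test;

        public class DefaultComplexDataSerializerTest {

            @Test
            public void writesComplexDataToTheGivenStream() {
                DefaultComplexDataSerializer serializer = new DefaultComplexDataSerializer();
                SomeComplexData data = new SomeComplexData();
                ByteArrayOutputStream out = new ByteArrayOutputStream(); // stands in for the real file

                serializer.write(out, data); // stream-based overload from the question

                // Assert on the raw bytes the serializer produced, without touching the file system.
                assertArrayEquals(expectedBytes(data), out.toByteArray());
            }

            // Hypothetical helper: the byte sequence the format is supposed to produce.
            private byte[] expectedBytes(SomeComplexData data) {
                return new byte[0];
            }
        }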

    Read the article

  • Create ABPerson Records in a Shared CardDAV (10.6/Server Hosted) AddressBook

    - by Woodster
    Hello, my app presently reads and writes to the local Mac OS X 10.6 client addressbook using the AddressBook.framework. It works fine. 10.6 Server introduced AddressBook Server, which 10.6 clients can connect to by setting up a CardDAV account. User and group records can be stored in that account, which is synchronized to the 10.6 server and made available to other clients who access the same CardDAV account. Mail.app is able to autocomplete email addresses from accounts that are in the local datastore as well as the remote CardDAV datastore. ABPeoplePicker can see both. But programmatically, I'm not getting any CardDAV-based data returned from my queries against the shared AddressBook. I'm not sure if I need to ask it for a different AddressBook, or if I need to modify my fetch request to indicate that I want it to be able to use the shared data too. My goal is to adapt the current code so that it can read/write to the CardDAV account too, instead of just the local addressbook. Thoughts?

    Read the article

  • Automating ftp command line application redirecting I/O in .Net

    - by SoMoS
    Hello, I'm trying to automate the ftp client that Windows includes by redirecting the I/O of the process. What I'm doing is starting the process from my application and trying to read what the client prints on the screen while sending my commands to it. The problem is that I cannot read almost any data sent by the ftp client: some data is present, but most of it is never read. This is the code I have so far:

        Public Sub Start()
            process = New Diagnostics.Process()
            process.StartInfo.FileName = "ftp.exe" ' The command is on the path
            process.StartInfo.CreateNoWindow = True
            process.StartInfo.RedirectStandardInput = True
            process.StartInfo.RedirectStandardOutput = True
            process.StartInfo.UseShellExecute = False
            process.Start()
            process.StandardInput.AutoFlush = True
            process.BeginOutputReadLine()
        End Sub

        ' Takes data from stdout
        Private Sub process_OutputDataReceived(ByVal sender As Object, ByVal e As System.Diagnostics.DataReceivedEventArgs) Handles process.OutputDataReceived
            ' At this moment there is code here to show the stdout in a textbox
        End Sub

        ' Sends data to stdin
        Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click
            process.StandardInput.WriteLine(Me.TextEdit1.Text)
        End Sub

    Now, when I execute this and send ?, for example, I just get the first line (and I should get a lot more). Or when I send the open command I should receive an A but nothing is received. Any ideas? Another question: when a console application writes on the screen, does it always do that by writing to stdout or stderr?
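    On the last question: a console program normally prints through stdout or stderr, so anything sent to stderr will be missing if only stdout is redirected; some programs also write straight to the console device, which no redirection can capture. As a language-neutral illustration of driving a console client with merged output (a Java sketch rather than the VB.NET Process API used above; the host and commands are placeholders):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.OutputStreamWriter;
        import java.io.PrintWriter;

        public final class FtpDriver {
            public static void main(String[] args) throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder("ftp", "-n"); // -n: no auto-login, placeholder invocation
                pb.redirectErrorStream(true);                        // merge stderr into stdout so prompts are not lost
                Process ftp = pb.start();

                PrintWriter in = new PrintWriter(new OutputStreamWriter(ftp.getOutputStream()), true);
                BufferedReader out = new BufferedReader(new InputStreamReader(ftp.getInputStream()));

                in.println("open ftp.example.com");                  // placeholder host
                in.println("quit");

                String line;
                while ((line = out.readLine()) != null) {            // read everything the client prints
                    System.out.println("ftp> " + line);
                }
                ftp.waitFor();
            }
        }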

    Read the article

  • CSS - Class hierarchies???

    - by ClarkeyBoy
    Hi, I have a site with a customer front end, which has a catalogue, homepage, contact page, about us page and so on. There is also an administration front end. I would like to implement a kind of hierarchy where any elements within an element with class "admin" inherit properties set in the admin stylesheet, and anything else inherits from the customer stylesheet. The purpose of this is so that admins can log in on the admin front end, where they have access to lots of advanced stuff, but they can also navigate to the customer front end where they can execute basic tasks (such as hiding catalogue items, running a debug script if a customer reports an issue and so on). I would like all the admin tools on the customer front end to have properties taken from the admin stylesheet instead of the customer one - this will change the background colour and stuff. Is there any easy way to set up namespace-like nesting to make things simpler, for example:

        .admin {
            .list {
                .list-subtitle { }
                .list-item { }
            }
            a { }
        }
        .customer {
            .list {
                .list-subtitle { }
                .list-item { }
            }
            a { }
        }

    I know it can be written like:

        .admin .list {}
        .admin .list .list-item {}
        .admin a

    I just don't want to have to keep putting .admin all the time. Does anyone have any suggestions on how I could do this? I suppose I could write a .NET class which sets this up and writes a stylesheet according to what's put into it, but then I would not be able to read the styles so easily, and there would be all sorts of Classes.Add(blah) calls and so on. Thanks in advance for any replies... Regards, Richard

    Read the article

  • COM access to classic ASP intrinsic objects

    - by wrench
    I'm converting a VB6 COM object that works with classic ASP to a C# .NET COM object:

        Interop_COMSVCS.ObjectContext objContext;
        Interop_COMSVCS.AppServer objAppServer;
        objAppServer = null; // need to initialize before using
        objAppServer = new Interop_COMSVCS.AppServer();
        objContext = objAppServer.GetObjectContext();
        oApplication = (Interop_ASP.Application)objContext["Application"];
        oSession = (Interop_ASP.Session)objContext["Session"];
        oResponse = (Interop_ASP.Response)objContext["Response"];
        oRequest = (Interop_ASP.Request)objContext["Request"];

    oSession works for storing local information to/from ASP storage, and oResponse can do simple writes to the browser, BUT code like oRequest.Cookies["sessionId"] or oResponse.Cookies["sessionId"] doesn't provide any sort of read or write access. Any cast or conversion I try tells me I'm dealing with an empty or null System.Object. There doesn't seem to be any syntax to get/set the cookie collection. With COM+ I've seen some articles that mention a switch for "Access to ASP Intrinsic Objects" - that seems to describe my issue, but I'd rather not use COM+. There are also some articles that say if I were using ASP.NET I could use HttpContext and HttpRequest/Response, but that's a completely different set of data objects that don't seem to be available with classic ASP. I've been stuck on this for a few days. Any help appreciated.

    Read the article

  • Lost Update Anomaly in Sql Server Update Command

    - by Javed
    Hi, I am very much confused. I have a transaction at the ReadCommitted isolation level. Among other things I am also updating a counter value in it, something similar to:

        UPDATE tblCount SET counter = counter + 1

    My application is a desktop application, and this transaction happens to occur quite frequently and concurrently. We recently noticed an error: sometimes the counter value doesn't get updated, or an increment is missed. We also insert one record on each counter update, so we are sure that the records have been inserted but somehow the counter fails to update. This happens about once in 2000 simultaneous transactions. I seriously doubt it is a lost update anomaly I am facing, but if you look at the command above, it just updates the counter from its own value: if I have started a transaction and the transaction has reached this statement, it should have locked the row. This should not cause a lost update, but it's happening somehow. Could it be that this update command works in two parts - like it first reads the counter value (during which it doesn't get an exclusive lock) and then writes the new calculated value (when it does get an exclusive lock)? Please help, I have got really confused.
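    For what it's worth, a single UPDATE tblCount SET counter = counter + 1 is atomic in SQL Server even under READ COMMITTED: the engine takes an update lock for the read half of the statement before converting it to an exclusive lock for the write, so two concurrent increments cannot both read the same old value. Missing increments are therefore more likely to come from a transaction that rolled back after the insert was observed, or from an UPDATE that matched no row. A hedged JDBC-style sketch of keeping the insert and the increment in one transaction and checking the affected-row count (the connection string and the tblItems insert are placeholders for whatever the real schema is):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public final class CounterUpdater {
            // Placeholder connection string; the real one depends on the environment.
            private static final String URL = "jdbc:sqlserver://localhost;databaseName=AppDb";

            public static void recordItem(String item) throws SQLException {
                try (Connection con = DriverManager.getConnection(URL)) {
                    con.setAutoCommit(false); // insert and counter bump commit or roll back together
                    try (PreparedStatement insert = con.prepareStatement(
                                 "INSERT INTO tblItems (item) VALUES (?)");
                         PreparedStatement bump = con.prepareStatement(
                                 "UPDATE tblCount SET counter = counter + 1")) {
                        insert.setString(1, item);
                        insert.executeUpdate();

                        int rows = bump.executeUpdate();
                        if (rows != 1) {
                            // If this fires, the counter row was missing, not silently "lost".
                            throw new SQLException("counter update affected " + rows + " rows");
                        }
                        con.commit();
                    } catch (SQLException e) {
                        con.rollback(); // a rollback here is one way an increment quietly disappears
                        throw e;
                    }
                }
            }
        }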

    Read the article

  • Output redirection still with colors in PowerShell

    - by stej
    Suppose I run msbuild like this:

        function Clean-Sln {
            param($sln)
            MSBuild.exe $sln /target:Clean
        }
        Clean-Sln c:\temp\SO.sln

    In the Posh console the output is in colors. That's pretty handy - you spot colors just by watching the output, and e.g. unimportant messages are grey. Question: I'd like to add the ability to redirect it somewhere, like this (simplified example):

        function Clean-Sln {
            param($sln)
            MSBuild.exe $sln /target:Clean | Redirect-AccordingToRedirectionVariable
        }
        $global:Redirection = 'Console'
        Clean-Sln c:\temp\SO.sln
        $global:Redirection = 'TempFile'
        Clean-Sln c:\temp\Another.sln

    If I use 'Console', the cmdlet/function Redirect-AccordingToRedirectionVariable should output the msbuild messages with colors the same way as if the output were not piped - in other words, it should leave the output as it is. If I use 'TempFile', Redirect-AccordingToRedirectionVariable will store the output in a temp file. Is it even possible? I guess it is not :| Or do you have any advice on how to achieve the goal? Possible solution:

        if ($Redirection -eq 'Console') {
            MSBuild.exe $sln /target:Clean | Redirect-AccordingToRedirectionVariable
        } else {
            MSBuild.exe $sln /target:Clean | Out-File c:\temp.txt
        }

    But if you imagine there can be many, many msbuild calls, it's not ideal. Don't be shy to tell me any new suggestion for how to cope with it ;) Any background info about redirections/coloring/output is welcome as well. (The problem is not msbuild specific; it touches any application that writes colored output.)

    Read the article

  • Spam proof hit counter in Django

    - by Jim Robert
    I already looked at the most popular Django hit counter solutions and none of them seem to solve the issue of spamming the refresh button. Do I really have to log the IP of every visitor to keep them from artificially boosting page view counts by spamming the refresh button (or writing a quick and dirty script to do it for them)? More information: right now you can inflate your view count with the following few lines of Python code, which is so little that you don't even really need to write a script - you could just type it into an interactive session:

        from urllib import urlopen

        num_of_times_to_hit_page = 100
        url_of_the_page = "http://example.com"
        for x in range(num_of_times_to_hit_page):
            urlopen(url_of_the_page)

    Solution I'll probably use: to me, it's a pretty rough situation when you need to do a bunch of writes to the database on EVERY page view, but I guess it can't be helped. I'm going to implement IP logging due to several users artificially inflating their view count. It's not that they're bad people or even bad users. See the answer about solving the problem with caching... I'm going to pursue that route first. Will update with results. For what it's worth, it seems Stack Overflow is using cookies (I can't increment my own view count, but it increased when I visited the site in another browser.) I think that the benefit is just too much, and this sort of 'cheating' is just too easy right now. Thanks for the help everyone!

    Read the article

  • fatal error C1075: end of file found before the left brace and read and write files not working

    - by user320950
    could someone also tell me if the actions i am attempting to do which are stated in the comments are correct or not. i am new at c++ and i think its correct but i have doubts #include<iostream> #include<fstream> #include<cstdlib> #include<iomanip> using namespace std; int main() { ifstream in_stream; // reads itemlist.txt ofstream out_stream1; // writes in items.txt ifstream in_stream2; // reads pricelist.txt ofstream out_stream3;// writes in plist.txt ifstream in_stream4;// read recipt.txt ofstream out_stream5;// write display.txt float price=' ',curr_total=0.0; int wrong=0; int itemnum=' '; char next; in_stream.open("ITEMLIST.txt", ios::in); // list of avaliable items if( in_stream.fail() )// check to see if itemlist.txt is open { wrong++; cout << " the error occured here0, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n" << endl; exit(1); } else{ cout << " System ran correctly " << endl; out_stream1.open("listWititems.txt", ios::out); // list of avaliable items if(out_stream1.fail() )// check to see if itemlist.txt is open { wrong++; cout << " the error occured here1, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n"; exit(1); } else{ cout << " System ran correctly " << endl; } in_stream2.open("PRICELIST.txt", ios::in); if( in_stream2.fail() ) { wrong++; cout << " the error occured here2, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n"; exit (1); } else{ cout << " System ran correctly " << endl; } out_stream3.open("listWitdollars.txt", ios::out); if(out_stream3.fail() ) { wrong++; cout << " the error occured here3, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n"; exit (1); } else{ cout << " System ran correctly " << endl; } in_stream4.open("display.txt", ios::in); if( in_stream4.fail() ) { wrong++; cout << " the error occured here4, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n"; exit (1); } else{ cout << " System ran correctly " << endl; } out_stream5.open("showitems.txt", ios::out); if( out_stream5.fail() ) { wrong++; cout << " the error occured here5, you have " << wrong++ << " errors" << endl; cout << "Error opening the file\n"; exit (1); } else{ cout << " System ran correctly " << endl; } in_stream.close(); // closing files. 
out_stream1.close(); in_stream2.close(); out_stream3.close(); in_stream4.close(); out_stream5.close(); system("pause"); in_stream.setf(ios::fixed); while(in_stream.eof()) { in_stream >> itemnum; cin.clear(); cin >> next; } out_stream1.setf(ios::fixed); while (out_stream1.eof()) { out_stream1 << itemnum; cin.clear(); cin >> next; } in_stream2.setf(ios::fixed); in_stream2.setf(ios::showpoint); in_stream2.precision(2); while((price== (price*1.00)) && (itemnum == (itemnum*1))) { while (in_stream2 >> itemnum >> price) // gets itemnum and price { while (in_stream2.eof()) // reads file to end of file { in_stream2 >> itemnum; in_stream2 >> price; price++; curr_total= price++; in_stream2 >> curr_total; cin.clear(); // allows more reading cin >> next; } } } out_stream3.setf(ios::fixed); out_stream3.setf(ios::showpoint); out_stream3.precision(2); while((price== (price*1.00)) && (itemnum == (itemnum*1))) { while (out_stream3 << itemnum << price) { while (out_stream3.eof()) // reads file to end of file { out_stream3 << itemnum; out_stream3 << price; price++; curr_total= price++; out_stream3 << curr_total; cin.clear(); // allows more reading cin >> next; } return itemnum, price; } } in_stream4.setf(ios::fixed); in_stream4.setf(ios::showpoint); in_stream4.precision(2); while ( in_stream4.eof()) { in_stream4 >> itemnum >> price >> curr_total; cin.clear(); cin >> next; } out_stream5.setf(ios::fixed); out_stream5.setf(ios::showpoint); out_stream5.precision(2); out_stream5 <<setw(5)<< " itemnum " <<setw(5)<<" price "<<setw(5)<<" curr_total " <<endl; // sends items and prices to receipt.txt out_stream5 << setw(5) << itemnum << setw(5) <<price << setw(5)<< curr_total; // sends items and prices to receipt.txt out_stream5 << " You have a total of " << wrong++ << " errors " << endl; }

    Read the article

  • What is a good starter-project in Perl?

    - by Vivin Paliath
    A buddy of mine wants to learn Perl. He asked me how to go about it. I told him: To learn Perl, you must first write Perl code. This was seconded by another buddy of mine who writes a lot of good Perl code. It's very zen, but not helpful. The problem is that this is exactly how I learnt to write Perl. At my very first job I had to implement something in Perl and I pretty much just jumped into it and waded and stumbled around until I figured it out. I was thinking that the best way for him to learn Perl would be to do a small project in Perl. The problem is, I can't think of anything that would be a good starter-project in Perl. For just basic learning and understanding concepts, I have recommended going to PerlMonks, to read Learning Perl, and also to look at Perl Best Practices. Aside from this, I think a good starter-project would be useful for him to get a grasp of the language. Any suggestions?

    Read the article

  • How to unencode escaped XML with xQuery

    - by mbrevoort
    I have a variable in xQuery of type xs:string with the value of an encoded HTML snippet (the content of a Twitter tweet). It looks like this:

        Headlines-Today &#8226; AP sources: &lt;b&gt;Obama&lt;/b&gt; pick for Justice post withdraws : News - Rest Of World - &lt;a href=&quot;http://shar.es/mqMAG&quot;&gt;http://shar.es/mqMAG&lt;/a&gt;

    When I try to write this out in an HTML block, I need the string to be unescaped so that the HTML snippet will be interpreted by the browser. Instead the string is getting written out as is, and the browser renders it as just text (so you see <a href="blah.... ). Here's how I'm writing out this string: {$entry/atom:content/text()}. How can I have the escaped characters unencoded so it writes < rather than &lt; ? I've tried to do a replace like this, but it always replaces the &lt; with &lt; !

        fn:replace($s, "&lt;", "<")

    Read the article

  • Web.Routing for the site root or homepage

    - by Aquinas
    I am doing some work with Web.Routing, using it to have friendly urls and nice Rest like interfaces to a site that is essentially rendered by a single IHttpHandler. There are no webforms, the handler generates all the html/json and writes it as part of process request. This works well for things like /Sites/Accounting for example, but I can't get it to work for the site root, i.e. '/'. I have tried registering a route with an empty string, with 'default.aspx' (which is the empty aspx file I keep in my root folder to play nice with cassini and iis). I set RouteExistingFiles to false explicitly, but whatever I do when hitting the root url it still opens default.axpx, which has no code it inherits from, and contains a simple h1 tag to show that I've hit it. I don't want to change the default file to redirect to a desired route, I just want the equivalent of a 'default' route that is applied when no other routes are found, similar to MVC. For reference, the previous version of the site didn't use Web.Routing, but had a handler referenced in the web.config that was perfectly capable of intercepting requests for the root or default.aspx. Specs: ASP.NET 3.5sp1, C#, no webforms, MVC or openrasta. Plain old IHttpHandlers.

    Read the article

  • TinyMCE Image Alignment

    - by will.earp.co.uk
    TinyMCE has always been a little difficult when it comes to aligning images. Either the align tag, or adding style="float: left;", has been its solution. Ideally I would just like to add class="left" or class="right" so that I can set the border and margins of the image. Up until now the only way to do this without using the advimage plugin was to insert the image, then select it, then pick a style from the style menu. Ideally I should be able to use the align control in the image dialogue to set the alignment class, or use the alignment controls on the toolbar when in the main editing window. I have just started looking at a solution to this again; now that IE6 is finally starting to die, I can use CSS attribute selectors, so IMG[style="float: left;"] {} works, but I would rather use a class in case there are any other style attributes which would cause the selector to fail. And it doesn't work in IE6, and you know some corporate clients will still be running the bloody thing! So I looked through the TinyMCE documentation and found the formats configuration option, which seems to allow you to specify how TinyMCE applies code for various operations. Here I can add the IMG tag as a selector and have classes: "left" for the alignleft format. This applies the class correctly when the alignment is selected from the toolbar, but it still writes an inline style when the alignment is selected through the image dialogue. Am I doing something wrong, or is there a better way of doing this that will allow my clients to select image alignment from both the image dialogue and the toolbar, whilst applying a class to the image?

    Read the article

  • How do I check if output stream of a socket is closed?

    - by Roman
    I have this code:

        public void post(String message) {
            output.close();
            final String mess = message;
            (new Thread() {
                public void run() {
                    while (true) {
                        try {
                            output.println(mess);
                            System.out.println("The following message was successfully sent:");
                            System.out.println(mess);
                            break;
                        } catch (NullPointerException e) {
                            try {Thread.sleep(1000);} catch (InterruptedException ie) {}
                        }
                    }
                }
            }).start();
        }

    As you can see, I close the socket at the very beginning of the code and then try to use it to send some information to another computer. The program writes "The following message was successfully sent", which means that no NullPointerException was thrown. So, does Java throw no exception if it tries to use a closed output stream of a socket? Is there a way to check if a socket is closed or open? ADDED: I initialize the socket in the following way:

        clientSideSocket = new Socket(hostname, port);
        PrintWriter out = new PrintWriter(clientSideSocket.getOutputStream(), true);
        browser.output = out;
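    What is happening here is that PrintWriter never throws an IOException: its print/println methods swallow the underlying error and just set an internal flag, which is why no exception appears even though output.close() has shut the stream (and, since it wraps the socket's output stream, the socket itself). The NullPointerException branch never fires because output is not null. A minimal sketch of checking for this, assuming access to both the PrintWriter and the Socket (note that isClosed() only reports a local close; it cannot detect the remote end going away):

        import java.io.PrintWriter;
        import java.net.Socket;

        public final class SocketChecks {
            // Returns true if the message appears to have been written successfully.
            public static boolean post(Socket socket, PrintWriter output, String message) {
                if (socket.isClosed() || socket.isOutputShutdown()) {
                    return false;                 // the socket was closed locally at some point
                }
                output.println(message);
                return !output.checkError();      // checkError() flushes and reports any swallowed IOException
            }
        }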

    Read the article
