Search Results


  • Java reading files

    - by user69514
    OK, this is a homework question, but I cannot find the answer anywhere, not even in the book. Path to Files: If the user wants to specify a path for a file, the typical forward slash is replaced by ________. Can you help?
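    A minimal illustration of the two path styles in Java (the file name is made up; the blank above is left for the textbook's wording): forward slashes work on Windows as well, while a backslash must be escaped inside a string literal.

        // Both refer to the same Windows file; the path is hypothetical.
        java.io.File viaForwardSlash = new java.io.File("C:/data/input.txt");
        java.io.File viaBackslash    = new java.io.File("C:\\data\\input.txt"); // "\\" escapes one backslash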


  • safely reading directory contents

    - by Jack
    Is it safe to read directory entries via readdir() or scandir() while files are being created or deleted in that directory? Should I prefer one over the other? By "safe" I mean that the entries returned by these functions are valid and can be operated on without crashing the program. Thanks.
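    For reference, a minimal readdir() loop (the directory name is hypothetical). Whether an entry created or deleted mid-scan shows up in the listing is unspecified, but each returned pointer stays valid until the next readdir() or closedir() on that stream.

        #include <dirent.h>
        #include <stdio.h>

        int main(void) {
            DIR *dir = opendir("/tmp");              /* hypothetical directory */
            if (dir == NULL) { perror("opendir"); return 1; }
            struct dirent *entry;
            while ((entry = readdir(dir)) != NULL) {
                /* entry->d_name is valid until the next readdir()/closedir() call */
                printf("%s\n", entry->d_name);
            }
            closedir(dir);
            return 0;
        }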


  • some files just won't install - linux-image-3.5.0-42-generic

    - by jatin
    It gives the following details of the package operation failing after using the Software Updater on 12.10:

        installArchives() failed:
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        Preconfiguring packages ...
        [the same perl/locale warning block repeats before each of the next three
        "Preconfiguring packages ..." passes]
        (Reading database ... 217566 files and directories currently installed.)
        Preparing to replace linux-image-3.5.0-36-generic 3.5.0-36.57~precise1 (using .../linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb) ...
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        Done.
        Unpacking replacement linux-image-3.5.0-36-generic ...
        dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb (--unpack):
         unable to make backup link of `./boot/System.map-3.5.0-36-generic' before installing new version: Operation not permitted
        No apport report written because MaxReports is reached already
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic
        Preparing to replace linux-image-3.5.0-42-generic 3.5.0-42.65~precise1 (using .../linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb) ...
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        Done.
        Unpacking replacement linux-image-3.5.0-42-generic ...
        dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb (--unpack):
         unable to make backup link of `./boot/System.map-3.5.0-42-generic' before installing new version: Operation not permitted
        No apport report written because MaxReports is reached already
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic
        Preparing to replace memtest86+ 4.20-1.1ubuntu1 (using .../memtest86+_4.20-1.1ubuntu2.1_amd64.deb) ...
        Unpacking replacement memtest86+ ...
        dpkg: error processing /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb (--unpack):
         unable to make backup link of `./boot/memtest86+.bin' before installing new version: Operation not permitted
        No apport report written because MaxReports is reached already
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb
         /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb
         /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb
        Error in function:

    Output of df -h:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda6        38G  6.5G   30G  19% /
        udev            3.5G  4.0K  3.5G   1% /dev
        tmpfs           1.5G  924K  1.5G   1% /run
        none            5.0M  4.0K  5.0M   1% /run/lock
        none            3.6G  296K  3.6G   1% /run/shm
        none            100M   64K  100M   1% /run/user
        /dev/sda1       197M   87M  111M  44% /boot
        /dev/sda5       243G  971M  230G   1% /home
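    "Operation not permitted" while making a backup link in /boot sometimes points to an immutable attribute on the old files. A hedged check (an assumption to rule out, not a confirmed diagnosis):

        # list attributes; an 'i' flag would block dpkg's backup link
        lsattr /boot/System.map-3.5.0-42-generic /boot/memtest86+.bin
        # clear it only if it is actually set
        sudo chattr -i /boot/System.map-3.5.0-42-generic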


  • Roo gem .xlsx files performance problems [closed]

    - by alvaritogf
    I am seeing unacceptable performance when using the roo gem's XLSX or XLS libraries to read a file. Can someone suggest an alternative way to parse an .xlsx file?

        parsed_file = Excel.new(filename, false, :ignore)  if (file_format.upcase == "XLS")
        parsed_file = Excelx.new(filename, false, :ignore) if (file_format.upcase == "XLSX")
        raise t "#{filename} is not an Excel file!" if (!parsed_file)
        parsed_file.default_sheet = parsed_file.sheets[0] # 'Sheet2' # oo.sheets[1]
        first_row    = parsed_file.first_row
        last_row     = parsed_file.last_row
        first_column = parsed_file.first_column
        last_column  = parsed_file.last_column
        # logger.info "#### Total Rows:#{last_row}, first_row:#{first_row}, last_row:#{last_row}, first_column:#{first_column}, last_column:#{last_column}"
        first_row.upto(last_row) do |current_line|
          # Stuff ....
        end

    Thanks
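    One alternative worth benchmarking: a sketch assuming the rubyXL gem (the file name is hypothetical, and relative speed would have to be measured against your own files):

        require 'rubyXL'

        workbook  = RubyXL::Parser.parse('data.xlsx')
        worksheet = workbook[0]                       # first sheet
        worksheet.each do |row|
          values = row.cells.map { |c| c && c.value } # nil-safe cell access
          # Stuff ....
        end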


  • Sass interface in HTML6 for uploading files

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/sass-interface-in-html6-for-upload-files.aspx
    [This post is about an experiment and imagination.]

    Since Windows XP (the last OS I tried it on) there has been a feature for sending a file to a pen drive or making a shortcut on the Desktop. In XP and Win7 (Win8 has this too; it was not removed), you just select the file, right-click > Send to, and you can send the file to many places. My menu shows Skype because I have it installed. That Skype entry confirms that we can add our own app there to make it easier for the user to send a file into our app.

    Nowadays many people use the cloud or online sites to store their files. With HTML5 drag and drop, you need the site open, on the page that handles file upload. You select everything, drag and drop, and after the drop the file is uploaded to the server and shown in the list (if no error happens). But this kind of upload is seriously inconvenient, because I have to keep the site open while I do the operation.

    In this post I want to describe a feature that could make this better. I simply call it the SASS FILE UPLOAD API. With this API, when you browse a site and reach its file upload page, the page tells you that it also supports the SASS FILE API: enable it for a better experience.

    How this works: the API is activated on two conditions.
    1. The feature is disabled by default on a site (you can change that if it's not).
    2. The API allows specific sites to upload files, and uploads may have rules, for example the minimum or maximum size of an uploaded file, or which formats the site allows. On a resume site you might only be allowed .doc (according to the site's code).

    How the browser recognizes that a site offers the SASS service: the HTML source contains a meta tag similar to this:

        <meta name="sass-upload-api" path="/upload.json"/>

    upload.json is the one file that defines the values of the various settings:

        {
          "cookie_name": "ck_file",
          "maximum_allowed_perday": 24,
          "allowed_file_extensions": "*.png,*.jpg,*.jpeg,*.gif",
          "method": [
            { "get":    "file/get",    "routing": "/file/get/{fileName}" },
            { "post":   "file/post",   "routing": "/file/post/{fileName}" },
            { "delete": "file/delete", "routing": "/file/delete/{fileName}" },
            { "put":    "file/put",    "routing": "/file/put/{fileName}" },
            { "all":    "file/all",    "routing": "/file/all/{fileName}" }
          ]
        }

    cookie_name is simply a cookie that should be stored in the browser and defined in the json. We define cookie_name so we can share it with the service in the Windows system. Only this cookie will be accessible to the service, which keeps things safe security-wise; other cookies will not be shared. The cookie is used when files are posted to, put to, or fetched from this location. The "all" route simply returns the whole list of files, as a treeview-like json showing the directories on the server.

    For example, take example.com: if you have activated the API with this site, you will see a Send to option for it in explorer.exe. When you use it, a window opens asking which folder you want to send the file to; the window also describes the limits and how much you can upload. None of this requires the site to be open. The file itself is uploaded over the FTP protocol, which is better for transfer performance.

    How this API makes things faster: suppose you want to ask a question and post an image. You just send the file, and it is already there when you open stackoverflow.com; the site only has to ask which uploaded file you want to attach to the question you are asking. A second use is for people who use cloud apps: there is no need for drag and drop anymore, and no need to open the site either.

    This is still at the experiment level. I will update this post when I make some progress on this API.


  • Issue in web scraping in C#: Downloading and parsing zipped text files

    - by user64094
    I am writing a web scraper to download content from a website. Navigating to the website/URL triggers the creation of a temporary URL, and this new URL points to a zipped text file. The zipped file is to be downloaded and parsed. I have written the scraper in C# using WebClient and its DownloadFileAsync() method; the zipped file is read from the designated location in a trapped DownloadFileCompleted event. My issue: the Windows Open/Save dialog is triggered, which requires user input and disrupts the automation. Can you suggest a way to bypass the issue? I am happy to rewrite the code using any alternate libraries. :) Thanks for reading,
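    For reference, a minimal sketch of a dialog-free download-and-parse pass. Assumptions: .NET 4.5+ (for System.IO.Compression), and that the temporary URL has already been captured; tempUrl and the file names are hypothetical.

        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Net;

        class Scraper
        {
            static void Main()
            {
                string tempUrl = "http://example.com/temp/data.zip"; // hypothetical
                using (var client = new WebClient())
                {
                    client.DownloadFile(tempUrl, "data.zip"); // synchronous, no UI involved
                }
                using (ZipArchive zip = ZipFile.OpenRead("data.zip"))
                {
                    foreach (ZipArchiveEntry entry in zip.Entries)
                    {
                        using (var reader = new StreamReader(entry.Open()))
                        {
                            string text = reader.ReadToEnd(); // parse the text here
                            Console.WriteLine(text.Length);
                        }
                    }
                }
            }
        }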


  • Reading a POP3 server with only TcpClient and StreamWriter/StreamReader

    - by WebDevHobo
    I'm trying to read mails from my live.com account via the POP3 protocol. I've found that the server is pop3.live.com and the port is 587. I'm not planning on using a pre-made library; I'm using NetworkStream and StreamReader/StreamWriter for the job. I need to figure this out myself, so the answers given here are not useful: http://stackoverflow.com/questions/44383/reading-email-using-pop3-in-c. It's part of a larger program, but I made a small test to see if it works. Either way, I'm not getting anything. Here's the code I'm using, which I think should be correct:

        public Program()
        {
            string temp = "";
            using (TcpClient tc = new TcpClient(new IPEndPoint(IPAddress.Parse("127.0.0.1"), 8000)))
            {
                tc.Connect("pop3.live.com", 587);
                using (NetworkStream nws = tc.GetStream())
                {
                    using (StreamReader sr = new StreamReader(nws))
                    {
                        using (StreamWriter sw = new StreamWriter(nws))
                        {
                            sw.WriteLine("USER " + user);
                            sw.Flush();
                            sw.WriteLine("PASS " + pass);
                            sw.Flush();
                            sw.WriteLine("LIST");
                            sw.Flush();
                            while (temp != ".")
                            {
                                temp += sr.ReadLine();
                            }
                        }
                    }
                }
            }
            Console.WriteLine(temp);
        }

    So, I'm sending from port 8000 on my machine to port 587, the hotmail POP3 port. And I'm getting nothing, and I'm out of ideas.
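    For what it's worth, a hedged sketch of the usual shape of such a client: 587 is conventionally the SMTP submission port, while POP3 over SSL is conventionally 995, and the stream needs TLS wrapping before the USER/PASS exchange (treat the host and port details as assumptions to verify for this provider):

        using (var tc = new TcpClient("pop3.live.com", 995))
        using (var ssl = new SslStream(tc.GetStream()))
        {
            ssl.AuthenticateAsClient("pop3.live.com");   // TLS handshake first
            using (var sr = new StreamReader(ssl))
            using (var sw = new StreamWriter(ssl) { AutoFlush = true })
            {
                Console.WriteLine(sr.ReadLine());        // "+OK ..." greeting
                sw.WriteLine("USER " + user);
                Console.WriteLine(sr.ReadLine());
                sw.WriteLine("PASS " + pass);
                Console.WriteLine(sr.ReadLine());
                sw.WriteLine("LIST");
                string line;
                while ((line = sr.ReadLine()) != null && line != ".")
                    Console.WriteLine(line);             // one "index size" pair per message
            }
        }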


  • PDCurses TUI C++ Win32 console app - Access violation reading location

    - by Bach
    I have downloaded the PDCurses source and was able to successfully include curses.h in my project, link the pre-compiled library, and so on. After a few hours of trying out the library, I saw tuidemo.c in the demos folder, compiled it into an executable, and - brilliant! - exactly what I need for my project. Now the problem is that it's C code, and I am working on a C++ project in VS C++ 2008. The files I need are tui.c and tui.h. How can I include that C file in my C++ code? I saw a few suggestions here, but the compiler was not happy, with hundreds of warnings and errors. How can I go about including/using the TUI part of PDCurses? Thanks. EDIT: I added an extern "C" declaration, so my test looks like this now, but I'm getting another type of error:

        #include <stdio.h>
        #include <stdlib.h>

        using namespace std;

        extern "C" {
        #include <tui.h>
        }

        void sub0(void) { /* do nothing */ }
        void sub1(void) { /* do nothing */ }

        int main(int argc, char *const argv[])
        {
            menu MainMenu[] = {
                { "Asub", sub0, "Go inside first submenu" },
                { "Bsub", sub1, "Go inside second submenu" },
                { "", (FUNC)0, "" } /* always add this as the last item! */
            };
            startmenu(MainMenu, "TUI - 'textual user interface' demonstration program");
            return 0;
        }

    Although it compiles successfully, it throws an error at runtime which suggests a bad pointer:

        0xC0000005: Access violation reading location 0x021c52f9

    at the line startmenu(MainMenu, "TUI - 'textual user interface' demonstration program"); Not sure where to go from here. Thanks again.


  • Timeout reading verity collection - CF8

    - by Gary
    For a long time now I've been having a problem with the Verity search service bundled with ColdFusion 8. The issue is timeout errors occurring when performing any operation on a collection. It's intermittent, and usually occurs after a few operations have been performed successfully. For instance: if I'm adding records to a collection, the first, say, 15 records will go through with no problems, but all subsequent records will time out until the service is rebooted. I'm on a shared server, Windows 2008, 64-bit as far as I know. The error I receive is:

        "An error occurred while performing an operation in the Search Engine library.
        Error reading collection information.: com.verity.api.administration.ConfigurationException:
        java.io.IOException: Read timed out"

    Having spoken to my hosting company, and after doing some research, it's been suggested that the number of collections on a server may cause this issue. I've reduced the number of collections I use, and there are currently 39 collections on the server. As I'm on a shared server, I have no control over how many collections other customers use; however, I've read that the limit is 128 collections, so I don't see why 39 should make it unusable. The collections aren't big; there are maybe around 5,000 records between all of them. Any ideas?


  • Reading Xml with XmlReader in C#

    - by Gloria Huang
    I'm trying to read the following XML document as fast as I can and let additional classes manage the reading of each sub-block:

        <ApplicationPool>
          <Accounts>
            <Account>
              <NameOfKin></NameOfKin>
              <StatementsAvailable>
                <Statement></Statement>
              </StatementsAvailable>
            </Account>
          </Accounts>
        </ApplicationPool>

    I'm trying to use the XmlReader object to read each Account and subsequently its StatementsAvailable. Do you suggest using XmlReader.Read and checking each element and handling it? I've thought of separating my classes to handle each node properly. So there's an AccountBase class that accepts an XmlReader instance and reads the NameOfKin and several other properties of the account. Then I was wanting to iterate through the Statements and let another class fill itself out about each Statement (and subsequently add it to an IList). Thus far I have the "per class" part done by doing XmlReader.ReadElementString(), but I can't work out how to tell the pointer to move to the StatementsAvailable element, let me iterate through them, and let another class read each of those properties. Sounds easy!
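    One idiomatic shape for that hand-off (a sketch, not necessarily the asker's final design; the file name is hypothetical): ReadToFollowing() positions the reader, and ReadSubtree() gives each class a bounded reader of its own.

        using (XmlReader reader = XmlReader.Create("accounts.xml"))
        {
            while (reader.ReadToFollowing("Account"))
            {
                using (XmlReader accountReader = reader.ReadSubtree())
                {
                    accountReader.ReadToFollowing("NameOfKin");
                    string nameOfKin = accountReader.ReadElementContentAsString();
                    // An AccountBase-style class could consume accountReader here,
                    // including iterating the <Statement> elements it contains.
                }
            }
        }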


  • Basic Steps in reading Excel files into MATLAB

    - by user3693727
        >> [NUM,TXT,RAW]=xlsread('C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1')
        ??? Error using ==> xlsread at 219
        XLSREAD unable to open file C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.
        File C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.xls not found.

    This is the error I received when trying to read a simple Excel file into MATLAB; a snapshot of the spreadsheet I would like to load accompanied this question. Could you guide me through the basic know-how for extracting these data? I have looked through the other questions about reading Excel files into MATLAB, but I am still very confused. I ultimately wish to extract the second file shown for my project using the same method, which I could not do. Its file type seems to be different: it is a comma-separated values file, which is not xls. Hence, I am also confused about whether the different file type prevents extraction of the data. Thank you for helping(:
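    A hedged sketch of the two reads. One assumption: the first file really is Book1.xls, in which case passing the full name including the extension avoids the "not found" path; the csv file name is hypothetical.

        % xlsread resolves the file more reliably when given the extension explicitly
        [num, txt, raw] = xlsread('C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.xls');

        % comma-separated values are not xls; csvread handles purely numeric csv files
        data = csvread('C:\Users\Lincoln Wachn\Google Drive\Summer time\data.csv');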


  • Writing String to Stream and reading it back does not work

    - by Binary255
    I want to write a String to a Stream (a MemoryStream in this case) and read the bytes one by one.

        stringAsStream = new MemoryStream();
        UnicodeEncoding uniEncoding = new UnicodeEncoding();
        String message = "Message";
        stringAsStream.Write(uniEncoding.GetBytes(message), 0, message.Length);
        Console.WriteLine("This:\t\t" + (char)uniEncoding.GetBytes(message)[0]);
        Console.WriteLine("Differs from:\t" + (char)stringAsStream.ReadByte());

    The (undesired) result I get is:

        This:           M
        Differs from:   ?

    It looks like it's not being read correctly, as the first char of "Message" is 'M', which works when getting the bytes from the UnicodeEncoding instance but not when reading them back from the stream. What am I doing wrong? The bigger picture: I have an algorithm which will work on the bytes of a Stream, and I'd like to be as general as possible and work with any Stream. I'd like to convert an ASCII string into a MemoryStream, or maybe use another method to be able to work on the String as a Stream. The algorithm in question will work on the bytes of the Stream.
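    Two hedged observations about the snippet above (read from the code, not a tested diagnosis): UnicodeEncoding is UTF-16, so the byte count is twice message.Length, and the stream's position is left at the end after the write, so a rewind is needed before reading back.

        byte[] bytes = uniEncoding.GetBytes(message);
        stringAsStream.Write(bytes, 0, bytes.Length);       // byte count, not char count
        stringAsStream.Seek(0, SeekOrigin.Begin);           // rewind before reading
        Console.WriteLine((char)stringAsStream.ReadByte()); // 'M' (low byte of UTF-16 'M')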


  • C# Error reading two dates from a binary file

    - by Jamie
    Hi all, when reading two dates from a binary file I'm seeing the error below:

        "The output char buffer is too small to contain the decoded characters, encoding
        'Unicode (UTF-8)' fallback 'System.Text.DecoderReplacementFallback'.
        Parameter name: chars"

    My code is below:

        static DateTime[] ReadDates()
        {
            System.IO.FileStream appData = new System.IO.FileStream(
                appDataFile, System.IO.FileMode.Open, System.IO.FileAccess.Read);
            List<DateTime> result = new List<DateTime>();
            using (System.IO.BinaryReader br = new System.IO.BinaryReader(appData))
            {
                while (br.PeekChar() > 0)
                {
                    result.Add(new DateTime(br.ReadInt64()));
                }
                br.Close();
            }
            return result.ToArray();
        }

        static void WriteDates(IEnumerable<DateTime> dates)
        {
            System.IO.FileStream appData = new System.IO.FileStream(
                appDataFile, System.IO.FileMode.Create, System.IO.FileAccess.Write);
            List<DateTime> result = new List<DateTime>();
            using (System.IO.BinaryWriter bw = new System.IO.BinaryWriter(appData))
            {
                foreach (DateTime date in dates)
                    bw.Write(date.Ticks);
                bw.Close();
            }
        }

    What could be the cause? Thanks
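    A hedged reading of that error: PeekChar() tries to decode the next bytes as text, and arbitrary Ticks bytes are not valid UTF-8. Checking the underlying stream instead avoids any decoding (a sketch, not a tested fix):

        while (br.BaseStream.Position < br.BaseStream.Length)
        {
            result.Add(new DateTime(br.ReadInt64())); // 8 raw bytes per date, no char decoding
        }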


  • Missing line number in stack trace even though the PDB files are included

    - by Farzad
    This is driving me nuts. I have this web service implemented with C# using VS 2008, published on IIS. I have modified the release build so the .pdb files are copied along with the .dlls into the target directory in inetpub, and the web.config file has debug="true". Then I call a web service that throws an exception, and the stack trace does not contain the line numbers. I have no idea what I am missing here; any ideas? Additional info: if I run the web app using the VS built-in web server, it works and I get line numbers in the stack trace. But if I copy the same files (.pdb and .dll) that the VS built-in web server is using to IIS, the line numbers are still missing from the stack trace. It seems that there is something related to IIS that ignores the .pdb files! Update: when I publish to IIS, all the .pdb files are published under the bin directory and everything looks fine. But when I go to "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files", under the specific directory related to my project, I can see that the assembly (.dll) files are all there, but there are no .pdb files. This does not happen if I run the project using the VS built-in web server. And if I copy the .pdb files manually to the temp folder, I can see the line numbers. Any idea why the .pdb files are not copied to the temp folder? BTW, when I attach to the worker process, I can see that it says Symbols loaded!


  • Fork in Perl not working inside a while loop reading from file

    - by Sag
    Hi, I'm running a while loop that reads each line in a file and then forks a process with the data of that line to a child. After N lines I want to wait for the child processes to end and continue with the next N lines, etc. It looks something like this:

        while ($w = <INP>) {
            # ignore file header
            if ($w =~ m/^\D/) { next; }
            # get data from line
            chomp $w;
            @ws = split(/\s/, $w);
            $m = int($ws[0]);
            $d = int($ws[1]);
            $h = int($ws[2]);
            # only for some days in the year
            if (($m==3)and($d==15) or ($m==4)and($d==21) or ($m==7)and($d==18)) {
                die "could not fork" unless defined (my $pid = fork);
                unless ($pid) {
                    # some instructions here using $m, $d, $h ...
                }
                push @qpid, $pid;
                # when all processors are busy, wait for child processes
                if ($#qpid == ($procs-1)) {
                    for my $pid (@qpid) {
                        waitpid $pid, 0;
                    }
                    reset 'q';
                }
            }
        }
        close INP;

    This is not working. After the first round of processes I get some PIDs equal to 0, the @qpid array gets mixed up, and the file starts to get read at (apparently) random places, jumping back and forth. The end result is that most lines in the file get read two or three times. Any ideas? Thanks a lot in advance, S.
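    A hedged guess at the missing piece: fork returns 0 in the child, so without an explicit exit each child falls out of the unless block and keeps running the while loop on its inherited <INP> handle, which would produce exactly the rereads described.

        unless ($pid) {
            # ... child work using $m, $d, $h ...
            exit 0;   # the child must not continue the parent's read loop
        }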


  • Synchronizing files between Linux servers, through FTP

    - by Daniel Magliola
    I have the following configuration of servers: 1 central Linux server (a VPS) and 8 satellite Linux servers ("crappy shared hostings"). I have a bunch of files that I need to have on all servers. Right now I'm copying them everywhere manually, but I want to be able to copy them to the central server and then have a scheduled process that runs every now and then and synchronizes them (only outwardly; there's no need to look for "new" files on the satellite servers). There are a couple of catches though:

    - I can't have any custom software on the satellite servers, or do strange command-line things that auto-connect to them and send the files directly. I know this is the way these kinds of things are normally done, but the satellite servers are crappy shared hosting ones where I have absolutely no control over anything.
    - I need to send the files over FTP.
    - I also need to have, on my central server, a list of the files that are available on each of the satellite servers, to make sure they are ready before I send traffic to them.

    If I were to do this manually, the steps would be:

    1. Get the list of files on a satellite server.
    2. Compare to my own, and send the files that are missing.
    3. Get the list of files again, and store it in my central database.

    I'd like to know what tools are out there that can alleviate as much of this as possible: first the syncing, and then getting the list of files available on the other server. I'm going to be doing everything from PHP; I'm not sure if there are good tools to "use FTP from PHP", which I'm pretty sure I'll have to do, for step 3 at least (see the sketch below). Thanks in advance for any ideas! Daniel
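    For reference, the whole manual sequence maps onto PHP's built-in FTP functions; a sketch under assumptions (host, paths, and credentials are hypothetical, and $localFiles is the central server's own file list):

        $conn = ftp_connect('satellite1.example.com');   // hypothetical host
        ftp_login($conn, $ftpUser, $ftpPass);

        // step 1: what does the satellite already have?
        $remote = ftp_nlist($conn, '/files');

        // step 2: push whatever is missing
        foreach (array_diff($localFiles, $remote) as $missing) {
            ftp_put($conn, "/files/$missing", "/local/files/$missing", FTP_BINARY);
        }

        // step 3: re-list and store the result centrally
        $finalList = ftp_nlist($conn, '/files');
        ftp_close($conn);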


  • reading csv files in scipy/numpy in Python

    - by user248237
    I am having trouble reading a csv file, delimited by tabs, in Python. I use the following function:

        def csv2array(filename, skiprows=0, delimiter='\t', raw_header=False,
                      missing=None, with_header=True):
            """
            Parse a file name into an array. Return the array and additional
            header lines. By default, parse the header lines into dictionaries,
            assuming the parameters are numeric, using 'parse_header'.
            """
            f = open(filename, 'r')
            skipped_rows = []
            for n in range(skiprows):
                header_line = f.readline().strip()
                if raw_header:
                    skipped_rows.append(header_line)
                else:
                    skipped_rows.append(parse_header(header_line))
            f.close()
            if missing:
                data = genfromtxt(filename, dtype=None, names=with_header,
                                  deletechars='', skiprows=skiprows,
                                  missing=missing)
            else:
                if delimiter != '\t':
                    data = genfromtxt(filename, dtype=None, names=with_header,
                                      delimiter=delimiter, deletechars='',
                                      skiprows=skiprows)
                else:
                    data = genfromtxt(filename, dtype=None, names=with_header,
                                      deletechars='', skiprows=skiprows)
            if data.ndim == 0:
                data = array([data.item()])
            return (data, skipped_rows)

    The problem is that genfromtxt complains about my files, e.g. with the error:

        Line #27100 (got 12 columns instead of 16)

    I am not sure where these errors come from. Any ideas? Here's an example file that causes the problem:

        #Gene               120-1  120-3  120-4  30-1  30-3   30-4  C-1   C-2    C-5   genesymbol  genedesc
        ENSMUSG00000000001  7.32   9.5    7.76   7.24  11.35  8.83  6.67  11.35  7.12  Gnai3       guanine nucleotide binding protein alpha
        ENSMUSG00000000003  0.0    0.0    0.0    0.0   0.0    0.0   0.0   0.0    0.0   Pbsn        probasin

    Is there a better way to write a generic csv2array function? Thanks.
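    One hedged observation about the code above (an assumption from reading it, not a verified diagnosis): in the tab branch no delimiter is passed, so genfromtxt splits on runs of any whitespace, and the spaces inside fields like genedesc then change the column count. Passing the tab explicitly avoids that:

        data = genfromtxt(filename, dtype=None, names=with_header,
                          delimiter='\t', deletechars='', skiprows=skiprows)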


  • firefox reading web page from local JS file -- access to restricted URI denied, code: 1012, nsresult

    - by macias
    My problem is this: I have an html file which is really a JS program that reads web pages and shows them in a customized manner (i.e., it displays the same content in a different way). Basically, I create an XMLHttpRequest object and then:

        req.open("GET", web_page_address, false);
        req.send("");

    This gives me (in Firefox) an error:

        Error: uncaught exception: [Exception... "Access to restricted URI denied"
        code: "1012" nsresult: "0x805303f4 (NS_ERROR_DOM_BAD_URI)"

    I have already googled, and looked at SO, but all the other issues differ from mine in two respects: the file I open in Firefox is a local file opened directly in the browser (I don't have a www server running at localhost), and I don't have any control over the web pages I am reading stuff from. So several solutions I've seen so far (like adding a PHP proxy, or changing the way the external server sends data) cannot be applied here. What else can be done in such a case? Another question entirely: I am wondering if such strict security for a directly opened local file makes any sense. Thank you in advance for tips/links/etc. Have a nice day!


  • Hadoop/MapReduce: Reading and writing classes generated from DDL

    - by Dave
    Hi, can someone walk me through the basic workflow of reading and writing data with classes generated from DDL? I have defined some struct-like records using DDL. For example:

        class Customer {
            ustring FirstName;
            ustring LastName;
            ustring CardNo;
            long LastPurchase;
        }

    I've compiled this to get a Customer class and included it in my project. I can easily see how to use it as input and output for mappers and reducers (the generated class implements Writable), but not how to read and write it to a file. The JavaDoc for the org.apache.hadoop.record package talks about serializing these records in Binary, CSV or XML format. How do I actually do that? Say my reducer produces IntWritable keys and Customer values. What OutputFormat do I use to write the result in CSV format? What InputFormat would I use to read the resulting files in later, if I wanted to perform analysis over them?
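    For the standalone read/write part (not the InputFormat/OutputFormat question), the record package's own streams look roughly like this. A sketch built from that package's documented classes; treat the exact signatures as assumptions to verify against the JavaDoc:

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import org.apache.hadoop.record.CsvRecordInput;
        import org.apache.hadoop.record.CsvRecordOutput;

        public class CustomerCsvDemo {
            public static void main(String[] args) throws Exception {
                // writing: a generated record serializes itself through a RecordOutput
                Customer customer = new Customer();
                FileOutputStream out = new FileOutputStream("customers.csv");
                customer.serialize(new CsvRecordOutput(out));
                out.close();

                // reading back: the mirror-image RecordInput
                FileInputStream in = new FileInputStream("customers.csv");
                Customer loaded = new Customer();
                loaded.deserialize(new CsvRecordInput(in));
                in.close();
            }
        }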


  • I'm looking for a way to evaluate reading rate in several languages

    - by i30817
    I have software that is page-oriented instead of scrollbar-oriented, so I can easily count the words, but I'd like a way to filter outliers, and some default value for the text's language (which is known). The goal is to calculate the remaining time for the remaining text. I'm not sure what the best unit to use is. WPM (words per minute), as described at http://www.sfsu.edu/~testing/CalReadRate.htm, seems very fuzzy and human-oriented. Besides, I don't know how many "words" remain in the text. So I came up with this: the user is reading the text; the total text size in characters is known; their position in the text is known; so the remaining characters to read are also known. If a language has a median word length of, say, 5 characters, then if I had a WPM speed for the user, I could calculate the remaining time, as sketched below. Three things are needed for this:

    1. A table of the median word length per language.
    2. A table of the median WPM of a median user per language.
    3. Updating the WPM to fit the user as data becomes available, filtering outliers.

    However, I can't find these tables. And I'm not sure how precise it is to assume a median word length.
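    A minimal sketch of the estimate described above (every constant here is an illustrative assumption, not a measured value):

        # remaining_chars / median_word_length ~ words left; divide by WPM for minutes
        MEDIAN_WORD_LENGTH = {'en': 5.1, 'fi': 7.5}   # per-language table, values made up
        DEFAULT_WPM = {'en': 200.0, 'fi': 180.0}      # median-reader defaults, also made up

        def remaining_minutes(remaining_chars, lang, user_wpm=None):
            wpm = user_wpm if user_wpm else DEFAULT_WPM[lang]
            words_left = remaining_chars / MEDIAN_WORD_LENGTH[lang]
            return words_left / wpm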


  • reading non-english html pages with c#

    - by Gal Miller
    I am trying to find a string in Hebrew in a website. The reading code is attached. Afterwards I try to read the file using StreamReader, but I can't match strings in other languages. What am I supposed to do?

        // used on each read operation
        byte[] buf = new byte[8192];

        // prepare the web page we will be asking for
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.webPage.co.il");

        // execute the request
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();

        // we will read data via the response stream
        Stream resStream = response.GetResponseStream();

        string tempString = null;
        int count = 0;
        FileStream fileDump = new FileStream(@"c:\dump.txt", FileMode.Create);
        do
        {
            count = resStream.Read(buf, 0, buf.Length);
            fileDump.Write(buf, 0, buf.Length);
        } while (count > 0); // any more data to read?
        fileDump.Close();
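    Two hedged observations about the code above (read from the snippet, not tested against the real site): the loop writes the full 8 KB buffer even when fewer bytes were read, padding the dump with stale data, and Hebrew text needs an explicit Encoding when the bytes are turned back into strings (windows-1255 below is an assumption; check the page's declared charset).

        do
        {
            count = resStream.Read(buf, 0, buf.Length);
            fileDump.Write(buf, 0, count);   // write only the bytes actually read
        } while (count > 0);
        fileDump.Close();

        // read the dump back with the page's real encoding
        string page = File.ReadAllText(@"c:\dump.txt",
                                       Encoding.GetEncoding("windows-1255"));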


  • PHPExcel reading columns

    - by user1270150
    I am new to using PHPExcel and am struggling to implement a basic reader that reads only specified columns (with all rows) from a spreadsheet into an array, which I would like to present on a webpage. After reading the supplied documentation and examples, I am struggling to wrap my head around the implementation of a number of them, so any help would be much appreciated. I am using the following code to get the contents of the default worksheet into an array:

        for ($row = 2; $row <= $highestRow; $row++) {
            $dataRow = $objWorksheet->rangeToArray('A'.$row.':'.$highestColumn.$row, null, true, true, true);
            if ((isset($dataRow[$row]['A'])) && ($dataRow[$row]['A'] > '')) {
                ++$r;
                foreach ($headingsArray as $columnKey => $columnHeading) {
                    $columnHeading = rtrim($columnHeading);
                    $columnHeading = preg_replace('/\s+/', ' ', $columnHeading);
                    $columnHeading = preg_replace('/\ /', '_', $columnHeading);
                    $columnHeading = strtolower($columnHeading);
                    $namedDataArray[$r][$columnHeading] = $dataRow[$row][$columnKey];
                }
            }
        }

    The code above reads through all the columns and rows and builds an array, but I would like to be able to add a configuration array declaring the columns that should be read. I'm sure this is possible, so any help would be much appreciated. Thanks
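    One way that configuration array could look (a sketch against PHPExcel's cell API; the chosen column letters are hypothetical):

        $wantedColumns = array('A', 'C', 'F');   // the configuration array
        for ($row = 2; $row <= $highestRow; $row++) {
            $record = array();
            foreach ($wantedColumns as $col) {
                $record[$col] = $objWorksheet->getCell($col . $row)->getValue();
            }
            $namedDataArray[] = $record;
        }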


  • Swt file dialog too many files selected?

    - by InsertNickHere
    Hi there, the SWT file dialog gives me an empty result array if I select too many files (approx. 2500). The listing shows how I use this dialog. If I select too many sound files, the println will show 0; debugging tells me that the files array is empty in this case. Is there any way to get this to work?

        FileDialog fileDialog = new FileDialog(mainView.getShell(), SWT.MULTI);
        fileDialog.setText("Choose sound files");
        fileDialog.setFilterExtensions(new String[] { new String("*.wav") });
        Vector<String> result = new Vector<String>();
        fileDialog.open();
        String[] files = fileDialog.getFileNames();
        for (int i = 0, n = files.length; i < n; i++) {
            if (!files[i].contains(".wav")) {
                System.out.println(files[i]);
            }
            StringBuffer stringBuffer = new StringBuffer();
            stringBuffer.append(fileDialog.getFilterPath());
            if (stringBuffer.charAt(stringBuffer.length() - 1) != File.separatorChar) {
                stringBuffer.append(File.separatorChar);
            }
            stringBuffer.append(files[i]);
            stringBuffer.append("");
            String finalName = stringBuffer.toString();
            if (!finalName.contains(".wav")) {
                System.out.println(finalName);
            }
            result.add(finalName);
        }
        System.out.println(result.size());


  • Reading strings and integers from .txt file and printing output as strings only

    - by screename71
    Hello, I'm new to C++, and I'm trying to write a short C++ program that reads lines of text from a file, with each line containing one integer key and one alphanumeric string value (no embedded whitespace). The number of lines is not known in advance (i.e., keep reading lines until end of file is reached). The program needs to use the std::map data structure to store the integers and strings read from input (and to associate the integers with the strings). The program then needs to output the string values (but not the integer values) to standard output, one per line, sorted by integer key value (smallest to largest). So, for example, suppose I have a text file called "data.txt" which contains the following five lines:

        10 dog
        -50 horse
        0 cat
        -12 zebra
        14 walrus

    The output should then be:

        horse
        zebra
        cat
        dog
        walrus

    I've pasted below the progress I've made so far on my C++ program:

        #include <fstream>
        #include <iostream>
        #include <map>

        using namespace std;
        using std::map;

        int main()
        {
            string name;
            signed int value;
            ifstream myfile("data.txt");
            while (!myfile.eof())
            {
                getline(myfile, name, '\n');
                myfile >> value >> name;
                cout << name << endl;
            }
            return 0;
            myfile.close();
        }

    Unfortunately, this produces the following incorrect output:

        horse
        cat
        zebra
        walrus

    If anyone has any tips, hints, suggestions, etc. on changes and revisions I need to make to get it to work as needed, can you please let me know? Thanks!
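    A hedged sketch of the usual shape of the fix (not the assignment's official solution): let operator>> drive the loop instead of eof(), insert into the map, and rely on the map's key ordering for the sorted output.

        #include <fstream>
        #include <iostream>
        #include <map>
        #include <string>

        int main()
        {
            std::ifstream myfile("data.txt");
            std::map<int, std::string> entries;
            int key;
            std::string name;
            while (myfile >> key >> name)         // stops cleanly at end of file
                entries[key] = name;
            std::map<int, std::string>::const_iterator it;
            for (it = entries.begin(); it != entries.end(); ++it)
                std::cout << it->second << '\n';  // keys ascend, so values come out sorted
            return 0;
        }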


  • Memcache error: Failed reading line from stream (0) Array

    - by daviddripps
    I get some variation of the following error when our server gets put under any significant load. I've Googled for hours about it and tried everything (including upgrading to the latest versions and clean installs). I've read all the posts about it here on SA, but can't figure it out. A lot of people are having the same problem, but no one seems to have a definitive answer. Any help would be greatly appreciated. Thanks in advance.

        Fatal error: Uncaught exception 'Zend_Session_Exception' with message 'Zend_Session::start() -
        /var/www/trunk/library/Zend/Cache/Backend/Memcached.php(Line:180): Error #8
        Memcache::get() [memcache.get]: Server localhost (tcp 11211) failed with:
        Failed reading line from stream (0) Array

    We have a copy of our production environment for testing, and everything works great until we start load-testing. I think the biggest object stored is about 170KB, but it will probably be about 500KB when all is said and done (well below the 1MB limit). Just FYI: memcache gets hit about 10-20 times per page load. Here are the memcached settings:

        PORT="11211"
        USER="memcached"
        MAXCONN="1024"
        CACHESIZE="64"
        OPTIONS=""

    I'm running Memcache 1.4.5 with version 2.2.6 of the PHP memcache module. PHP is version 5.2.6. memcache details from php -i:

        memcache
        memcache support = enabled
        Active persistent connections = 0
        Version = 2.2.6
        Revision = $Revision: 303962 $

        Directive = Local Value = Master Value
        memcache.allow_failover = 1 = 1
        memcache.chunk_size = 8192 = 8192
        memcache.default_port = 11211 = 11211
        memcache.default_timeout_ms = 1000 = 1000
        memcache.hash_function = crc32 = crc32
        memcache.hash_strategy = standard = standard
        memcache.max_failover_attempts = 20 = 20

    Thanks everyone

