Search Results

Search found 40479 results on 1620 pages for 'binary files'.

Page 830 of 1620

  • Entities used to serialize data have changed. How can the serialized data be upgraded for the new entities?

    - by i8abug
    Hi, I have a bunch of simple entity instances that I have serialized to a file. I know that the structure of these entities will change in the future (e.g. maybe I will rename Name to Header or something). The thing is, I don't want to lose the data that I have saved in all these old files. What is the proper way to either (1) load the data from the old entities into new entities, or (2) upgrade the old files so that they can be used with the new entities? Note: I think I am stuck with binary serialization, not XML serialization. Thanks in advance!

    Edit: I have an answer for the case I have described. I can use a DataContractSerializer and do something like [DataMember(Name = "bar")] private string foo; i.e. change the member name in the code while keeping the name that was used for serialization. But what about the following additional cases: (1) new members that can be serialized have been added to the entity; (2) some serialized members that were in the original entity have been removed; (3) some members have actually changed in function (suppose the original class had FirstName and LastName members and it has been refactored to have only a FullName member which combines the two). To handle these, I need some sort of interpreter/translator deserialization class, but I have no idea what I should use.

    Read the article

  • Sockets and COBOL

    - by kati
    I have taken a job at a hospital which still uses COBOL for all organizational work. The whole database (now 20 terabytes, and a homebrew written in, guess what, COBOL) holds the data of every patient from the last 45 or so years. So that was my story. Now to my question: currently, all socket communication is (from what I've seen) implemented by COBOL programs writing their data into files. These files are then read by C++ programs (an additional module added in the late 1980s) and sent to the database over C++ sockets. This solution has stopped working because they are moving the database from COBOL to COBOL. Yes, they didn't use MySQL or the like; they implemented a new database, again in COBOL. I asked the guy who worked there before me (he's around 70 now) why the hell someone would do that, and he told me that he is so good at COBOL that he doesn't want to write it in any other language. So far so good. Now my question: how can I implement socket connections in COBOL? I need to create an interface to the external COBOL database located at, for example, 192.168.1.23:283.

    Read the article

  • Java socket bug on Linux (0xFF sent, -3 received)

    - by Marius
    While working on a WebSocket server in Java I came across this strange bug. I've reduced it down to two small Java files, one for the server and one for the client. The client simply sends 0x00, the string Hello and then 0xFF (per the WebSocket specification). On my Windows machine, the server prints the following:

        Listening
        byte: 0
        72 101 108 108 111 recieved: 'Hello'

    While on my Unix box the same code prints the following:

        Listening
        byte: 0
        72 101 108 108 111 -3

    Instead of receiving 0xFF it gets -3, never breaks out of the loop and never prints what it has received. The important part of the code looks like this:

        byte b = (byte)in.read();
        System.out.println("byte: "+b);
        StringBuilder input = new StringBuilder();
        b = (byte)in.read();
        while((b & 0xFF) != 0xFF){
            input.append((char)b);
            System.out.print(b+" ");
            b = (byte)in.read();
        }
        inputLine = input.toString();
        System.out.println("recieved: '" + inputLine+"'");
        if(inputLine.equals("bye")){
            break;
        }

    I've also uploaded the two files to my server: Server.java Client.java. My Windows machine is running Windows 7 and my Linux machine is running Debian.
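    A minimal sketch of the framing logic under discussion (assuming in is the socket's InputStream; the class and method names are illustrative only): working with the int returned by InputStream.read(), which is 0-255 for data and -1 only at end of stream, keeps the 0xFF terminator and a closed connection distinguishable without any byte casts.

        import java.io.*;
        import java.nio.charset.StandardCharsets;

        class FrameReader {
            // Reads one 0x00 <utf-8 text> 0xFF frame, as described in the question.
            static String readFrame(InputStream in) throws IOException {
                int first = in.read();                // 0..255, or -1 at end of stream
                if (first != 0x00) {
                    throw new IOException("Unexpected frame start: " + first);
                }
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                int b;
                while ((b = in.read()) != 0xFF) {     // compare the unsigned int value directly
                    if (b == -1) {                    // the peer closed the socket
                        throw new EOFException("Stream closed before the 0xFF terminator");
                    }
                    buf.write(b);
                }
                return new String(buf.toByteArray(), StandardCharsets.UTF_8);
            }
        }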

    Read the article

  • Random loss of precision in Python readline()

    - by jackyouldon
    Hi all, we have a process which takes a very large CSV file (1.6 GB) and breaks it down into pieces (in this case 3). This runs nightly and normally doesn't give us any problems. When it ran last night, however, the first of the output files had lost precision on the numeric fields in the data. The active ingredient in the script is these lines:

        while lineCounter <= chunk:
            oOutFile.write(oInFile.readline())
            lineCounter = lineCounter + 1

    and the normal output might be something like

        StringField1; StringField2; StringField3; StringField4; 1000000; StringField5; 0.000054454

    etc. On this one occasion, and in this one output file, the numeric fields were all output with 6 zeros at the end, i.e.

        StringField1; StringField2; StringField3; StringField4; 1000000.000000; StringField5; 0.000000

    We are using Python v2.6 (and don't want to upgrade unless we really have to), but we can't afford to lose this data. Does anyone have any idea why this might have happened? If readline is doing some kind of implicit conversion, is there a way to do a binary read, because we really just want this data to pass through untouched? It is very weird to us that this only affected one of the output files generated by the script, and when it was rerun the output was as expected. Thanks, Jack

    Read the article

  • Inconsistent canvas drawing in Android browser

    - by user2943466
    In putting together a small canvas app I've stumbled across a weird behavior that only seems to occur in the default browser on Android. When drawing to a canvas that has globalCompositeOperation set to 'destination-out' to act as the eraser tool, the Android browser sometimes acts as expected and sometimes does not update the pixels in the canvas at all. The setup:

        context.clearRect(0, 0, canvas.width, canvas.height);
        context.drawImage(img, 0, 0, canvas.width, canvas.height);
        context.globalCompositeOperation = 'destination-out';

    Drawing a circle to erase pixels from the canvas:

        context.fillStyle = '#FFFFFF';
        context.beginPath();
        context.arc(x, y, 25, 0, TWO_PI, true);
        context.fill();
        context.closePath();

    A small demo to illustrate the issue can be seen here: http://gumbojuice.com/files/source-out/ and the JavaScript is here: http://gumbojuice.com/files/source-out/js/main.js. This has been tested in multiple desktop and mobile browsers and behaves as expected. In the Android native browser, after refreshing the page it sometimes works and sometimes nothing happens. I've seen other hacks that move the canvas by a pixel in order to force a redraw, but this is not an ideal solution. Thanks all.

    Read the article

  • Unable to access index for repository error?

    - by Tommy O'Dell
    I've just created a package (RTIO) and a package repository (Q:/Integrated Planning/R), which is on a company network drive. I've put my package into the folder Q:/Integrated Planning/R/bin/windows/contrib/2.15.1/RTIO_0.1-2.zip. As per Dirk's instructions in this SO post, I've run the following commands:

        > setwd("Q:/Integrated Planning/R/bin/windows/contrib/2.15.1")
        > tools::write_PACKAGES(".", type="win.binary")
        > list.files()
        [1] "PACKAGES"       "PACKAGES.gz"    "RTIO_0.1-2.zip"
        >

    With the code below, I've added the local repository to my list of repos (and I'll get other users to do the same):

        options(repos = c(getOption("repos"), RioTintoIronOre = "Q:/Integrated Planning/R"))

    Now, trying to install my package, I get an error:

        > install.packages("RTIO")
        Installing package(s) into ‘C:/Program Files/R/R-2.15.1/library’
        (as ‘lib’ is unspecified)
        Warning in install.packages :
          unable to access index for repository Q:/Integrated Planning/R/bin/windows/contrib/2.15
        Warning in install.packages :
          unable to access index for repository Q:/Integrated Planning/R/bin/windows/contrib/2.15
        Warning in install.packages :
          unable to access index for repository Q:/Integrated Planning/R/bin/windows/contrib/2.15
        Warning in install.packages :
          package ‘RTIO’ is not available (for R version 2.15.1)

    What does "unable to access index for repository" tell me, and how can I fix it? What I'm really looking to do, under Windows and with RStudio as the IDE, is to let other internal R users add this package repo so that they're able to run commands like install.packages("RTIO") or update.packages() (and presumably use the IDE to manage packages via the GUI).

    Read the article

  • PHP memory: how much is too much?

    - by Rob
    I'm currently re-writing my site using my own framework (it's very simple and does exactly what I need; I've no need for something like Zend or CakePHP). I've done a lot of work in making sure everything is cached properly, caching pages in files to avoid SQL queries and generally limiting the number of SQL queries. Overall it looks like it's very speedy. The average time taken for the front page (taken over 100 runs) is 0.046152 seconds. But one thing I'm not sure about is whether I've done enough to reduce PHP memory usage. The only time I've ever encountered problems with it is when uploading large files. Using memory_get_peak_usage(TRUE), which I think returns the highest amount of memory used while the script has been running, the average (taken over 100 runs) is 1572864 bytes. Is that good? I realise you don't know what it is I'm doing (it's rather simple: get the 10 latest articles, the comment count for each, the user controls, popular tags in the sidebar, etc.). But would you be at all worried about a script using that sort of memory getting hit 50,000 times a day? Or once every second at peak times? I realise that this is a very open-ended question. Hopefully you can understand that it's a bit of a stab in the dark, and I'm really just looking for some reassurance that it's not going to die horribly come re-launch day.

    Read the article

  • AJAX Uploading - Not waiting for response before continuing

    - by waxical
    I'm using Blueimp's jQuery uploader (very good it is too, btw) and an S3 handler to upload files and then transfer them to S3 via the S3 API (from the PHP SDK). It works. The problem is, with large files (1 GB) it can take anything up to a few minutes to transfer (via create-object) onto S3. The PHP file that does this is hung up until the process is complete. The problem is that the uploader (which utilises the jQuery Ajax method) seems to give up waiting and start again every time. I thought this was related to the PHP INI setting 'max_input_time' or similar, as it seemed to wait around 60 seconds, though this now appears to vary. I have upped max_input_time and related settings in the PHP INI, but got no further. I've also considered (the more likely explanation) that the JavaScript, either in the script or in the jQuery method, has a timeout. The developer (blueimp) has said there's no such timeout in the front-end script, nor have I seen any, and though 'timeout' is referenced in the jQuery Ajax method options, it seems to affect the entire upload time rather than the wait for a response - so that's not much use. Any help or guidance gratefully received.

    Read the article

  • Problem monitoring a directory for file activity in VB.NET 2010

    - by Mike Cialowicz
    I'm trying to write a simple program to monitor a folder for new files in VB.NET 2010, and am having some trouble. Here's a simplified version of what my program looks like:

        Imports System.IO

        Public Class Main
            Public fileWatcher As FileSystemWatcher

            Sub btnGo_Click(sender As System.Object, e As System.EventArgs) Handles btnGo.Click
                '//# initialize my FileSystemWatcher to monitor a particular directory for new files
                fileWatcher = New FileSystemWatcher()
                fileWatcher.Path = thisIsAValidPath.ToString()
                fileWatcher.NotifyFilter = NotifyFilters.FileName
                AddHandler fileWatcher.Created, AddressOf fileCreated
                fileWatcher.EnableRaisingEvents = True
            End Sub

            Private Sub fileCreated(sender As Object, e As FileSystemEventArgs)
                '//# program does not exit when I comment the line below out
                txtLatestAddedFilePath.Text = e.FullPath
                '//# e.FullPath is valid when I set a breakpoint here, but when I step into the next line,
                '//# the program abruptly halts with no error code that I can see
            End Sub
        End Class

    As you can see, I have a button which initializes a FileSystemWatcher when clicked. The initialization works, and when I place a new file in the monitored directory, the program reaches the fileCreated sub. I can even see that e.FullPath is set correctly. However, it exits abruptly right after that with no error code (none that I can see, anyway). If I comment out everything in the fileCreated sub, the program continues running as expected. Any ideas as to why it's dying on me? Any help would be greatly appreciated. I'm fairly new to VS/VB.NET, so maybe I'm just making a silly mistake. Thanks!

    Read the article

  • How to debug properly and find causes for crashes?

    - by Newbie
    I don't know what to do anymore... it's hopeless. I'm getting tired of guessing what's causing the crashes. Recently I noticed that some OpenGL calls crash the program randomly on some graphics cards, so I am getting really paranoid about what can cause crashes now. The bad thing about this crash is that it happens only after a long time of using the program, so I can only guess what the problem is. I can't remember what changes I made to the program that may cause the crashes; it's been so long. But luckily the previous version doesn't crash, so I could just copy-paste some code and waste 10 hours to see at which point it starts crashing... I don't think I want to do that yet. The program crashes after I make it process the same files about 5 times in a row; each time it uses about 200 megabytes of memory in the process. It crashes at random times during and after the reading process. I have created a "safe" free() function: it checks the pointer, and if it is not NULL it frees the memory and then sets the pointer to NULL. Isn't this how it should be done? I watched the task manager memory usage, and just before it crashed it started to eat 2 times more memory than usual. Also, loading the files became exponentially slower each time; the first few loads didn't seem much slower than each other, but then the load times started rapidly doubling. What should this tell me about the crash? Also, do I have to manually free C++ vectors by using clear()? Or are they freed automatically, for example if I allocate a vector inside a function, will it be freed every time the function ends? I am not storing pointers in the vector. In short: I want to learn to catch the damn bugs as fast as possible. How do I do that? Using Visual Studio 2008.

    Read the article

  • PHP Mail() not working on remote server

    - by Amaerth
    I am developing an application and have been testing the mail() function in PHP. The following works just fine on my local machine to send emails to myself, but as soon as I try to send it from the testing environment to my local machine, it silently fails. I still get the "Mail Sent." message, but no message is sent. I turned on mail logging in the php.ini file, but even that doesn't seem to be populated after I refresh the page. Again, the .php files and php.ini files are identical in both environments. Port 25 has been opened on the testing environment, and we are using a Microsoft Exchange server.

        <?php
        $to = "[email protected]";
        $subject = "Test mail";
        $message = "Hello! This is a simple email message.";
        $from = "[email protected]";
        $headers = "From:" . $from;
        mail($to,$subject,$message,$headers);
        echo "Mail Sent.";
        ?>

    SMTP area of the php.ini file:

        [mail function]
        ; For Win32 only.
        ; http://php.net/smtp
        SMTP = exhange.server.org
        ; http://php.net/smtp-port
        smtp_port = 25
        ; For Win32 only.
        ; http://php.net/sendmail-from
        sendmail_from = [email protected]

    Read the article

  • Winlibre - An Aptitude-Synaptic for Windows. Would that be useful?

    - by acadavid
    Hi everyone. Last year, in GSoC 2009, I participated with an organization called Winlibre. The basic idea is a project similar to Aptitude (or apt-get) with a GUI like Synaptic, but for Windows, holding (initially) only open source software. The project was just OK; we finished what we considered a good starting point, but unfortunately, due to the developers' other commitments, the project has been idle almost since GSoC finished. Now I have some energy, time and interest to try to continue this development. The project was divided into 3 parts: a repository server (which I worked on, and which was going to store and serve packages and files), a package creator for developers, and the main app, which is the apt-get equivalent and its GUI. I have been thinking about the project, and the first question that came to my mind is: is this project actually useful for developers and Windows users? Keep in mind that the idea is to solve dependency problems and install packages "cleanly". I'm not a Windows developer, just a casual user, so I really don't have a lot of experience with how things are handled there, but as far as I have seen, all installers handle those dependencies themselves. Would Windows developers be willing to switch from installers to a package-based way of handling installations of open source software? Or is it enough to create packages for already existing installers? The packages concept is basically the same as .deb or .rpm files. I still have some other questions, but basically I would like to make sure that this is useful in some way to users and Windows developers, and that developers would find this project interesting. If you have any questions, feedback, suggestions or criticisms, please don't hesitate to post them. Thanks!!

    Read the article

  • How can I get back into my main processing thread?

    - by daveomcd
    I have an app in which I'm accessing a remote website with NSURLConnection to run some code and then save out some XML files. I am then accessing those XML files and parsing them for information. The process works fine except that my user interface isn't getting updated properly. I want to keep the user updated through my UILabel. I'm trying to update the text by using setBottomBarToUpdating:. It works the first time, when I set it to "Processing Please Wait..."; however, in connectionDidFinishLoading: it doesn't update. I'm thinking my NSURLConnection is running on a separate thread and my attempt with dispatch_get_main_queue to update on the main thread isn't working. How can I alter my code to resolve this? Thanks! [If I need to include more information/code just let me know!]

    myFile.m:

        NSLog(@"Refreshing...");
        dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            [self getResponse:@"http://mylocation/path/to/file.aspx"];
        });
        [self setBottomBarToUpdating:@"Processing Please Wait..."];
        queue = dispatch_queue_create("updateQueue", DISPATCH_QUEUE_CONCURRENT);

    and in connectionDidFinishLoading: I have:

        if ([response rangeOfString:@"Complete"].location == NSNotFound) {
            // failed
        } else {
            // success
            dispatch_async(dispatch_get_main_queue(), ^{
                [self setBottomBarToUpdating:@"Updating Contacts..."];
            });
            [self updateFromXMLFile:@"http://thislocation.com/path/to/file.xml"];
            dispatch_async(dispatch_get_main_queue(), ^{
                [self setBottomBarToUpdating:@"Updating Emails..."];
            });
            [self updateFromXMLFile:@"http://thislocation.com/path/to/file2.xml"];
        }

    Read the article

  • Import Excel into Rails app

    - by Jack
    Hi, I am creating a small Rails app for personal use and would like to be able to upload Excel files to later be validated and added to the database. I had this working previously with CSV files, but that has since become impractical. Does anyone know of a tutorial for using the roo or spreadsheet gem to upload the file, display the contents to the user and then add them to the database (after validating)? I know this is quite specific, but I want to work through it step by step. All I have so far is an 'import' view:

        <% form_for :dump, :url=>{:controller=>"students", :action=>"student_import"}, :html => { :multipart => true } do |f| -%>
          Select an Excel File :
          <%= f.file_field :excel_file -%>
          <%= submit_tag 'Submit' -%>
        <% end -%>

    But I have no idea how to access this uploaded file in the controller. Any suggestions/help would be welcome. Thanks

    Read the article

  • Migrate from Oracle to MySQL

    - by Cassy
    Hi all. We ran into serious performance problems with our Oracle database and would like to try to migrate to a MySQL-based database (either MySQL directly or, preferably, Infobright). The thing is, we need to let the old and the new system overlap for at least some weeks, if not months, before we actually know whether all features of the new database match our needs. So, here is our situation:

    The Oracle database consists of multiple tables, each with millions of rows. During the day, there are literally thousands of statements, which we cannot stop for the migration. Every morning, new data is imported into the Oracle database, replacing some thousands of rows. Copying this process is not a problem, so we could, in theory, import into both databases in parallel. But, and here lies the challenge, for this to work we need an export from the Oracle database with a consistent state from a single day. (We cannot export some tables on Monday and some others on Tuesday, etc.) This means that at least the export should finish in less than one day.

    Our first thought was to dump the schema, but I wasn't able to find a tool to import an Oracle dump file into MySQL. Exporting the tables as CSV files might work, but I'm afraid it could take too long. So my question now is: what should I do? Is there any tool to import Oracle dump files into MySQL? Does anybody have any experience with such a large-scale migration? Thanks in advance, Cassy. PS: Please don't suggest performance optimization techniques for Oracle, we already tried a lot :-)

    Read the article

  • Java: redirected System output to a JTextArea doesn't update until the calculation is finished

    - by user1806716
    I have code that redirects System output to a JTextArea, but the text area doesn't update until the code has finished running. How do I modify the code to make the JTextArea update in real time while the program is running?

        private void updateTextArea(final String text) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    consoleTextAreaInner.append(text);
                }
            });
        }

        private void redirectSystemStreams() {
            OutputStream out = new OutputStream() {
                @Override
                public void write(int b) throws IOException {
                    updateTextArea(String.valueOf((char) b));
                }

                @Override
                public void write(byte[] b, int off, int len) throws IOException {
                    updateTextArea(new String(b, off, len));
                }

                @Override
                public void write(byte[] b) throws IOException {
                    write(b, 0, b.length);
                }
            };

            System.setOut(new PrintStream(out, true));
            System.setErr(new PrintStream(out, true));
        }

    The rest of the code is mainly just an action listener for a button:

        private void updateButtonActionPerformed(java.awt.event.ActionEvent evt) {
            // TODO add your handling code here:
            String shopRoot = this.shopRootDirTxtField.getText();
            String updZipPath = this.updateZipTextField.getText();
            this.mainUpdater = new ShopUpdater(new File(shopRoot), updZipPath);
            this.mainUpdater.update();
        }

    That update() method starts copying files on the file system, and during that process it uses System.out.println to report where the program currently is and how many more files it has to copy.
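    A minimal sketch of one common fix, assuming ShopUpdater.update() is safe to run off the Swing thread (ShopUpdater and the text field names are taken from the question): run the long copy on a background thread with SwingWorker, so the event dispatch thread stays free to append to and repaint the text area while output arrives.

        // Sketch only, not the asker's full handler.
        private void updateButtonActionPerformed(java.awt.event.ActionEvent evt) {
            final String shopRoot = this.shopRootDirTxtField.getText();
            final String updZipPath = this.updateZipTextField.getText();

            new javax.swing.SwingWorker<Void, Void>() {
                @Override
                protected Void doInBackground() throws Exception {
                    // Runs on a worker thread. The redirected System.out still hops
                    // back to the EDT via SwingUtilities.invokeLater, and the EDT is
                    // no longer blocked, so the JTextArea repaints as lines arrive.
                    new ShopUpdater(new java.io.File(shopRoot), updZipPath).update();
                    return null;
                }
            }.execute();
        }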

    Read the article

  • How does a Java compiler resolve a non-imported name?

    - by gexicide
    Suppose I use a type X in my Java compilation unit from package foo.bar, where X is neither defined in the compilation unit itself nor directly imported. How does a Java compiler resolve X efficiently? There are a few possibilities for where X could reside:

    1. X might be imported via a star import a.b.*
    2. X might reside in the same package as the compilation unit
    3. X might be a language type, i.e. reside in java.lang

    The problem I see is especially (2). Since X might be a package-private type, it is not even required that X resides in a compilation unit named X.java. Thus, the compiler must look into all entries of the class path, search for any classes in package foo.bar, and read every class in foo.bar to check whether X is included. That sounds very expensive. Especially when I compile only a single file, the compiler has to read dozens of class files only to find a type X. If I use a lot of star imports, this procedure has to be repeated for a lot of types (although class files won't be read twice, of course). So is it advisable to also import types from the same package to speed up the compilation process? Or is there a faster method for resolving an unimported type X which I was not able to find?
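    An illustrative pair of files (hypothetical names, compiled together with something like javac foo/bar/*.java) showing case (2): a package-private type declared in a source file whose name does not match the type still resolves with no import at all, which is why the compiler cannot rely on file names alone.

        // File foo/bar/Helpers.java -- note there is no X.java anywhere
        package foo.bar;

        class X {                          // package-private, visible only inside foo.bar
            static final int VALUE = 42;
        }

        // File foo/bar/Client.java
        package foo.bar;

        public class Client {
            public static void main(String[] args) {
                System.out.println(X.VALUE);   // X resolves via the enclosing package,
            }                                  // with no import statement at all
        }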

    Read the article

  • How to use XML namespace prefixes without xmlns="..." everywhere? (.NET)

    - by LonelyPixel
    The subject is probably too short to explain it... I'm writing out XML files with no namespace stuff at all, for some application. That part I cannot change. But now I'm going to extend those files with my own application-defined element names, and I'd like to put them in a different namespace. For this, the result should look like this:

        <doc xmlns:x="urn:my-app-uri">
          <a>existing element name</a>
          <x:addon>my additional element name</x:addon>
        </doc>

    I've used an XmlNamespaceManager and added my URI with the prefix "x" to it. I've also passed it to each CreateElement call for my additional element names. But the nearest I can get is this:

        <doc>
          <a>existing element name</a>
          <addon xmlns="urn:my-app-uri">my additional element name</addon>
        </doc>

    Or maybe also:

        <x:addon xmlns:x="urn:my-app-uri">my additional element name</x:addon>

    So the point is that my URI is written to every single one of my additional elements, and no common prefix is written to the document element where I'd like to have it. How can I get the above XML result with .NET?

    Read the article

  • Directory traversal in Java using regular vs. enhanced for loops

    - by user3245621
    I have this code to print out all directories and files. I tried to use a recursive method call in a for loop. With the enhanced for loop, the code prints out all the directories and files correctly. But with the regular for loop, the code does not work. I am puzzled by the difference between the regular and enhanced for loops.

        import java.io.File;

        public class FileCopy {
            private File[] childFiles = null;

            public static void main(String[] args) {
                FileCopy fileCopy = new FileCopy();
                File srcFile = new File("c:\\temp");
                fileCopy.copyTree(srcFile);
            }

            public void copyTree(File file) {
                if (file.isDirectory()) {
                    System.out.println(file + " is a directory. ");
                    childFiles = file.listFiles();
                    /*for(int j=0; j<childFiles.length; j++){
                        copyTree(childFiles[j]);
                    } This part is not working*/
                    for (File a : childFiles) {
                        copyTree(a);
                    }
                    return;
                } else {
                    System.out.println(file + " is a file. ");
                    return;
                }
            }
        }
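    A likely cause is that childFiles is an instance field shared by every recursive call: the indexed loop re-reads childFiles and childFiles.length on each iteration, so once a recursive call has overwritten the field with a subdirectory's listing, the loop is walking the wrong array, while the enhanced for loop evaluates childFiles once and keeps iterating over the original array. A sketch of the same traversal with a local variable (illustrative class name), under which both loop styles behave identically:

        import java.io.File;

        // Sketch: keep each directory's listing in a local variable so recursion
        // cannot overwrite it between iterations of the indexed loop.
        public class FileWalker {
            public static void main(String[] args) {
                copyTree(new File("c:\\temp"));
            }

            public static void copyTree(File file) {
                if (file.isDirectory()) {
                    System.out.println(file + " is a directory.");
                    File[] children = file.listFiles();   // local, not an instance field
                    if (children == null) {               // listFiles() returns null on I/O errors
                        return;
                    }
                    for (int j = 0; j < children.length; j++) {
                        copyTree(children[j]);            // the indexed loop now works too
                    }
                } else {
                    System.out.println(file + " is a file.");
                }
            }
        }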

    Read the article

  • How to make Finder 'Open With' work for my application (XCode, OS X)?

    - by Adion
    I have created an application that is capable of playing audio files. This in itself works fine, and so does drag & drop from Finder to my application. What I would like as well is for people to be able to open files in my application from Finder using the Open With menu (or even set my application as the default for a certain file type). After a lot of searching, I found that I should configure a document type in Xcode (editing information property lists). I successfully added such a type named 'Music File' with UTI 'public.mp3'. When I now right-click an MP3 file, my application is listed in the 'Open With' menu. Trying to use it, my app opens, but I get a warning message saying "The document could not be opened. App cannot open files in the 'Music File' format". The file doesn't appear to be passed on the command line as is the case on Windows. My application does support drag & drop from Finder, and this is working fine. I don't really know where to look next, so it would be great if anyone could point me in the right direction. My application isn't using NSDocument, so I think the 'Class' field doesn't apply to me (and according to the docs this field isn't required, but they don't say how to handle opening without a Class).

    Read the article

  • jQuery Code Only Fires On Hard Refresh?

    - by Rob Vanders
    The XFBML version of the Facebook Registration plugin only loads over HTTPS. I need it to load over HTTP so my form does not trigger a security error about a mismatch between domains. I wrote this code to get the iframe's src and rewrite it without HTTPS. It works fine on the first load; however, in Chrome and Safari it only runs on the first load and on hard refreshes. It does not run on standard reloads or when pressing Enter in the address bar. Here is the code:

        $(window).load(function () {
            // Replace HTTPS with HTTP when the frame has loaded
            $(".subscribe iframe").each(function(){
                var source = $(this).attr("src");
                //alert(source);
                var sourceNew = source.replace("https", "http"); // change https to http
                alert(sourceNew);
                $(this).attr("src", sourceNew);
            });
        });

    I have .htaccess set to disable server caching:

        <Files *>
            Header set Cache-Control: "private, pre-check=0, post-check=0, max-age=0"
            Header set Expires: 0
            Header set Pragma: no-cache
        </Files>

    What is causing this to not fire reliably? Thanks

    Read the article

  • How to approach socket programming between C# -> Java (Android)

    - by Alex
    I've recently put together a server/client app for Windows and Android that allows one to send a file from Windows to an Android phone over a socket connection. It works great for a single file, but trying to send multiple files over a single stream is causing me problems. I've also realised that, aside from the binary data, I will need to send messages over the socket to indicate error states and other application messages. I have little experience with network programming and am wondering what the best way forward is. Basically, the C# server side of the app just goes into a listening state and uses Socket.SendFile to transmit the file. On Android I use the standard Java Socket.getInputStream() to receive the file. That works great for a single file transfer, but how should I handle multiple files and error/messaging information? Do I need to use a different socket for each file? Should I be using a higher-level framework to handle this, or can I send everything over the single socket? Any other suggestions for frameworks or learning materials?
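    One common pattern, shown here only as a sketch of the receiving (Android/Java) side with made-up type codes and field sizes, is to frame every message on the single socket as [1-byte type][8-byte length][payload], so file contents and error/status messages can share one stream without extra sockets:

        import java.io.*;

        // Sketch of a receiver for a simple framed protocol over one socket.
        public class FrameReceiver {
            static final int TYPE_FILE = 1;      // payload is raw file bytes
            static final int TYPE_MESSAGE = 2;   // payload is a UTF-8 status/error string

            public static void receive(InputStream socketIn, File destDir) throws IOException {
                DataInputStream in = new DataInputStream(new BufferedInputStream(socketIn));
                while (true) {
                    int type = in.read();               // -1 once the sender closes the socket
                    if (type == -1) break;
                    long length = in.readLong();        // number of payload bytes that follow
                    if (type == TYPE_FILE) {
                        File out = File.createTempFile("recv", ".bin", destDir);
                        try (OutputStream fileOut = new FileOutputStream(out)) {
                            byte[] buf = new byte[8192];
                            long remaining = length;
                            while (remaining > 0) {
                                int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                                if (n == -1) throw new EOFException("Socket closed mid-file");
                                fileOut.write(buf, 0, n);
                                remaining -= n;
                            }
                        }
                    } else {                            // TYPE_MESSAGE (or unknown): read as text
                        byte[] msg = new byte[(int) length];
                        in.readFully(msg);
                        System.out.println("Peer says: " + new String(msg, "UTF-8"));
                    }
                }
            }
        }

    The sending side would write the same header before each file or message, and either side can add new message types later without breaking the framing.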

    Read the article

  • Android SDK and Java

    - by Soonts
    The Android SDK Manager complains "WARNING: Java not found in your path". Instead of using the information from the Windows registry, the software tries to find Java in the default installation folders and fails (I don't install software in Program Files because I don't like space characters in my paths). Of course I know how to modify the %PATH% environment variable. The question is: which Java does it need? After installing the latest JDK, I've got 4 distinct versions of the java.exe file, in the following 4 folders: system32, jre6\bin, jdk1.6.0_26\bin, and jdk1.6.0_26\jre\bin. Sizes range from 145184 to 171808 bytes. All of them print version “1.6.0_26” when launched with the “-version” argument. The one in system32 has .exe version “6.0.250.6”; the rest are “6.0.260.3”. All 4 files are different (I've calculated the MD5 checksums).

    Q1. Which folder should I add to %PATH% to make the Android SDK happy?
    Q2. Why does Oracle build that many variants of java.exe of the same version for the same platform?

    Thanks in advance! P.S. I'm using Windows 7 SP1 x64 Home Premium, and downloaded the 64-bit version of the JDK, jdk-6u26-windows-x64.exe.

    Read the article

  • Fast and efficient way to read a space separated file of numbers into an array?

    - by John_Sheares
    I need a fast and efficient method to read a space-separated file of numbers into an array. The files are formatted this way:

        4 6
        1 2 3 4 5 6
        2 5 4 3 21111 101
        3 5 6234 1 2 3
        4 2 33434 4 5 6

    The first row is the dimension of the array [rows columns]. The lines following it contain the array data. The data may also be formatted without any newlines, like this:

        4 6 1 2 3 4 5 6 2 5 4 3 21111 101 3 5 6234 1 2 3 4 2 33434 4 5 6

    I can read the first line and initialize an array with the row and column values. Then I need to fill the array with the data values. My first idea was to read the file line by line and use the split function. But the second format gives me pause, because the entire array data would be loaded into memory all at once. Some of these files are in the hundreds of MBs. The second method would be to read the file in chunks and parse them piece by piece. Maybe somebody else has a better way of doing this?
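    A sketch of the streaming idea (the question doesn't name a language, so Java is assumed here): a StreamTokenizer treats any whitespace, including newlines, as a separator, so both layouts above parse identically while only a small buffer of text is held in memory at a time.

        import java.io.*;

        // Sketch: read "rows cols" followed by rows*cols whitespace-separated numbers.
        public class MatrixReader {
            public static double[][] read(File file) throws IOException {
                try (Reader r = new BufferedReader(new FileReader(file))) {
                    StreamTokenizer tok = new StreamTokenizer(r);   // parses numbers by default
                    tok.nextToken();
                    int rows = (int) tok.nval;                      // first value: row count
                    tok.nextToken();
                    int cols = (int) tok.nval;                      // second value: column count
                    double[][] data = new double[rows][cols];
                    for (int i = 0; i < rows; i++) {
                        for (int j = 0; j < cols; j++) {
                            if (tok.nextToken() == StreamTokenizer.TT_EOF) {
                                throw new EOFException("File ended before the array was full");
                            }
                            data[i][j] = tok.nval;
                        }
                    }
                    return data;
                }
            }
        }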

    Read the article

  • Elegant ways to print out a bunch of instance attributes in python 2.6?

    - by wds
    First, some background. I'm parsing a simple file format and wish to re-use the results in Python code later, so I made a very simple class hierarchy and wrote the parser to construct objects from the records in the original text files I'm working from. At the same time I'd like to load the data into a legacy database, whose loader files take a simple tab-separated format. The most straightforward way would be to just do something like:

        print "%s\t%s\t...." % (record.id, record.attr1, len(record.attr1), ...)

    Because there are so many columns to print out, though, I thought I'd use the Template class to make it a bit easier to see what's what, i.e.:

        templ = Template("$id\t$attr1\t$attr1_len\t...")

    And I figured I could just use the record in place of the mapping used by a substitute call, with some additional keywords for derived values:

        print templ.substitute(record, attr1_len=len(record.attr1), ...)

    Unfortunately this fails, complaining that the record instance does not have an attribute __getitem__. So my question is twofold:

    1. Do I need to implement __getitem__, and if so, how?
    2. Is there a more elegant way to do something like this, where you just need to output a bunch of attributes you already know the names for?

    Read the article
