Search Results

Search found 1639 results on 66 pages for 'csv'.

  • python django automated data addition

    - by zubin71
    I have a script which reads data from a CSV file. I need to store that data in a database whose tables have already been created with "python manage.py syncdb", so that data entry can be automated rather than done by hand in the Django shell.
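
    A minimal sketch of one way to automate this with a custom management command, assuming a hypothetical app "inventory" with a model Record(name, value); the app name, model and CSV field names are placeholders:

        # inventory/management/commands/import_csv.py  (hypothetical app name)
        import csv

        from django.core.management.base import BaseCommand
        from inventory.models import Record  # hypothetical model

        class Command(BaseCommand):
            help = "Load rows from a CSV file into the Record table"

            def add_arguments(self, parser):
                parser.add_argument("csv_path")

            def handle(self, *args, **options):
                with open(options["csv_path"], newline="") as f:
                    for row in csv.DictReader(f):
                        # Assumes the CSV header names match the model fields.
                        Record.objects.create(name=row["name"], value=row["value"])

    It can then be run non-interactively as "python manage.py import_csv data.csv".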

  • iPhone Network Share Access

    - by user361988
    I am trying to download a data file from a local network share to an iPhone. I have placed the file on a computer on the network and can view it through browsers such as Chrome or Mozilla from any computer on the local network. However, Safari on a Mac and the iPhone do not find the file! An example of the URL I use is 'file://computer/SharedDocs/file.csv'. Why do Safari and the iPhone fail to find the file?

  • Webmail Contact List Importer

    - by Andy Jarrett
    Does anyone know of any webmail contact-list importer scripts (ColdFusion, PHP, etc.) like those used on Twitter and LinkedIn? I've found some, but they are paid-for and I want something more bespoke and open. To clarify a little more: I'm not looking for a way to process .csv files :) I'm looking for a bit of code that can log into Gmail, Yahoo Mail, Hotmail, or AOL and pull out the user's address book.

  • [bash] files indexed by production date

    - by caas
    Each day an application creates a file called file_YYYYMMDD.csv, where YYYYMMDD is the production date. But sometimes the generation fails and no files are generated for a couple of days. I'd like an easy way, in a bash or sh script, to find the filename of the most recent file produced on or before a given reference date. Typical usage: find the last generated file, disregarding those produced after May 1st. Thanks for your help.
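
    Since YYYYMMDD dates sort the same lexicographically and chronologically, the lookup is a filter plus a max. A minimal sketch in Python, assuming the files sit in one directory and the reference date is given as a YYYYMMDD string:

        import glob
        import re

        def latest_on_or_before(ref_date, directory="."):
            """Return the newest file_YYYYMMDD.csv dated on or before ref_date."""
            dates = []
            for path in glob.glob(directory + "/file_*.csv"):
                m = re.search(r"file_(\d{8})\.csv$", path)
                if m and m.group(1) <= ref_date:  # string compare == date compare
                    dates.append(m.group(1))
            return "file_%s.csv" % max(dates) if dates else None

        print(latest_on_or_before("20100501"))  # last file up to May 1st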

  • Save File using Greasemonkey

    - by Kris Erickson
    I have some screen-scraped tabular data that I want to export to a CSV file (currently I am just placing it on the clipboard). Is there any way to do this in Greasemonkey? Any suggestions on where to look for a sample, or for documentation on this kind of functionality?

  • Application security issues to consider

    - by user279521
    I am working on the design of a high-security application (involving financial information, personal information, etc.). I need to identify what security measures will be implemented at the application level. The application will involve sending data to and from a database, user login, import/export of CSV and TXT files, and a print function. What security features do I need to consider for such an application (SQL injection protection, for starters)?
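
    On the SQL injection point, the standard defence is parameterized queries, where the driver binds values instead of splicing them into the SQL text. A minimal, self-contained sketch in Python with sqlite3 (the table is a stand-in; the same pattern applies to any DB-API driver):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT, balance REAL)")
        conn.execute("INSERT INTO accounts VALUES (1, 'alice', 10.0)")

        hostile = "alice'; DROP TABLE accounts; --"  # injection attempt

        # The ? placeholder binds the value; it is never parsed as SQL.
        rows = conn.execute(
            "SELECT id, balance FROM accounts WHERE owner = ?",
            (hostile,),
        ).fetchall()
        print(rows)  # [] -- the hostile string matched nothing and ran nothing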

  • Good way to deal with comma-separated values in Oracle

    - by dmitry
    I am getting comma-separated values passed into a stored procedure in Oracle. I want to treat these values as a table so that I can use them in a query like:

        select * from tabl_a where column_b in (<csv values passed in>)

    What is the best way to do this in 11g? Right now we loop through the values one by one and insert them into a global temporary table, which I think is inefficient. Any pointers?

  • Use Awk to Print every character as its own column?

    - by wizkid84
    Hi there, I am in need of reorganizing a large CSV file. The first column, which is currently a 6-digit number, needs to be split into single digits, using commas as the field separator. For example, I need this:

        022250,10:50 AM,274,22,50
        022255,11:55 AM,275,22,55

    turned into this:

        0,2,2,2,5,0,10:50 AM,274,22,50
        0,2,2,2,5,5,11:55 AM,275,22,55

    Let me know what you think! Thanks!
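
    In awk this is usually done by rewriting the first field in place (e.g. gsub(/./, "&,", $1) with FS=OFS="," and then trimming the trailing comma). The same transformation as a minimal Python sketch, reading stdin and writing stdout:

        import csv
        import sys

        reader = csv.reader(sys.stdin)
        writer = csv.writer(sys.stdout)
        for row in reader:
            if row:  # tolerate blank lines
                # Expand "022250" into "0","2","2","2","5","0"; keep the rest.
                row = list(row[0]) + row[1:]
            writer.writerow(row)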

  • JAXB to generate the XML directly to the OutputStream

    - by sonu
    Hi, I have a 500 MB CSV file. I need to convert it into an XML file. I am using JAXB to create the XML. It works fine for a small amount of data, but for a large amount, say 300 MB, it throws an out-of-memory exception. Can anyone tell me how I can create each element and write it to the file without building the whole tree with JAXB? Thanks, Sonu
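
    With JAXB itself, the usual cure is to marshal one row object at a time as a fragment (Marshaller.JAXB_FRAGMENT) onto a shared output stream, rather than building one giant root object. As a language-neutral sketch of that streaming idea, here is a minimal Python version that writes one element per CSV row and never holds the whole document in memory (file and element names are placeholders):

        import csv
        from xml.sax.saxutils import escape

        with open("data.csv", newline="") as src, open("data.xml", "w") as dst:
            dst.write('<?xml version="1.0"?>\n<rows>\n')
            for row in csv.reader(src):
                # One element is built and written per row; nothing accumulates.
                cells = "".join("<cell>%s</cell>" % escape(c) for c in row)
                dst.write("  <row>%s</row>\n" % cells)
            dst.write("</rows>\n")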

  • Pushing a mail include to a java program

    - by Marthin
    Hi, I have a Windows server that receives mail. Each of these mails contains a single CSV file. I want the server to automatically take the attachment from any incoming mail and pass it to a locally installed Java program. Can anyone give me directions to any programs that handle this, or do I need to create some kind of Windows service? Thankful for any help!
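
    Short of a full Windows service, a small polling script on a scheduled task can do the gluing. A sketch in Python, assuming the mailbox is reachable over IMAP and the Java program takes the CSV path as an argument (host, credentials and jar name are placeholders):

        import email
        import imaplib
        import subprocess

        imap = imaplib.IMAP4_SSL("mail.example.local")  # placeholder host
        imap.login("user", "password")                  # placeholder credentials
        imap.select("INBOX")

        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            for part in msg.walk():
                name = part.get_filename()
                if name and name.lower().endswith(".csv"):
                    with open(name, "wb") as f:
                        f.write(part.get_payload(decode=True))
                    # Hand the saved file to the locally installed Java program.
                    subprocess.run(["java", "-jar", "processor.jar", name])
        imap.logout()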

  • Bulk Compare, Report, Update

    - by Tim Donaldon
    I need to import either a CSV or an Excel file into a database. The column headers will match, but I want to compare the file against the database using an ItemID field, list the rows that would be affected along with the differences, and then allow an update to all the rows with a matching ID.
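
    A minimal sketch of that compare-report-update flow in Python with csv and sqlite3, assuming a table items(ItemID, price) and a CSV with matching headers (table, column and file names are placeholders):

        import csv
        import sqlite3

        conn = sqlite3.connect("app.db")  # placeholder database
        changes = []
        with open("incoming.csv", newline="") as f:
            for row in csv.DictReader(f):
                hit = conn.execute("SELECT price FROM items WHERE ItemID = ?",
                                   (row["ItemID"],)).fetchone()
                if hit and str(hit[0]) != row["price"]:
                    changes.append((row["ItemID"], hit[0], row["price"]))

        # Report the differences, then apply them only on confirmation.
        for item_id, old, new in changes:
            print("%s: %s -> %s" % (item_id, old, new))
        if changes and input("Apply updates? [y/N] ") == "y":
            conn.executemany("UPDATE items SET price = ? WHERE ItemID = ?",
                             [(new, item_id) for item_id, _, new in changes])
            conn.commit()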

  • What is the XML spec for importing into Microsoft Project?

    - by montek
    From our existing internal tracking system, I would like to create an XML export that I can then bring into Microsoft Project 2007 for further display and manipulation. I've been unable to find a straightforward explanation of how the XML should look for this kind of import. I want to be able to create dependencies, assign resources, etc. The Excel/CSV imports don't appear to offer all of these capabilities, so I think XML is the better way... if I could just get a spec for it.

  • VBScript Issue Help Required.

    - by MalsiaPro
    I need a script that can run against any drive on a Windows operating system (Windows Server 2003) and list every file and folder, reporting the following fields for each:

        Full file path (e.g. C:\Documents and Settings\user\My Documents\testPage.doc)
        File type (e.g. Word document, spreadsheet, database, etc.)
        Size
        When created
        When last modified
        When last accessed

    The server is quite big and is within our domain. The script will also need to write that data to a CSV file, which later on I can modify and process in Excel. I can imagine that this data will be huge, but I still need it. I am logged in as an administrator on the server, and the script will need to process protected files too; in previous posts I have read that a script will stop when it hits such files, and I need to make sure that not a single file is skipped. Please note I have asked this question before but still have not got a working script. This is what I have so far, in Test.vbs:

        Set objFS = CreateObject("Scripting.FileSystemObject")

        ' CSV header row
        WScript.Echo Chr(34) & "Full Path" & Chr(34) & "," & _
                     Chr(34) & "File Size" & Chr(34) & "," & _
                     Chr(34) & "File Date Modified" & Chr(34) & "," & _
                     Chr(34) & "File Date Created" & Chr(34) & "," & _
                     Chr(34) & "File Date Accessed" & Chr(34)

        Set objArgs = WScript.Arguments
        strFolder = objArgs(0)
        Set objFolder = objFS.GetFolder(strFolder)
        Go objFolder

        Sub Go(objDIR)
            ' Keep going past protected folders instead of aborting the run.
            On Error Resume Next
            For Each eFolder In objDIR.SubFolders
                ' Compare the folder name, not the Folder object itself.
                If eFolder.Name <> "System Volume Information" Then
                    Go eFolder
                End If
            Next
            For Each strFile In objDIR.Files
                WScript.Echo Chr(34) & strFile.Path & Chr(34) & "," & _
                             Chr(34) & strFile.Size & Chr(34) & "," & _
                             Chr(34) & strFile.DateLastModified & Chr(34) & "," & _
                             Chr(34) & strFile.DateCreated & Chr(34) & "," & _
                             Chr(34) & strFile.DateLastAccessed & Chr(34)
            Next
        End Sub

    I am currently running it from the command line as:

        cscript //nologo Test.vbs "c:\" > "C:\test\Output.csv"

    As originally posted, the script was not working: the System Volume Information check compared a Folder object against a bare name (so it never excluded anything), and the first access-denied folder halted the whole run. The comments above mark the two corresponding fixes.

  • fastest way to upload an xls file into a database

    - by shmichael
    I have an xls file with ~60 sheets of data. I would like to move them into a database (Postgres) such that each sheet's data is stored in a different table. What is the fastest way of creating these tables? I don't care about naming or proper typing of the columns; they could all be strings for that matter. I just don't want to run 60 separate CSV uploads.
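
    A minimal sketch with pandas and SQLAlchemy, assuming all-text columns are acceptable (the connection string and file name are placeholders):

        import pandas as pd
        from sqlalchemy import create_engine

        engine = create_engine("postgresql://user:pass@localhost/mydb")  # placeholder

        # sheet_name=None loads every sheet into a dict of DataFrames.
        sheets = pd.read_excel("workbook.xls", sheet_name=None, dtype=str)
        for name, frame in sheets.items():
            # One table per sheet, named after the sheet tab.
            frame.to_sql(name, engine, if_exists="replace", index=False)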

  • Merge two data frames together that have the same variable names and data types

    - by Brandon
    I have tried the merge function to merge two CSV files that I imported. They both have the same variable names and data types, but each time I run merge, all I get is an object that contains the names of the two data frames. I have tried the following:

        # ex1
        obj <- merge(obj1, obj2, by=obj)
        # ex2
        obj <- merge(obj1, obj2, all)

    and several other iterations of the above. Is merge the correct function? If so, what am I doing wrong?

  • Match a comma followed by a newline with a regular expression

    - by MarathonStudios
    I have a comma-delimited list I want to import into a database, and in some cases the last item is blank:

        item1, item2, item3
        item1, item2,
        item1, item2,

    I'd like to replace all of these empty columns with a placeholder value using a regexp:

        item1, item2, item3
        item1, item2, PLACEHOLDER
        item1, item2, PLACEHOLDER

    I tried preg_replace("/,\n/", ",PLACEHOLDER\n", $csv); but this isn't working. Anyone know what regexp would work for this?
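
    One likely culprit is Windows line endings: if the text contains \r\n, the comma is followed by \r, so /,\n/ never matches. Allowing an optional carriage return, e.g. preg_replace("/,(\r?\n)/", ",PLACEHOLDER$1", $csv), should fix it; the same idea as a minimal Python sketch:

        import re

        csv_text = "item1, item2, item3\r\nitem1, item2,\r\nitem1, item2,\r\n"

        # Match a trailing comma at end-of-line, tolerating \r\n endings.
        print(re.sub(r",(\r?\n)", r",PLACEHOLDER\1", csv_text))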

  • Database web app

    - by Watergaite
    How would I go about creating an application for my web page that can extract data from my database? (I currently get the data in a CSV file.) I'd also like the user to be able to filter the data by certain parameters. Can you help?
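
    A minimal sketch of one approach in Python with Flask and SQLite, where the user filters via a query-string parameter (table, column and file names are placeholders):

        import sqlite3
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        @app.route("/data")
        def data():
            category = request.args.get("category")  # optional ?category=... filter
            conn = sqlite3.connect("site.db")        # placeholder database
            if category:
                rows = conn.execute("SELECT * FROM records WHERE category = ?",
                                    (category,)).fetchall()
            else:
                rows = conn.execute("SELECT * FROM records").fetchall()
            return jsonify(rows=rows)

        if __name__ == "__main__":
            app.run()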

  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that entering a catch block has some significant cost when executing a program; however, I was wondering whether entering a try{} block also had any impact, so I started looking for an answer in Google. There were many opinions, but no benchmarking at all. Some answers I found were:

        Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum?
        Try Catch Performance Java
        Java try catch blocks

    However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a CSV file with this format:

        host;ip;number;date;status;email;uid;name;lastname;promo_code;

    where everything after status is optional and will not even have the corresponding ";", so when parsing, a validation has to be done to see if the value is there; this is where the try/catch question came to mind. The current code that I inherited at my company does this:

        StringTokenizer st = new StringTokenizer(line, ";");
        String host = st.nextToken();
        String ip = st.nextToken();
        String number = st.nextToken();
        String date = st.nextToken();
        String status = st.nextToken();
        String email = "";
        try {
            email = st.nextToken();
        } catch (NoSuchElementException e) {
            email = "";
        }

    and it repeats what is done for email with uid, name, lastname and promo_code. I changed everything to:

        if (st.hasMoreTokens()) {
            email = st.nextToken();
        }

    and it does perform faster when parsing a file that doesn't have the optional columns. Here are the average times:

        Trying: 122 milliseconds
        Checking: 33 milliseconds

    However, here's what confused me, and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version. So my question is: does the try block really have no performance impact on my code? The average times for this example are:

        Trying: 105 milliseconds
        Checking: 43 milliseconds

    Can somebody explain what's going on here? Thanks a lot.
