I have a script which reads data from a CSV file. I need to store the data in a database that has already been created with
$ python manage.py syncdb
so that the data can be entered automatically, rather than by hand in the Django shell.
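Here is a rough, untested sketch of the kind of loader I have in mind, meant to be run from "python manage.py shell"; the model Product(name, price) in myapp/models.py and the CSV header names are hypothetical stand-ins for my real ones:

import csv
from myapp.models import Product  # hypothetical model

with open("data.csv") as f:
    # DictReader keys each row by the CSV header, so the row maps
    # straight onto the model's fields
    for row in csv.DictReader(f):
        Product.objects.create(name=row["name"], price=row["price"])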
I am trying to download a data file from a local network share to an iPhone. I have placed the file on a computer on the network, and I can view it through browsers such as Chrome or Mozilla from any computer on the local network.
However, Safari on a Mac and the iPhone do not find the file! An example of the URL I use is 'file://computer/SharedDocs/file.csv'.
Why do Safari and the iPhone fail to find the file?
Does anyone know of any webmail contact-list importer scripts (ColdFusion, PHP, etc.) like those used on Twitter and LinkedIn? I've found some, but they are paid for, and I want something more bespoke and open.
To clarify a little more: I'm not looking for a way to process .csv files :) I'm looking for a bit of code that can log into Gmail, Yahoo Mail, Hotmail, and AOL and pull out the user's address book.
Each day an application creates a file called file_YYYYMMDD.csv, where YYYYMMDD is the production date. But sometimes the generation fails, and no files are generated for a couple of days.
I'd like an easy way, in a bash or sh script, to find the filename of the most recent file produced before a given reference date.
Typical usage: find the last generated file, disregarding those produced after May 1st.
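For reference, this is roughly what I have in mind (untested sketch; it assumes the files live in the current directory and the reference date is passed as $1 in the same YYYYMMDD form, e.g. 20100501):

#!/bin/bash
# YYYYMMDD stamps sort lexicographically, so plain string
# comparison is enough to order the files by date.
ref="$1"
latest=""
for f in file_*.csv; do
  stamp=${f#file_}
  stamp=${stamp%.csv}
  # keep the newest file whose stamp is on or before the reference date
  if [[ "$stamp" < "$ref" || "$stamp" == "$ref" ]]; then
    [[ "$f" > "$latest" ]] && latest="$f"
  fi
done
echo "$latest"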
Thanks for your help
I have some screen-scraped tabular data that I want to export to a CSV file (currently I am just placing it in the clipboard). Is there any way to do this in Greasemonkey? Any suggestions on where to look for a sample, or some documentation on this kind of functionality?
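The kind of thing I'm after is roughly this (untested sketch; the table id "results" is a placeholder for whatever I end up scraping):

// collect the table cells into a CSV string, quoting each value,
// then hand it to the browser as a data: URI so it can be saved
var rows = document.querySelectorAll('#results tr');
var csv = Array.prototype.map.call(rows, function (tr) {
  return Array.prototype.map.call(tr.cells, function (td) {
    return '"' + td.textContent.replace(/"/g, '""') + '"';
  }).join(',');
}).join('\n');
window.location.href = 'data:text/csv;charset=utf-8,' + encodeURIComponent(csv);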
I am working on the design of a high-security application (involving financial information, personal information, etc.). I need to identify what application-level security measures will be implemented. The application will involve sending data to and from a database, user login, import/export to CSV and TXT files, and a print function.
What security features do I need to consider for such an application (SQL injection, for starters)?
I am reading a CSV file and I want to store the rows in the datastore, but the file gives me a string for the datetime field.
I want to cast it to a datetime, and likewise to a date and a time separately.
The error is:
BadValueError: Property HB_Create_Ship_Date must be a datetime
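What I'm trying to do looks roughly like this (the timestamp format is only an example; the actual format in my file may differ):

from datetime import datetime

s = "2010-05-01 10:50:00"                       # string read from the CSV
dt = datetime.strptime(s, "%Y-%m-%d %H:%M:%S")  # for the DateTimeProperty
d = dt.date()                                   # for a DateProperty
t = dt.time()                                   # for a TimeProperty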
I am getting comma-separated values passed to a stored procedure in Oracle. I want to treat these values as a table so that I can use them in a query like:
select * from tabl_a where column_b in (<csv values passed in>)
What is the best way to do this in 11g?
Right now we are looping through the values one by one and inserting them into a global temporary table, which I think is inefficient.
Any pointers?
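One direction I've been considering is splitting the string in plain SQL instead (rough sketch; :p_csv is the bind variable holding the list, and it assumes the values contain no embedded commas):

select *
  from tabl_a
 where column_b in (
         -- one row per comma-separated element of :p_csv
         select trim(regexp_substr(:p_csv, '[^,]+', 1, level))
           from dual
        connect by regexp_substr(:p_csv, '[^,]+', 1, level) is not null
       );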
Hi there,
I need to reorganize a large CSV file. The first column, which is currently a 6-digit number, needs to be split up into single digits, using commas as the field separator.
For example, I need this:
022250,10:50 AM,274,22,50
022255,11:55 AM,275,22,55
turned into this:
0,2,2,2,5,0,10:50 AM,274,22,50
0,2,2,2,5,5,11:55 AM,275,22,55
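A sketch of how I imagine doing it (untested; assumes GNU sed and that every line starts with exactly six digits):

# capture the six leading digits and reprint them comma-separated,
# leaving the rest of the line untouched
sed -E 's/^([0-9])([0-9])([0-9])([0-9])([0-9])([0-9]),/\1,\2,\3,\4,\5,\6,/' input.csv > output.csv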
Let me know what you think!
Thanks!
I'm trying to figure out how to get only the last two files within a folder, so that I can merge them together using C#. The files are CSV files, and I've looked at File.CreationTime, but I don't know exactly how to compare on it so that I'm working with only the last two files.
How can I do this?
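Here's roughly the shape of what I'm trying to get to (untested sketch; the folder path comes in as a command-line argument):

using System;
using System.IO;
using System.Linq;

class LastTwoFiles
{
    static void Main(string[] args)
    {
        // sort the CSV files newest-first by creation time, keep two
        var lastTwo = new DirectoryInfo(args[0])
            .GetFiles("*.csv")
            .OrderByDescending(f => f.CreationTime)
            .Take(2)
            .ToArray();

        foreach (var f in lastTwo)
            Console.WriteLine(f.FullName);   // the two files to merge
    }
}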
Hi,
I have a 500 MB CSV file. I need to convert it into an XML file.
I am using JAXB to create the XML file. It works fine for a small amount of data,
but for a large amount of data, like 300 MB, it throws an out-of-memory exception.
Can anyone tell me how I can create each element and write it to the file
without building the whole tree with JAXB?
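What I have in mind is something like the following (untested sketch; Record is a placeholder root element, and the real code would parse each CSV line into proper fields):

import java.io.BufferedReader;
import java.io.FileOutputStream;
import java.io.FileReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

public class StreamingExport {

    @XmlRootElement(name = "record")
    public static class Record {
        public String value;   // placeholder; real rows have more fields
    }

    public static void main(String[] args) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(Record.class);
        Marshaller m = ctx.createMarshaller();
        // JAXB_FRAGMENT suppresses the XML declaration on each element,
        // so the fragments can be appended inside one enclosing document
        m.setProperty(Marshaller.JAXB_FRAGMENT, Boolean.TRUE);

        XMLStreamWriter xsw = XMLOutputFactory.newInstance()
                .createXMLStreamWriter(new FileOutputStream("out.xml"), "UTF-8");
        xsw.writeStartDocument("UTF-8", "1.0");
        xsw.writeStartElement("records");

        BufferedReader in = new BufferedReader(new FileReader("big.csv"));
        String line;
        while ((line = in.readLine()) != null) {
            Record r = new Record();
            r.value = line;        // only one row is in memory at a time
            m.marshal(r, xsw);
        }
        in.close();

        xsw.writeEndElement();
        xsw.writeEndDocument();
        xsw.close();
    }
}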
Thanks
Sonu
Hi,
I have a Windows server that receives mail. Each mail contains a single CSV file as an attachment. I want my server to automatically take the attachment from any incoming mail and pass it to a locally installed Java program. Can anyone give me directions to any programs that handle this, or do I need to create some kind of Windows service?
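One idea I've had is to skip hooking the mail server entirely and have the Java program poll the mailbox itself over POP3 with JavaMail (untested sketch; the host, credentials and hand-off directory are placeholders):

import java.util.Properties;
import javax.mail.*;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMultipart;

public class AttachmentPoller {
    public static void main(String[] args) throws Exception {
        Session session = Session.getInstance(new Properties());
        Store store = session.getStore("pop3");
        store.connect("mail.example.local", "user", "password"); // placeholders

        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_WRITE);

        for (Message msg : inbox.getMessages()) {
            Object content = msg.getContent();
            if (content instanceof MimeMultipart) {
                MimeMultipart mp = (MimeMultipart) content;
                for (int i = 0; i < mp.getCount(); i++) {
                    MimeBodyPart part = (MimeBodyPart) mp.getBodyPart(i);
                    String name = part.getFileName();
                    if (name != null && name.toLowerCase().endsWith(".csv")) {
                        // drop the CSV where the local Java program picks it up
                        part.saveFile("C:\\incoming\\" + name);
                    }
                }
            }
            msg.setFlag(Flags.Flag.DELETED, true); // mark as processed
        }
        inbox.close(true);  // expunge the processed mail
        store.close();
    }
}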
Thankful for any help!
I need to import either a CSV or an Excel file into a database. The column headers will match, but I want to compare the file against the database using an ItemID field, list the rows that would be affected and the differences, then allow an update to all the rows with a matching ID.
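In SQL terms I picture it roughly like this (sketch only; "staging" holds the imported file, and the table and column names are placeholders):

-- list the rows that would be affected, old and new values side by side
select s.ItemID, t.Price as current_price, s.Price as new_price
  from staging s
  join items t on t.ItemID = s.ItemID
 where t.Price <> s.Price;

-- then apply the update to every row with a matching ID
update items t
   set Price = (select s.Price from staging s where s.ItemID = t.ItemID)
 where exists (select 1 from staging s where s.ItemID = t.ItemID);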
From our existing internal tracking system, I would like to create an XML export that I can then bring into Microsoft Project 2007 for further display and manipulation. I've been unable to find a straightforward explanation of how the XML should look for this kind of import. I want to be able to create dependencies, assign resources, etc. The Excel/CSV imports don't appear to offer all these capabilities, so I think XML is the better way... if I could just get a spec for it.
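From the fragments I've seen, the import appears to follow Project's MSPDI schema, something like the skeleton below, but this is pieced together from memory and may well be wrong, which is exactly why I'd like a real spec:

<Project xmlns="http://schemas.microsoft.com/project">
  <Tasks>
    <Task>
      <UID>1</UID>
      <Name>Design</Name>
      <Duration>PT16H0M0S</Duration>
    </Task>
    <Task>
      <UID>2</UID>
      <Name>Build</Name>
      <Duration>PT40H0M0S</Duration>
      <PredecessorLink>
        <PredecessorUID>1</PredecessorUID>
      </PredecessorLink>
    </Task>
  </Tasks>
  <Resources>
    <Resource><UID>1</UID><Name>Alice</Name></Resource>
  </Resources>
  <Assignments>
    <Assignment><TaskUID>2</TaskUID><ResourceUID>1</ResourceUID></Assignment>
  </Assignments>
</Project>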
I need a script that can run on a Windows operating system (Windows Server 2003) and pull information from any drive, listing all files and folders with the following fields for each entry. The server is quite big and is within our domain.
The required information is:
Full file path (e.g. C:\Documents and Settings\user\My Documents\testPage.doc)
File type (e.g. Word document, spreadsheet, database, etc.)
Size
When Created
When last modified
When last accessed
The script also needs to write that data out as a CSV file, which I can later modify and process in Excel. I can imagine that this data will be huge, but I still need it. I am logged in as an administrator on the server, and the script needs to process protected files as well: in previous posts I have read that a script will stop when it hits such files, and I need to make sure that not a single file is skipped.
Please note I have asked this question before but still have not got a working script.
This is the script I have so far, in file Test.vbs:
Set objFS = CreateObject("Scripting.FileSystemObject")

' Header row. Note the space before each continuation underscore:
' VBScript requires " _", and the original "&_" breaks the script.
WScript.Echo Chr(34) & "Full Path" & _
    Chr(34) & "," & Chr(34) & "File Type" & _
    Chr(34) & "," & Chr(34) & "File Size" & _
    Chr(34) & "," & Chr(34) & "File Date Modified" & _
    Chr(34) & "," & Chr(34) & "File Date Created" & _
    Chr(34) & "," & Chr(34) & "File Date Accessed" & Chr(34)

Set objArgs = WScript.Arguments
strFolder = objArgs(0)
Set objFolder = objFS.GetFolder(strFolder)
Go objFolder

Sub Go(objDIR)
    ' Comparing the folder object to "\System Volume Information" never
    ' matched (the default property is the full path); instead, skip any
    ' folder we are not allowed to open, so one Access Denied error does
    ' not kill the whole run.
    On Error Resume Next
    For Each eFolder In objDIR.SubFolders
        Go eFolder
    Next
    For Each strFile In objDIR.Files
        WScript.Echo Chr(34) & strFile.Path & Chr(34) & "," & _
            Chr(34) & strFile.Type & Chr(34) & "," & _
            Chr(34) & strFile.Size & Chr(34) & "," & _
            Chr(34) & strFile.DateLastModified & Chr(34) & "," & _
            Chr(34) & strFile.DateCreated & Chr(34) & "," & _
            Chr(34) & strFile.DateLastAccessed & Chr(34)
    Next
End Sub
I am currently running it from the command line:
c:\test> cscript //nologo Test.vbs "c:\" > "C:\test\Output.csv"
The original version of the script was not working, and I didn't know why; the comments above mark the two fixes it needed (the line-continuation underscores and the handling of protected folders).
In MongoDB, I have a document with a field called "ClockInTime" that was imported from CSV as a string.
What does an appropriate db.ClockTime.update() statement look like to convert these text-based values to a Date datatype?
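The closest I've managed is looping in the shell rather than a single update() (sketch; it assumes the strings are in a format the JavaScript Date constructor understands):

// rewrite each document's ClockInTime as a real Date
db.ClockTime.find().forEach(function (doc) {
    db.ClockTime.update(
        { _id: doc._id },
        { $set: { ClockInTime: new Date(doc.ClockInTime) } }
    );
});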
I use the command-line sqlite3 executable to check queries I make from my code.
Is there a way to read in pragma statements or other session setup (".mode csv", for example) when the executable starts up?
I know I can do a ".read " once I'm in, but that's tedious.
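Ideally I could keep the per-session setup in a file and have it applied automatically at startup; the contents would be something like:

-- session setup I'd like applied automatically
.mode csv
.headers on
PRAGMA foreign_keys = ON;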
I have an .xls file with ~60 sheets of data. I would like to move them into a database (Postgres) such that each sheet's data is stored in a different table.
What is the fastest way of creating these tables? I don't care about the naming or proper typing of the columns; they could all be strings for that matter. I just don't want to run 60 different CSV uploads.
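The direction I'm leaning toward is scripting it in Python (rough sketch, assuming pandas and SQLAlchemy are available; the connection string is a placeholder):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@localhost/mydb")  # placeholder

# sheet_name=None loads every sheet into a dict of DataFrames;
# dtype=str keeps all columns as text, since typing doesn't matter here
sheets = pd.read_excel("workbook.xls", sheet_name=None, dtype=str)

for name, frame in sheets.items():
    # one table per sheet, named after the sheet
    frame.to_sql(name.lower(), engine, if_exists="replace", index=False)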
I have tried using the merge function to merge two CSV files that I imported. They both have the same variable names and data types, but each time I run merge, all I get is an object that contains the names of the two data frames. I have tried the following:
# ex1
obj <- merge(obj1, obj2, by=obj)
# ex2
obj <- merge(obj1, obj2, all)
and several other iterations of the above.
Is merge the correct function?
If so, what am I doing wrong?
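Here is a minimal reproducible version of what I'm doing, with made-up data:

# two small frames with the same variables, like my imported CSVs
obj1 <- data.frame(id = c(1, 2), value = c(10, 20))
obj2 <- data.frame(id = c(2, 3), value = c(200, 300))

merge(obj1, obj2)   # among the variations I've tried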
Could someone suggest the best way to read .xls files with Python (not CSV files)?
Is there a built-in package, supported by default in Python, for this task?
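So far the closest I've found is the third-party xlrd package, used roughly like this (as far as I know there is nothing in the standard library that reads .xls):

import xlrd  # third-party package

book = xlrd.open_workbook("data.xls")
sheet = book.sheet_by_index(0)
for rownum in range(sheet.nrows):
    print(sheet.row_values(rownum))   # one list of cell values per row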
I have a comma-delimited list I want to import into a database, and in some cases the last item is blank:
item1, item2, item3
item1, item2,
item1, item2,
I'd like to replace all of these empty trailing columns with a placeholder value, using a regexp:
item1, item2, item3
item1, item2, PLACEHOLDER
item1, item2, PLACEHOLDER
I tried preg_replace("/,\n/", ",PLACEHOLDER\n", $csv);, but this isn't working. Does anyone know what regexp would work for this?
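My best guess is that the line endings aren't a plain \n; a variant like this might cover \r\n endings and a trailing comma at the very end of the data as well (untested):

<?php
// match a comma followed by \n, \r\n, or the end of the string
$csv = preg_replace('/,(\r?\n|$)/', ',PLACEHOLDER$1', $csv);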
How would I go about creating an application for my web page that can extract data from my database (I currently get the data in a CSV file)? I'd also like the user to be able to filter the data by certain parameters. Can you help?
I know that entering a catch block has a significant cost when executing a program; however, I was wondering whether entering a try{} block also has any impact, so I started looking for an answer on Google, finding many opinions but no benchmarking at all. Some answers I found were:
Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum?
Try Catch Performance Java
Java try catch blocks
However they didn't answer my question with facts, so I decided to try it for myself.
Here's what I did. I have a CSV file with this format:
host;ip;number;date;status;email;uid;name;lastname;promo_code;
where everything after status is optional and will not even have the corresponding ';', so when parsing, a check has to be made to see whether the value is there; this is where the try/catch issue came to my mind.
The current code that I inherited at my company does this:
StringTokenizer st = new StringTokenizer(line, ";");
String host = st.nextToken();
String ip = st.nextToken();
String number = st.nextToken();
String date = st.nextToken();
String status = st.nextToken();
String email = "";
try {
    email = st.nextToken();
} catch (NoSuchElementException e) {
    email = "";
}
and it repeats the same pattern for uid, name, lastname and promo_code.
I changed everything to:
if (st.hasMoreTokens()) {
    email = st.nextToken();
}
and in fact it performs faster when parsing a file that doesn't have the optional columns. Here are the average times:
--- Trying:122 milliseconds
--- Checking:33 milliseconds
However, here's what confused me, and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version. So my question is:
does the try block really have no performance impact on my code?
The average times for this example are:
--- Trying:105 milliseconds
--- Checking:43 milliseconds
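For completeness, a self-contained version of the comparison looks roughly like this (simplified: it only exercises the first optional column, while my real test parses the whole line):

import java.util.NoSuchElementException;
import java.util.StringTokenizer;

public class TryVsCheck {
    // a line with the five mandatory columns and no optional ones,
    // so the try version hits the exception path on every call
    static final String LINE = "host;1.2.3.4;42;2010-05-01;OK;";

    public static void main(String[] args) {
        long t0 = System.currentTimeMillis();
        for (int i = 0; i < 8000; i++) parseTrying(LINE);
        System.out.println("Trying:   " + (System.currentTimeMillis() - t0) + " ms");

        t0 = System.currentTimeMillis();
        for (int i = 0; i < 8000; i++) parseChecking(LINE);
        System.out.println("Checking: " + (System.currentTimeMillis() - t0) + " ms");
    }

    static String parseTrying(String line) {
        StringTokenizer st = new StringTokenizer(line, ";");
        for (int i = 0; i < 5; i++) st.nextToken();   // mandatory columns
        String email = "";
        try {
            email = st.nextToken();
        } catch (NoSuchElementException e) {
            email = "";
        }
        return email;
    }

    static String parseChecking(String line) {
        StringTokenizer st = new StringTokenizer(line, ";");
        for (int i = 0; i < 5; i++) st.nextToken();   // mandatory columns
        String email = "";
        if (st.hasMoreTokens()) {
            email = st.nextToken();
        }
        return email;
    }
}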
Can somebody explain what's going on here?
Thanks a lot