What could cause the file command in Linux to report a text file as data?

Posted by Jonah Bishop on Super User
Published on 2012-04-11T15:13:35Z

Filed under: linux | bash

I have a couple of C++ source files (one .cpp and one .h) that are being reported as type data by the file command in Linux. When I run file -bi against these files, I get the following output (identical for both files):

application/octet-stream; charset=binary

Each file is clearly plain text (I can view them in vi). What's causing file to misreport the type of these files? Could it be some sort of Unicode issue? Both files were created in Windows-land (using Visual Studio 2005), but they're compiled under Linux (it's a cross-platform application).
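One way to narrow this down is to inspect the raw bytes: file falls back to application/octet-stream; charset=binary when it finds byte sequences that fit no text encoding it knows. A minimal sketch (the filename and contents below are made up for illustration; grep -P requires GNU grep):

```shell
# Build a file that mixes a UTF-8 BOM (ef bb bf) with a lone Latin-1
# byte (0xEF, octal 357) in a comment -- the sort of mix Windows
# editors can produce, and invalid as pure UTF-8.
printf '\357\273\277// na\357ve comment\nint main() { return 0; }\n' > demo.cpp

# Count NUL bytes; UTF-16 files are full of them.
tr -dc '\0' < demo.cpp | wc -c

# Which lines contain bytes outside 7-bit ASCII?
grep -n -P '[^\x00-\x7F]' demo.cpp

# Dump the first bytes to spot a BOM: ef bb bf = UTF-8, ff fe = UTF-16LE.
od -An -tx1 -N 4 demo.cpp
```

If the NUL count is zero and only a handful of lines carry high bytes, the culprit is most likely a stray legacy-encoded character rather than a binary file.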

Any ideas would be appreciated.

Update: I don't see any null characters in either file. I found some extended characters in the .cpp file (in a comment block) and removed them, but file still reports the same encoding. I've tried forcing the encoding in SlickEdit, but that didn't seem to have any effect. When I open the file in vim, I see a [converted] indicator as soon as the file loads. Perhaps I can get vim to force the encoding?
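The [converted] indicator means vim had to reinterpret the bytes to display them, which again points at a legacy single-byte encoding. A hypothetical repair, assuming the stray bytes are Windows-1252 (a guess, since the files came from Visual Studio; the filename is made up):

```shell
# Fake a Visual Studio-era file: 0xE9 (octal 351) is 'é' in Windows-1252
# but is not valid UTF-8 on its own.
printf '// r\351sum\351 parser\nint main() { return 0; }\n' > legacy.cpp

# Re-encode to UTF-8; iconv fails loudly on bytes that don't fit the
# assumed source encoding, which is itself a useful diagnostic.
iconv -f WINDOWS-1252 -t UTF-8 legacy.cpp > legacy.utf8.cpp
mv legacy.utf8.cpp legacy.cpp

file -bi legacy.cpp   # the charset field should now read utf-8

# The equivalent from inside vim:
#   :e ++enc=cp1252          " reopen, interpreting bytes as Windows-1252
#   :set fileencoding=utf-8
#   :w                       " write back out as UTF-8
```

If iconv reports an illegal input sequence, try another candidate encoding (e.g. ISO-8859-1) before converting.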

© Super User or respective owner
