Search Results

Search found 39200 results on 1568 pages for 'zip files'.


  • Converting Visio (.vsd) files to PDF automatically

    - by Aseques
    I am trying to create a scheduled task to convert all my .vsd files to PDF so all of our devices can read them (Linux, Mac, smartphones, etc.), and I would prefer not to pay for something that can be done with Visio + PDFCreator. The OpenOffice approach doesn't work with .vsd files, since it isn't a supported format (see: Method/tools for batch-converting Microsoft Word files into PDF?). What I currently have is this:

        'C:\Program Files\Microsoft Office\Visio11\VISIO.EXE' /pt "Z:\Archive\Files.vsd",-PPDFCREATORPRINTER /nologo

    That automatically opens the document I want and prepares it for printing; the only missing part is that it still requires me to confirm the print dialog. There is some information here: http://support.microsoft.com/kb/314392, but it doesn't explain non-interactive printing.

  • rename multiple files with unique names

    - by psaima
    I have a tab-delimited list of hundreds of names in the following format:

        old_name    new_name
        apple       orange
        yellow      blue

    All of my files have unique names ending in the .txt extension, and they are all in the same directory. I want to write a script that renames the files by reading my list, so apple.txt should be renamed to orange.txt. I have searched around but couldn't find a quick way to do this. I can change one file at a time with 'rename' or with perl ("perl -p -i -e 's///g' *.txt"), and a few files with sed, but I don't know how to use my list as input to a shell script that makes the changes for all files in the directory. I don't want to write hundreds of rename commands in a shell script. Any suggestions will be most welcome!
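
    A minimal sketch of the kind of loop that could do this, assuming the pairs sit in a file called names.tsv (a hypothetical name), one tab-separated pair per line, with the header row removed:

        #!/bin/bash
        # Rename old.txt -> new.txt for each tab-separated pair in the list
        while IFS=$'\t' read -r old new; do
            # Skip pairs whose source file does not exist
            [ -e "${old}.txt" ] && mv -- "${old}.txt" "${new}.txt"
        done < names.tsv

    Trying it first on a copy of the directory is cheap insurance, since a bad line in the list would rename files irreversibly.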

  • Access denied to EFS encrypted files after PC joins domain

    - by mjmarsh
    I'm experiencing strange behavior with the Windows Encrypting File System:

    - I have a machine that is in workgroup mode (not joined to a domain).
    - I encrypt an entire directory structure on the machine (basically a folder and subfolders with data files for my application).
    - My application writes and reads files from the encrypted file hierarchy as a local Windows user (let's call the account 'SecureUser'). This works fine.
    - I then join the PC to a domain (let's call it 'TEST').
    - Afterwards, processes running as the local 'SecureUser' account can't read the files it wrote originally when it was off the domain. (What is also strange is that the files are now listed as read-only, and I cannot unset this flag via Windows Explorer or the command line, even though both appear to succeed.)
    - I then un-join the PC from the domain and everything works again.

    Is there something about changing domain membership on a PC that changes the behavior of EFS so that previously encrypted files cannot be read, even by the originating user? Thanks in advance

  • Does Windows 8 include the Windows Help program (WinHlp32.exe)?

    - by amiregelz
    In 2011, Symantec reported on the use of the Windows Help file (.hlp) extension as an attack vector in targeted attacks. The functionality of the help file permits a call to the Windows API which, in turn, permits shell-code execution and the installation of malicious payload files. This functionality is not an exploit; it is there by design. (The report included a detection heat map for the malicious WinHelp files Bloodhound.HLP.1 and Bloodhound.HLP.2.) I would like to know whether the Windows Help program exists on my Windows 8 machine by default, because if it does I might need to remove it for security reasons. Does Windows 8 include the Windows Help program (WinHlp32.exe)?

  • Access to certain files but not others

    - by ADW
    Hoping someone can help me, as I have thus far been unable to solve the issue. I am running a media center on Ubuntu 12.04. I was initially able to access media files on the Ubuntu desktop from my Windows 7 laptop and a Roku device. I then started backing up a new batch of DVDs I had (into MKV files, like everything else in my media folders) and noticed I cannot access the new files from either the Roku or the laptop. I have not changed any settings in the media folder, and I verified the share permissions. The parent folder (Media) is shared (with permission flow-down) while the subfolders (Movies, TV Shows, Music) are not. I changed the permissions on these to shared when the access problem arose, but with no success. I can only access the original files, not the newly added ones. Any suggestions? Thanks in advance for any and all help.
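
    Since only the newly added files are affected, one thing worth checking from a terminal is whether they were simply written with more restrictive permissions than the older ones. A hedged sketch, assuming the library lives at ~/Media (substitute the real path):

        # Compare the permissions of an old (working) and a new (failing) file
        ls -l ~/Media/Movies

        # If the new files lack read access for other users, grant it
        chmod -R o+rX ~/Media

    The capital X adds execute (directory traversal) only to directories and to files that are already executable, so the media files themselves stay non-executable.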

  • TrueCrypt Corrupted Files

    - by B. Knight
    Several months ago I needed to reorganize my data across multiple external hard drives, with my laptop's primary hard drive as the go-between. My external hard drives are all encrypted with TrueCrypt. It appears that somehow, during the transfer of my files between an encrypted external drive and the unencrypted internal drive, the files were transferred "as-is" (in their encrypted state). The files range from very small to very large, and it appears this may have happened during one consecutive transfer session. Has anyone ever experienced this problem, and if so, were you able to fix it? Is there a way to recreate the encrypted partition, transfer the files, and then decrypt them to their usable state? Or can the files somehow be decrypted through other means? UPDATE: I am running Windows 7 (x64) Home Premium now, but may have been running Enterprise then. Toshiba laptop, 650 GB HDD, 4 GB memory, latest version of TrueCrypt.

  • Move files contained in a certain dir to the previous one (CentOS)

    - by Alex
    I will try to explain my problem (sorry for my bad English). I have an image gallery with a directory structure like this:

        images/dir1/subdir1/IMG/files.jpg
        images/dir1/subdir2/IMG/files.jpg
        images/dir1/subdir3/IMG/files.jpg
        images/dir2/subdir1/IMG/files.jpg
        .......
        images/dir109/subdir1/IMG/files.jpg

    The directory named images contains 109 dirs (dir1, dir2, ... dir109), which between them hold 1200 subdirs. Every subdir contains a dir named IMG with images in it (file1.jpg, file2.jpg, etc.). I would like to move all the images contained in every dir named IMG into its parent dir (the subdir), to end up with something like this:

        images/dir1/subdir1/file1.jpg
        images/dir1/subdir1/file2.jpg
        images/dir1/subdir2/file1.jpg
        ........
        images/dir109/subdir1/file.jpg
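
    A minimal sketch of how this could be done with find, assuming every IMG directory contains only regular files (best run on a backup copy first):

        #!/bin/bash
        # For every directory named IMG under images/, move its contents
        # up one level, then remove the now-empty IMG directory.
        find images -type d -name IMG | while read -r d; do
            mv -- "$d"/* "$(dirname "$d")/" && rmdir "$d"
        done

    If any filenames could contain newlines, find's -exec form would be safer than piping into read.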

  • Cannot apply unity --reset after modifying files

    - by Alex Cline
    So I have an idea of what I did wrong; I am just not sure how to fix it. I used the Unity Glass mod: http://www.omgubuntu.co.uk/2012/07/unity-glass-offers-refined-new-look-for-the-unity-launcher After removing it, I cannot reset Unity. Even after purging Unity and reinstalling it, I cannot seem to replace the missing files.

        $ unity --reset
        WARNING: Unity currently default profile, so switching to metacity while resetting the values
        unity-panel-service: no process found
        Checking if settings need to be migrated ...no
        Checking if internal files need to be migrated ...no
        Backend     : gconf
        Integration : true
        Profile     : unity
        Adding plugins
        Initializing core options...done
        compiz (core) - Warn: failed to receive ConfigureNotify event on 0x1c00027
        Initializing composite options...done
        Initializing opengl options...done
        Initializing decor options...done
        Initializing vpswitch options...done
        Initializing snap options...done
        Initializing mousepoll options...done
        Initializing resize options...done
        Initializing place options...done
        Initializing move options...done
        Initializing wall options...done
        Initializing grid options...done
        I/O warning : failed to load external entity "/home/arcline/.compiz/session/10b624e5c8f98c5325134625607758338300000051770001"
        Initializing session options...done
        Initializing gnomecompat options...done
        Initializing animation options...done
        Initializing fade options...done
        Initializing unitymtgrabhandles options...done
        Initializing workarounds options...done
        Initializing scale options...done
        compiz (expo) - Warn: failed to bind image to texture
        Initializing expo options...done
        Initializing ezoom options...done
        (compiz:7038): Gtk-WARNING **: Theme parsing error: gnome-panel.css:28:11: Not using units is deprecated. Assuming 'px'.
        (compiz:7038): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        Segmentation fault (core dumped)

  • Uploading.to Uploads Files to Multiple File Hosts Simultaneously

    - by Jason Fitzpatrick
    If you're looking to quickly share a file across a variety of file-hosting services, Uploading.to makes it a cinch to share up to 10 files across 14 hosts. The upload process is simple: visit Uploading.to, select your files, check the hosts you want to share the file across (by default all 14 are checked), add a description to the collection, and hit the Upload button. Uploading.to will upload your file to the various hosts; during the process you'll see which hosts are confirmed and which have failed. We had 2 failures among the 14 hosts, which still left the file mirrored across a sizable 12-host spread; not bad at all. When you're ready to share the file, hit the Copy Link button at the bottom of the screen and share it with your friends. They'll be directed to Uploading.to and will be able to select from any of the hosts the file was successfully mirrored across. Uploading.to is a free service and requires no registration. Uploading.to [via Addictive Tips]

  • HTML Parsing for multiple input files using java code [closed]

    - by mkp
        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.FileWriter;
        import org.jsoup.Jsoup;

        FileReader f0 = new FileReader("123.html");
        StringBuilder sb = new StringBuilder();
        BufferedReader br = new BufferedReader(f0);
        String temp1;
        while ((temp1 = br.readLine()) != null) {
            sb.append(temp1);
        }
        br.close();
        String para = sb.toString().replaceAll("<br>", "\n");
        String textonly = Jsoup.parse(para).text();
        System.out.println(textonly);
        FileWriter f1 = new FileWriter("123.txt");
        char buf1[] = new char[textonly.length()];
        textonly.getChars(0, textonly.length(), buf1, 0);
        for (int i = 0; i < buf1.length; i++) {
            if (buf1[i] == '\n')
                f1.write("\r\n");
            f1.write(buf1[i]);
        }
        f1.close();

    I have this code, but it takes only one file at a time. I want to process multiple files: I have 2000 files, named 1.html through 2000.html. So I want to use a loop like for (i = 1; i <= 2000; i++), and after execution a separate .txt file should be generated for each input.

  • Apache htaccess results in files being downloaded instead of displayed

    - by chrissik
    So I had this "beautiful" website that did exactly what I wanted it to do. Then I shut down my PC, rebooted, and... the pages just download now instead of being displayed. I re-installed XAMPP and launched Apache again, and I was able to identify the .htaccess file as the cause of the problem:

        Options +FollowSymlinks
        RewriteEngine on
        RewriteCond %{QUERY_STRING} !^desktop
        RewriteCond %{HTTP_USER_AGENT} "android|blackberry|googlebot-mobile|iemobile|iphone|ipod|#opera mobile|palmos|webos" [NC]
        RewriteRule ^/?$ /mobile/index [L,R=302]
        RewriteRule ^/?$ /de/index [R]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.html -f
        RewriteRule ^(.*)$ $1.html

    Here is the problem, I guess:

        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.html -f
        RewriteRule ^(.*)$ $1.html

    This should make it possible to use /de/index instead of /de/index.html, but somehow it causes the page to download if I open localhost/de/index (with localhost/de/index.html it works fine). I'm using HTML pages with SSI elements on an Apache web server. The only other file that differs from the out-of-the-box ones is httpd.conf, where I enabled SSI:

        AddType text/html .shtml
        AddHandler server-parsed .shtml
        AddHandler server-parsed .html
        AddHandler server-parsed .htm
        Options Indexes FollowSymLinks Includes
        AddOutputFilter INCLUDES .shtml
        Options +Includes

    I hope somebody among you can help me with this annoying problem, as I'm quite desperate: for some reason, even without the problematic lines, Chrome keeps downloading the files (even if I delete the .htaccess file), while IE and Opera display the pages. Edit: Now Opera also wants to download the files (whether index.html or index is requested).

  • pdflatex reads .eps files saved in OS X, but not in Ubuntu

    - by David B Borenstein
    Sorry if this is a stupid question; I'm a newbie. I am preparing a manuscript in LaTeX. The journal (Physical Biology, an IOP publication) requires that figures be in .eps format, so I am trying to provide that. However, I cannot get my LaTeX file to build when the .eps files were generated on my Ubuntu computer; if I save the images on my Mac, the file builds just fine. So far I have tried saving images from ImageJ, FIJI, and Inkscape, and the same problem occurs in all three. When using Kile, I get the following error:

        /usr/share/texmf-texlive/tex/latex/oberdiek/epstopdf-base.sty:0: Shell escape feature is not enabled.

    In TeXworks the error is different, but still there:

        Package pdftex.def Error: File `./figures4/figure4a-eps-converted-to.pdf' not found.

    Now, if I fire up Inkscape, FIJI, or ImageJ on OS X, everything works fine; the Mac also can't build with the Ubuntu-saved images. The images generated on the Ubuntu machine open fine in Document Viewer. I am building the same LaTeX file on both computers, with exactly the same results. The header of my LaTeX file is:

        \documentclass[12pt]{iopart}
        \usepackage{graphicx}
        \usepackage{epstopdf}
        \usepackage{parskip}
        \usepackage{color}
        \usepackage{iopams}

    And the code for the figure is:

        \begin{figure}
        \center{\includegraphics[width=4in]
        {./figures4/figure4a.eps}}
        \footnotesize{\caption{
        \label{fig:4a}
        (4a) lorem ipsum dolor sic amet.}}
        \end{figure}

    I'd be happy to send an example of both .eps files. Again, sorry if this is a dumb question. I tried everything I could think of before posting here. Thanks, David
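
    The first error message points at the likely cause: the epstopdf package converts .eps figures to PDF on the fly by calling an external program, and that call requires pdflatex's shell-escape feature to be enabled. A minimal sketch of building with it turned on (manuscript.tex stands in for the real file name):

        # Enable shell escape for this run so epstopdf can convert the figures
        pdflatex -shell-escape manuscript.tex

    In Kile or TeXworks the equivalent is adding -shell-escape to the pdflatex invocation in the tool configuration; on some TeX Live versions restricted shell escape is already on and epstopdf works without this flag, which would explain why one machine builds and the other doesn't.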

  • How do I load .tmx files with Slick2D?

    - by mbreen
    I just started using Slick2D and learned how simple it is to load in a tile map and display it. I tried at least a dozen different .tmx files from numerous examples to see whether the file itself was corrupted. Every time, I get this error:

        Exception in thread "main" java.lang.RuntimeException: Resource not found: data/maps/desert.tmx
            at org.newdawn.slick.util.ResourceLoader.getResourceAsStream(ResourceLoader.java:69)
            at org.newdawn.slick.tiled.TiledMap.<init>(TiledMap.java:101)
            at game.Game.init(Game.java:17)
            at game.Tunneler.initStatesList(Tunneler.java:37)
            at org.newdawn.slick.state.StateBasedGame.init(StateBasedGame.java:164)
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:390)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at game.Tunneler.main(Tunneler.java:29)

    Here is my Game class:

        package game;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;
        import org.newdawn.slick.tiled.TiledMap;

        public class Game extends BasicGameState {
            private int stateID = -1;
            private TiledMap map = null;

            public Game(int stateID) {
                this.stateID = stateID;
            }

            public void init(GameContainer container, StateBasedGame game) throws SlickException {
                map = new TiledMap("data/maps/desert.tmx", "maps"); // ERROR
            }

            public void render(GameContainer container, StateBasedGame game, Graphics g) throws SlickException {
                //map.render(0,0);
            }

            public void update(GameContainer container, StateBasedGame game, int delta) throws SlickException {
            }

            public int getID() { return stateID; }
        }

    I've tried to see if anyone else has had similar problems but haven't turned up anything. I am able to load other files, so I don't believe it's a compiler issue; my menu class can load images and display them just fine. Also, the file path is correct. Please let me know if you have any pointers that might help me sort this out.

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used not just to version the files, but also to replicate them to other machines for redundancy and transfer of IP. Let's suppose, for the sake of example, that a goal is to provide good performance if it were handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance, and we're planning to use solid-state drives exclusively. There is a reason why we don't want to use an RDBMS for primary storage, but that's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open source preferred, since the final product is to be FOSS too.

  • Copying files to a TrueCrypt file container hangs

    - by Wagner Maestrelli
    I have a dual-boot installation with Windows 7 Ultimate (32-bit, NTFS file system) and Ubuntu 10.10 (32-bit, ext4 file system), and version 7.0a of TrueCrypt installed in both operating systems. On the Windows 7 HDD I have a 150 GB encrypted file container. It is a standard, dynamic file container, which means it's not hidden and uses a sparse file; it was created with the Windows version of TrueCrypt. When I log on in Windows the container is mounted as drive E: and everything works fine. In Ubuntu, the Windows NTFS file system is automatically mounted after I log on (I configured that using the ntfs-config package), and in my ~/.profile I have this line to mount the TrueCrypt file container:

        truecrypt /media/7EDEBCFADEBCABB1/Users/Wagner/hd/hd.tc /media/truecrypt1

    The file container is mounted after logon without any problem: I can access it, copy files to and from it, and so on. But when I try to copy relatively large amounts of data (~50 MB) to it via nautilus or cp -R, it starts the copy, copies some data up to a certain point, and then just hangs. The progress bar does not move any more and nothing happens; there is no error, it just hangs, and I have to kill the process myself. This problem does not happen in Windows: I can copy very large amounts of data to the container and it works great. But in Ubuntu the problem always happens; whenever I try to copy a bunch of files together, the copy process hangs. Has anyone ever faced this problem? Can anyone help? Thanks!

  • Search inside Xournal files (.xoj)

    - by Javad Sadeqzadeh
    I'm a big fan of Evernote and use it regularly, but it has a 60 MB storage limit (and although text files are not going to occupy much space, the limitation still concerns me). Today I installed Xournal, which has great features like annotating, nice backgrounds, freehand shapes and notes, saving in PDF format, and many more. But the big problem is that, as far as I've noticed, there is no built-in feature for searching inside the notes (created by Xournal with the .xoj suffix). I used the Catfish file-search application (which builds bash commands for full-text search), but it couldn't help either. Is there any way to search inside a .xoj file at all? If so, Xournal could be a suitable alternative to Evernote, if you put your .xoj files on a cloud service (which certainly offers much more storage space than 60 MB). If not, is there any other convenient app similar to Evernote, but with a higher storage limit or no limit at all? Somebody suggested the Zim desktop wiki app, which looks great, but I'm not sure whether I could copy and paste everything there (a mixture of photos, tables, and text with various formats and highlights), as I do with Evernote. A very useful tool I use is the Evernote Web Clipper browser extension; having a desktop client like Everpad is a plus, but not an absolute need. PS: I use Pocket, so please don't suggest that (it only preserves links, which might change over time, not the actual text). I also use Google Drive/Docs, and I don't like it for this purpose either: it's too slow and has neither a browser extension nor a desktop client. Thank you so much in advance.
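
    One avenue worth noting: a .xoj file is gzip-compressed XML, with the typed text of each note stored in plain text elements, so ordinary command-line tools can search inside it. A minimal sketch, assuming the notes live under ~/Notes (a hypothetical path):

        # List the .xoj files whose text layer contains the search term
        zgrep -li 'search term' ~/Notes/*.xoj

    zgrep transparently decompresses gzip data before matching, so no intermediate files are needed; -l prints only the matching file names and -i makes the match case-insensitive. Handwritten strokes, of course, have no text to find this way.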

  • Removing specific part of filename (what's after the second dash) for all files in a folder

    - by Bodo
    I use the command-line utility youtube-dl to download videos from YouTube and make MP3s from them with avconv. I'm doing this under Ubuntu 14.04 and am very happy with it. The utility downloads the files and saves them with the following naming scheme:

        TITLE(artist-track)-ID.mp3

    So an actual filename looks like:

        EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3

    Some other file names in the folder look like:

        EPIC RAP BATTLE of MANLINESS-_EzDRpkfaO4.mp3
        Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3
        Stromae - Papaoutai-oiKj0Z_Xnjc.mp3

    At first this was no problem; it didn't bother me while listening to my music in Rhythmbox. But when moving the files to a phone or other devices it is pretty confusing to see such a long name, and some players, like the Samsung ones, treat that last part (the ID after the second dash) as the album name or something similar. I'd like to write a bash script that removes what follows the second dash in the name for all files, turning this:

        Martin Garrix - Animals (Official Video)-gCYcHz2k5x0.mp3

    into this:

        Martin Garrix - Animals (Official Video).mp3

    Is it also possible to instruct youtube-dl to exclude the ID from now on? I am currently downloading with the command:

        youtube-dl --extract-audio --audio-quality 0 --audio-format mp3 URL
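
    A minimal sketch of such a script, assuming the ID is always the part after the last dash (which holds for the examples above, since the artist-title dash is surrounded by spaces while the ID dash is not):

        #!/bin/bash
        # 'Title-ID.mp3' -> 'Title.mp3': strip from the last '-' to the extension
        for f in *.mp3; do
            [[ "$f" == *-* ]] || continue    # leave files without a dash alone
            mv -n -- "$f" "${f%-*}.mp3"      # -n refuses to overwrite on a name collision
        done

    As for excluding the ID in the first place, youtube-dl's -o option controls the output filename template, so adding -o '%(title)s.%(ext)s' to the download command should produce names without the ID from then on.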

  • Lost files after installing Ubuntu

    - by Joshua Rosato
    I installed Ubuntu on my laptop over Windows. I had two partitions on one hard disk, and it seems my second partition is gone with all my files. How can I recover the old files? They weren't on the same partition as Windows. I read that the partition has probably just not been mounted, so I ran sudo fdisk -l to find all the partitions and then ran mount; however, I can't tell from the output what is not mounted, and I am also unsure how to mount the partition once I find it.

    sudo fdisk -l results:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002c6dc

        Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *     2048  486322175  243160064  83  Linux
        /dev/sda2    486324222  488396799    1036289   5  Extended
        /dev/sda5    486324224  488396799    1036288  82  Linux swap / Solaris

    sudo mount results:

        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/cgroup type tmpfs (rw)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
        none on /sys/fs/pstore type pstore (rw)
        systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
        gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=joshy1)
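
    Reading the fdisk output, the only partitions present are the new Ubuntu root (/dev/sda1, covering almost the whole disk) and swap, so the old data partition no longer appears in the partition table and there is nothing left to mount. In that situation a partition-recovery scan is the usual next step. A hedged sketch, assuming the disk has not been written to much since the install:

        # Install and run TestDisk, which can scan a disk for lost partitions
        sudo apt-get install testdisk
        sudo testdisk /dev/sda

    The more the new installation writes to the disk, the lower the chance of recovery, so working from a live USB and imaging the disk first would be the cautious approach.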

  • Web browser downloads only open target folders - cannot open files

    - by Pavlos G.
    After installing the xubuntu packages in order to try Xfce, I reverted back to GNOME 2. During the first login I noticed that Thunar was now selected as the default file manager, and the Preferred Applications menu is now missing, so I could not set Nautilus as the default. I removed all the xubuntu packages (including Thunar), and when I then tried to open a folder I was asked to select the default file manager; that's how I got Nautilus back. The next problem I'm facing concerns files downloaded by web browsers: the Open and Open containing folder options produce exactly the same result. If I double-click on a downloaded file, it just opens the containing folder instead of opening the file with its associated application (e.g. LibreOffice Writer for .doc/.odt, SMPlayer for .avi/.wmv, etc.). The problem happens in both Firefox and Chrome; through Nautilus, all files open correctly. Up until now I've tried the following:

    - Delete/recreate mimeTypes.rdf in my Firefox profile
    - Create a new profile in Firefox
    - Delete/recreate ~/.local/share/applications/mimeapps.list

    I have already checked this similar article. None of them worked. Any ideas on the issue would be appreciated.
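
    Symptoms like these often come down to the desktop's MIME associations still pointing at the removed file manager. A hedged sketch of how that could be inspected and reset from a terminal, assuming the stock desktop entry names (nautilus.desktop is standard; smplayer.desktop is a guess):

        # Which handler is registered for folders, and for a sample file type?
        xdg-mime query default inode/directory
        xdg-mime query default video/x-msvideo

        # Re-point folder handling at Nautilus
        xdg-mime default nautilus.desktop inode/directory

    If the queries report a leftover Thunar entry, re-pointing each affected type the same way (e.g. xdg-mime default smplayer.desktop video/x-msvideo) would explain and fix why 'Open' falls through to the folder handler.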

  • Keeping files that are often changed in sync between desktop and laptop

    - by N.N.
    I'm looking for a way to keep a desktop and a laptop in sync. What I want to keep in sync are some folders, mainly ~/Documents, that change often while I'm working on them. If it matters, I can connect to my desktop from anywhere via a URL, but my laptop is harder to reach, since it might be behind NAT and the like. I have been looking at Ubuntu One, but it doesn't seem to go well with working on documents written in LaTeX. If I work on a .tex file in the Ubuntu One directory and compile it (with pdflatex) every now and then (as often as every 10 seconds while working), it creates several new files, including a PDF, which get uploaded to Ubuntu One; this seems wasteful, since it causes continuous uploads while I work on .tex files. I also usually keep .tex documents under version control with git, and then every commit (which can also happen frequently) causes an upload (through changes in ./.git), so uploads happen continuously while I work. Another example is editing images that are saved often. What I think would be best is for the sync to happen every ten minutes or at the end of every working session (but maybe there is some other way to handle this?).
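
    Since the desktop is reachable from anywhere, one low-tech possibility is a periodic one-way push with rsync over SSH, excluding the LaTeX build products. A minimal sketch, assuming the desktop accepts SSH at desktop.example.org (a placeholder):

        # Push ~/Documents to the desktop, skipping LaTeX build artifacts
        rsync -az --delete \
            --exclude '*.aux' --exclude '*.log' --exclude '*.out' \
            ~/Documents/ user@desktop.example.org:Documents/

    Run from cron every ten minutes, or by hand at the end of a session, this matches the batching described above; a two-way tool such as unison would be the next step if edits can happen on both machines between syncs.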

  • Any good reason to open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS, and Windows) cut a file off when a byte with value 0x1A is found. These transformations seem a little useless to me. People share files between computers with different operating systems, and text mode causes some data to be handled differently across platforms, so when this matters, one would probably use binary mode instead. As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode does not treat CR as part of the line-ending sequence, so applications have to filter that noise themselves. Older Mac versions use only CR as the line ending in text mode, so neither UNIX nor Windows would understand their files. If this matters, a portable application would probably implement the parsing itself instead of relying on text mode. Implementing newline interpretation in the parser might also remove some of the overhead of using text mode, since buffers need to be rewritten (and possibly resized) before being returned to the application, which may be less efficient than doing the work in the application instead. So, my question is: is there any good reason to still rely on the host OS to translate line endings and truncate files?

  • FTP gives me an error when uploading and deleting files [on hold]

    - by AR Games
    Here's the error I get when trying to delete files:

        Command:  DELE index.html
        Response: 550 Delete operation failed.

    Here's the error I get when trying to upload files:

        Command:  OPTS UTF8 ON
        Response: 200 Always in UTF8 mode.
        Status:   Connected
        Status:   Starting upload of C:\wamp\www\.DS_Store
        Command:  CWD /var/www/html
        Response: 250 Directory successfully changed.
        Command:  TYPE A
        Response: 200 Switching to ASCII mode.
        Command:  PASV
        Response: 227 Entering Passive Mode (76,185,76,101,78,222).
        Command:  STOR .DS_Store
        Response: 553 Could not create file.
        Error:    Critical file transfer error
        Status:   Retrieving directory listing...
        Command:  TYPE I
        Response: 200 Switching to Binary mode.
        Command:  PASV
        Response: 227 Entering Passive Mode (76,185,76,101,23,94).
        Command:  LIST
        Response: 150 Here comes the directory listing.
        Response: 226 Directory send OK.
        Status:   Directory listing successful
        Response: 421 Timeout.
        Error:    Connection closed by server
        Status:   Disconnected from server

    I'm running Windows and using the FileZilla FTP client.
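
    The 550 and 553 responses are the server refusing the operations, which in practice usually means the FTP login does not have write permission on /var/www/html. A hedged sketch of what to check on the server side, assuming shell access and that the FTP account is called ftpuser (a placeholder):

        # See who owns the web root and whether it is writable
        ls -ld /var/www/html

        # If it belongs to root, hand it to the FTP account
        sudo chown -R ftpuser:ftpuser /var/www/html

    The '200 Always in UTF8 mode.' banner suggests the server is vsftpd, in which case write_enable=YES must also be set in /etc/vsftpd.conf for uploads and deletes to be allowed at all.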

  • Convert uploaded video files to mp4 using PHP [closed]

    - by Subin
    I created a PHP video uploading script. I need to convert the uploaded files to MP4 for an HTML5 video player, using PHP, while uploading. How can I do that? Here is the PHP code:

        <?php
        if(isset($_POST['submit'])){
            $user=$_COOKIE['VisitorName'];
            include('config.php');
            session_start();
            $session_id='1'; //$session id
            $path = "/home/simsu/subins/videos/data/videos/";
            $valid_formats = array("wmv", "ogv", "mp4", "3gp", "ogg");
            if(isset($_POST) and $_SERVER['REQUEST_METHOD'] == "POST") {
                $name = $_FILES['uploadedfile']['name'];
                $size = $_FILES['uploadedfile']['size'];
                if(strlen($name)) {
                    list($txt, $ext) = explode(".", $name);
                    if(in_array($ext,$valid_formats)) {
                        if($size<(100024*100024)) {
                            $actual_image_name = $path.time().".mp4";
                            $tmp = $_FILES['uploadedfile']['tmp_name'];
                            $upurl="http://vtube.subins.com/files/video?vid=".time();
                            $title=$_POST['vn'];
                            mysql_query("INSERT INTO videos(title,user,url,vid,ext) VALUES ('$title', '$user','$upurl',NOW(),'$ext')");
                            echo '<br><h1>'.$_FILES['uploadedfile']['name'] . " uploaded.</h1>";
                        }
                        else
                            echo "<br><h1>Video file size max 100 MB";
                    }
                    else
                        echo "<br><h1>Invalid file format..";
                }
                else
                    echo "<br><h1>Please select a video..!";
                exit;
            }
        }
        ?>
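
    PHP itself cannot transcode video; the usual approach is to shell out to a converter such as ffmpeg or avconv after the upload completes. A minimal sketch of the command such a script would invoke, with input.wmv and output.mp4 as placeholder names (H.264 video plus AAC audio is the common baseline for HTML5 playback):

        # Convert an uploaded file to an HTML5-friendly MP4
        ffmpeg -i input.wmv -c:v libx264 -c:a aac output.mp4

    From PHP this would typically be run with exec() or, for long videos, handed to a background queue, since transcoding takes far longer than a web request should.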

  • accessing files on a shared folder via IIS

    - by Darkcat Studios
    I'm not sure whether this suits Stack Exchange, Server Fault, or here, so I'll go with here for a start. I'm having issues setting up a network share to be accessed by IIS; all I need to do is read and write files on the other server. We have two servers (both 2008 R2 & IIS 7.5): a WEB server, which is externally accessible and NOT part of the domain, and an Intranet server, which has no internet connectivity and is part of the domain. These two servers can talk to each other happily. I have the SQL Server instance on the WEB server shared across to the Intranet server so that the web content is editable from the intranet. I can share a folder on the web server (say, wwwroot/Images/) and connect to it from the Intranet server, even have it as a mapped drive (though I know that's not going to work for IIS to access it), so there seems not to be a connectivity issue. I can also set up a virtual folder in IIS on the Intranet server. This is where it gets annoying: I can't connect using pass-through authentication, because there is no suitable user on the web server (which is not on the domain). If I set up a user on the web server, e.g. Intranet_USR, and give it appropriate rights to the folder, files, and share, I can connect, but can only view folder contents in IIS, not read the files, although that user has read privileges! Any help much appreciated!
