Search Results

Search found 74849 results on 2994 pages for 'file folder'.

Page 193 of 2994

  • Can't mark email read with InterIMAP, folder is read-only

    - by Morri
    I'm trying to mark emails read (\Seen) with InterIMAP, but this doesn't work. I stepped through the code with a debugger and found that the response from the mail server is "IMAP0078 OK Store ignored with read-only mailbox.", which pretty much tells me why it doesn't work. But it looks like there's no way to tell InterIMAP to open the connection as read-write. If I use something like Thunderbird, I can set the messages as read. Does anyone know how I should use InterIMAP to achieve what I'm trying, or how to change the source code so that I'd be able to mark messages as read?
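
    For what it's worth, the underlying IMAP issue is whether the mailbox is opened with SELECT (read-write) or EXAMINE (read-only). A minimal sketch of the read-write flow using Python's imaplib (not InterIMAP; server, credentials, and folder are placeholders):

        import imaplib

        conn = imaplib.IMAP4_SSL('imap.example.com')  # hypothetical server
        conn.login('user', 'password')

        # SELECT opens the mailbox read-write; EXAMINE would open it
        # read-only, which is what makes the server ignore STORE commands.
        conn.select('INBOX', readonly=False)

        # Mark message 1 as seen via STORE +FLAGS.
        conn.store('1', '+FLAGS', '\\Seen')
        conn.logout()

    If InterIMAP issues EXAMINE internally, patching it to issue SELECT instead should have the same effect.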

    Read the article

  • Deleting a single file from the trash in Mac OS X Snow Leopard

    - by SteveTheOcean
    In earlier versions of Mac OS X one could delete a file from the trash by opening a terminal window and typing rm ~/.Trash/file_i_want_to_delete. See this previous post. Unlike earlier versions, Mac OS X Snow Leopard lets one "put back" a file from the trash into its original directory. Will the rm trick still work? Testing shows it does delete the file, but what happens to the "put back" information that specifies the directory from which the file was deleted?

    Read the article

  • Where can I find a full distribution of FAR Manager (plugins bundle)?

    - by Sorin Sbarnea
    FAR Manager is a very powerful open-source file manager for Windows. Most of its power comes from plugins, but the official distribution doesn't include them. I'm looking for an updated bundle containing the most important FAR Manager plugins:

        WinSCP - SCP/FTP file transfer
        Colorer - syntax highlighting
        7-Zip - support for archiving in Zip and 7-Zip, and unarchiving almost any format
        Picture View 2 - view pictures using F3, no external viewer needed
        ConEMU - additional console support, including resize support

    Read the article

  • Online Storage and security concerns

    - by Megge
    I plan to set up a small file server. I already own a small server at HostEurope (VirtualServer L, 250GB space), but they don't offer enough space (there is the HostEurope Cloud, but paying for bandwidth isn't an option here, and video streaming should be possible).

    Requirements summarized: storage: 2TB; users: ~15; file sizes: < 100GB; should be easily reachable (mount as a network drive, or at least have solid "communication" software).

    My first question: where can I get halfway affordable online storage, and how should I connect it to my server? Getting an additional server is a bit overkill, as I know no hoster that allows 2TB on a small 2GHz dual-core, 2GB RAM machine (that would be enough by far, I just need a lot of space), and connecting it via NFS or FTP over the internet seems a bit strange and cripples performance. Do you have any advice on where I could get that storage service? (I sent HostEurope a custom request today, but they haven't answered yet. If they can provide me with that space, this question becomes irrelevant, but the second one is the more important one anyway; just recommend services based on experience, you don't have to crawl for hours through hosting services.) Livedrive, for example, offers 5TB for 17€/month; I'd be happy with 2TB for 20€. The caveat: it doesn't allow multiple users, which leads me to my second question.

    Where are the security problems? Which protocol is sufficient (I want private and "public" folders etc., the usual "every user has their own space plus a public space" thing), secure, and fast? I'd tend toward (S)FTP; the problem with FTP is that most of those hosting services don't even allow FTP with multiple users, and single users lead me into "hacking" a solution (you could map the basic folder structure on the main server and just mount every subfolder from the storage, though things get difficult with a public folder with 644 permissions). Is using something like PKI or 802.1X overkill for private use?

    Read the article

  • .exe File becomes corrupted when downloaded from server

    - by Kerri
    Firstly: I'm a lowly web designer who knows just enough PHP to be dangerous and just enough about server administration to be, well, nothing. I probably won't understand you unless you're very clear!

    The setup: I've set up a website where the client uploads files to a specific directory, and those files are made available, through PHP, for download by users. The files are generally executable files over 50MB. The client does not want them zipped, as they feel their users aren't savvy enough to unzip them. I'm using the PHP below to force a download dialogue box and hide the directory where the files are located. It's a Linux server, if that makes a difference.

    The problem: There is a certain file that becomes corrupt after the user tries to download it. It is an executable file, but when it's clicked on, a blank DOS window opens up. The original file, prior to download, opens perfectly. There are several other similar files that go through the same exact download procedure, and all of those work just fine.

    Things I've tried: I've tried uploading the file zipped, then unzipping it on the server, to make sure it wasn't becoming corrupt during upload; no luck. I've also compared the binary code of the original file to the downloaded file that doesn't work, and they're exactly the same (so the PHP isn't accidentally inserting anything extra into the file). Could it be an issue with the headers in my downloadFile function? I really am not sure how to troubleshoot this one…

    This is the download PHP, if it's relevant ($filenamereplace is defined elsewhere):

        downloadFile("../DOWNLOADS/files/$filenamereplace", "$filenamereplace");

        function downloadFile($file, $filename) {
            if (file_exists($file)) {
                header('Content-Description: File Transfer');
                header('Content-Type: application/octet-stream');
                header('Content-Disposition: attachment; filename="'.$filename.'"');
                header('Content-Transfer-Encoding: binary');
                header('Expires: 0');
                header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
                header('Pragma: public');
                header('Content-Length: ' . filesize($file));
                @flush();
                readfile($file);
                exit;
            }
        }
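
    One way to narrow down where the corruption happens, sketched here in Python with hypothetical file names: checksum the server copy and the downloaded copy. If the digests differ, stray output emitted around the headers (whitespace before <?php, or a BOM) is a common culprit with readfile()-style downloads.

        import hashlib

        def sha256(path, bufsize=1 << 20):
            """Return the SHA-256 hex digest of a file, read in chunks."""
            h = hashlib.sha256()
            with open(path, 'rb') as f:
                while True:
                    chunk = f.read(bufsize)
                    if not chunk:
                        break
                    h.update(chunk)
            return h.hexdigest()

        # Hypothetical file names for illustration.
        print(sha256('original.exe'))
        print(sha256('downloaded.exe'))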

    Read the article

  • fedora tomcat log file path

    - by Kamil
    My log file path is configured here:

        kamil@localhost tomcat$ grep "logs/" ./*
        ./log4j.properties:log4j.appender.R.File=${catalina.home}/logs/tomcat.log

    My CATALINA_HOME is:

        kamil@localhost tomcat$ sudo grep "CATALINA" ./*
        ...
        ./tomcat.conf:CATALINA_HOME="/usr/share/tomcat"

    That suggests my log file should be here, and indeed it is:

        kamil@localhost tomcat$ sudo ls /usr/share/tomcat/logs/ | grep .out
        catalina.out

    So why can't I start the server?

        kamil@localhost tomcat$ sudo tomcat start
        /usr/sbin/tomcat: line 30: /logs/catalina.out: No such file or directory
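
    A hedged reading of that error: /usr/sbin/tomcat line 30 presumably expands ${CATALINA_HOME}/logs/catalina.out, and if CATALINA_HOME isn't set in the environment sudo passes along (tomcat.conf not being sourced), the path collapses to exactly the /logs/catalina.out in the message. A quick illustration of the collapse:

        import subprocess

        # In the shell, an unset variable expands to the empty string,
        # so the log path loses its prefix.
        print(subprocess.check_output(
            'unset CATALINA_HOME; echo "${CATALINA_HOME}/logs/catalina.out"',
            shell=True).decode())
        # -> /logs/catalina.out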

    Read the article

  • How to overwrite the data in a file with bash

    - by Stefan Liebenberg
    I'm writing a bash script that encrypts the data of a folder or file:

        #!/bin/bash
        file_name=$1
        tmp_file=/tmp/tmpfile.tar

        # tar compress file
        tar -cf $tmp_file $file_name

        # encrypt file
        gpg -c $tmp_file

        # remove temp file
        rm -rf $tmp_file $file_name

        # mv encrypted file to original place
        mv ${tmp_file}.gpg $file_name

    but the data will still be recoverable by using photorec or similar methods... Is there a way to ensure the absolute deletion of the original file in bash? Thank you, Stefan
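
    On the deletion question, a minimal sketch of the usual overwrite-before-unlink approach (what shred(1) does), written in Python; note this is best-effort only, since journaling or copy-on-write filesystems and SSD wear leveling can keep stale copies of the data elsewhere:

        import os

        def overwrite_and_delete(path, passes=3, chunk=1 << 20):
            """Overwrite a file in place with random bytes, then unlink it."""
            size = os.path.getsize(path)
            with open(path, 'r+b') as f:
                for _ in range(passes):
                    f.seek(0)
                    remaining = size
                    while remaining > 0:
                        n = min(chunk, remaining)
                        f.write(os.urandom(n))
                        remaining -= n
                    f.flush()
                    os.fsync(f.fileno())
            os.remove(path)

    In the script above, something like this would replace the plain rm -rf of $tmp_file and $file_name; the cleartext tarball's blocks are the ones worth scrubbing.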

    Read the article

  • Choose Default Program does not work (is broken) on Windows

    - by Piotr Dobrogost
    For some time now, when I click Open with... | Choose Default Program in Windows Explorer's context menu, I get this error:

        This file does not have a program associated with it to perform this action. Create an association in the Set Associations control panel.

    I get this error no matter what the extension of the selected file is. Any ideas how to fix this?

    Read the article

  • Project and Business Document Organization

    - by dassouki
    How do you organize and maintain edits, revisions, and the relationships between: proposals, contracts, change orders, deliverables, and projects? How do you organize your projects for re-usability? For example, is there a way to add tags to projects to make them more accessible? What's a good data structure for dumping all my files on an internet server for easy access?

    Presently, my work folder is set up as follows:

        /work/
            /projects
                /project_a
                    /final (includes all final documents)
                        /contracts
                        /rfp_rfq
                        /change_orders
                        /communications (logs all emails, faxes, and meeting notes and minutes)
                        /financial
                            /paid
                            /unpaid
                        /reports
                    /old (includes all documents that didn't make it into project_a/final/)
                /project_b
                    ... same as above ...
            /references
                /technical_references
                /gov_regulations
                /data_sources
                /books
                /topic_based (each area of my expertise has a folder with references in it)
            /business_contacts
                /contacts.xls (file contains all my contacts)
            /banking
                /banking.xls (contains a list of all paid and unpaid invoices as well as some cool stats)
                /quicken (to do my taxes and yada yada)
                    /year
            /education (courses I've taken)
                /webinars
                /seminars
                /online_courses
            /publications (includes the publications I've made)
                /publication_id

    We're mostly 5 people working together part-time on this thing. Since this is a very structured approach, I find it really difficult to remember what I've done on previous projects and to go back and forth easily. What are your suggestions on improving my processes? I'm open to closed- and open-source software (as long as the price isn't too high). I also want to implement a system where I can save most of the projects online to increase collaboration and efficiency and reduce bandwidth, especially on document editing. Imagine emailing a document back and forth 5-10 times a day.

    Read the article

  • DFS replication and the SYSTEM user (NTFS permissions)

    - by HopelessN00b
    A question for which I'm having trouble finding an answer on Google or TechNet: does granting the SYSTEM user permissions to DFS-shared files and folders have any effect on DFS replication? (And while we're at it, is there any good reason not to let SYSTEM have permissions to DFS-shared files?)

    It comes up because I have a collection of DFS namespaces and folders that I'm not able to make someone else's problem. While troubleshooting a problem where one DFS replica just wasn't replicating with another for no discernible reason, I observed that the SYSTEM account didn't have any permissions granted to any of the files or folders in the folder in question. So I set SYSTEM to have full control and propagated it down, and our DFS health diagnostic reports went from showing a backlog of ~80 files to a backlog of ~100,000... and things started replicating, including a number of files that had been missing on one server or the other (so more than just the permissions changes started replicating).

    Naturally, this made me curious as to whether DFS needs the SYSTEM account to have permissions to do its work, or whether perhaps any change to the folder tree in question would have prompted DFS to jump into action.

    If it matters, our DFS namespaces were set up under 2000/2003, and I have just recently finished upgrading all the servers to 2008 R2 or 2012 (with UAC enabled, blech), but have not yet gotten around to raising the DFS namespace functional levels to Server 2008. (And bonus points if anyone has an official Microsoft article on NTFS file permissions and the SYSTEM account as it pertains to DFS or network files.)

    Read the article

  • Deleting windows.edb and unchecking Indexing service lead to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on my hard drive by deleting the windows.edb indexing file. I also stopped the Windows Search service, as mentioned in the comments following the article.

    In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky in my ears at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3GB of space, yeehaw.

    However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got was "I:\photos is not accessible" / "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\.

    Here is where I enter the twilight zone... I try disconnecting the I:\ USB hard drive, but XP shows me that the J:\ drive has disconnected instead and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists the contents of the other drive at root level. I tried connecting the drives vice versa and the same thing happens. I took one of the hard drives to another computer, and when I connect it there it lists not its own contents but the contents of the other hard drive, and gives the same error as above when I try to access any of the folders (even folders on the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos). And no, this is not me mixing up my drive letters. Computer Management - Disk Management shows the same result as Explorer: the drive size is correct (one is 500GB, the other is 640GB), but the drive name is that of the opposite drive, as are the contents. Also, one drive was full of data and the other almost empty, but each incorrectly shows the free-space status of the other drive.

    Somehow the USB drives seem to have swapped file tables, file records, boot records or something; extremely weird! Even weirder, if I create a text file or folder on one of these drives, it works fine: accessing it, saving, whatever, all good. But accessing any other data on the drive gives me an error.

    Does anyone have a clue what is going on, and more importantly, how I can restore the correct folder listings to access my family photos?

    cheers, linni

    Read the article

  • Kohana3 Route with Subdirectories in Controller Folder

    - by ahmet2106
    Hello everybody. I've been using Kohana for a long time, but the new version works a little differently. My problem, with an example:

        classes/
            controller/
                pages.php
                pages/
                    imprint.php

    I want to change my route so that if I call domain.tld/pages, pages.php is called with action_index(). But if I call domain.tld/pages/imprint, pages/imprint.php should be called with action_index() (and this should work too: pages/imprint/demo calls action_demo() in pages/imprint.php).

    I tried the examples from the bootstrap and from http://kohanaframework.org/guide/api/Route, but can't get it to work. How can I do this? Any help? Thanks.

    Closed: it was my fault. The Route::set('pages', ...) must come before Route::set('default', ...).

    Read the article

  • Utility that helps in file locking - expert tips wanted

    - by maix
    I've written a subclass of file that a) provides methods to conveniently lock it (using fcntl, so it only supports Unix, which is however OK for me atm) and b) when reading or writing asserts that the file is appropriately locked. Now I'm not an expert at such stuff (I've just read one paper [de] about it) and would appreciate some feedback: Is it secure, are there race conditions, are there other things that could be done better? Here is the code:

        from fcntl import flock, LOCK_EX, LOCK_SH, LOCK_UN, LOCK_NB

        class LockedFile(file):
            """
            A wrapper around `file` providing locking. Requires a shared
            lock to read and an exclusive lock to write.

            Main differences:

            * Additional methods: lock_ex, lock_sh, unlock
            * Refuses to read when not locked, refuses to write when not
              locked exclusively.
            * mode cannot be `w` since then the file would be truncated
              before it could be locked.

            You have to lock the file yourself, it won't be done for you
            implicitly. Only you know what lock you need.

            Example usage::

                def get_config():
                    f = LockedFile(CONFIG_FILENAME, 'r')
                    f.lock_sh()
                    config = parse_ini(f.read())
                    f.close()

                def set_config(key, value):
                    f = LockedFile(CONFIG_FILENAME, 'r+')
                    f.lock_ex()
                    config = parse_ini(f.read())
                    config[key] = value
                    f.truncate()
                    f.write(make_ini(config))
                    f.close()
            """

            def __init__(self, name, mode='r', *args, **kwargs):
                if 'w' in mode:
                    raise ValueError('Cannot open file in `w` mode')
                super(LockedFile, self).__init__(name, mode, *args, **kwargs)
                self.locked = None

            def lock_sh(self, **kwargs):
                """
                Acquire a shared lock on the file.
                If the file is already locked exclusively, do nothing.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                :param nonblocking: Don't wait for the lock to be available.
                """
                if self.locked == 'ex':
                    return  # would implicitly remove the exclusive lock
                return self._lock(LOCK_SH, **kwargs)

            def lock_ex(self, **kwargs):
                """
                Acquire an exclusive lock on the file.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                :param nonblocking: Don't wait for the lock to be available.
                """
                return self._lock(LOCK_EX, **kwargs)

            def unlock(self):
                """
                Release all locks on the file.
                Flushes if there was an exclusive lock.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                """
                if self.locked == 'ex':
                    self.flush()
                return self._lock(LOCK_UN)

            def _lock(self, mode, nonblocking=False):
                flock(self, mode | bool(nonblocking) * LOCK_NB)
                before = self.locked
                self.locked = {LOCK_SH: 'sh', LOCK_EX: 'ex', LOCK_UN: None}[mode]
                return before

            def _assert_read_lock(self):
                assert self.locked, "File is not locked"

            def _assert_write_lock(self):
                assert self.locked == 'ex', "File is not locked exclusively"

            def read(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).read(*args)

            def readline(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).readline(*args)

            def readlines(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).readlines(*args)

            def xreadlines(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).xreadlines(*args)

            def __iter__(self):
                self._assert_read_lock()
                return super(LockedFile, self).__iter__()

            def next(self):
                self._assert_read_lock()
                return super(LockedFile, self).next()

            def write(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).write(*args)

            def writelines(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).writelines(*args)

            def flush(self):
                self._assert_write_lock()
                return super(LockedFile, self).flush()

            def truncate(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).truncate(*args)

            def close(self):
                self.unlock()
                return super(LockedFile, self).close()

    (The example in the docstring is also my current use case for this.)

    Thanks for having read until down here, and possibly even answering :)

    Read the article

  • Viewing large text file in a browser

    - by MeLight
    Hi, I need to write a text file viewer (not the directory tree, but the actual file contents) for use in a browser. It will be used to view large files. I want to give the user the ability to actually, ummm, browse the file, i.e. prev page & next page buttons, while each page shows only a portion of the file. Two questions:

    Is there any way to pass the file descriptor through POST (or something) so that on each page I can keep reading from an already-open file, and not start all over again (again - huge files)?

    Is there a way to read the file backwards? It would be very useful for browsing back in a file.

    Any other implementation ideas are very welcome. Thanks
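
    On the first question: a file descriptor can't survive across stateless HTTP requests, so the common workaround is to pass a byte offset instead and seek on every request. A sketch in Python with made-up names:

        def read_page(path, offset, page_size=4096):
            """Return one page of the file and the offset where the next page starts."""
            with open(path, 'rb') as f:
                f.seek(offset)
                data = f.read(page_size)
                return data, f.tell()

        # The first request starts at offset 0; each response hands the
        # client the offset to send back for the next page.
        page, next_offset = read_page('big.log', 0)

    Reading backwards then falls out of the same idea: seek to max(0, offset - page_size) and read forward from there.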

    Read the article

  • Best way to send large files point-to-point?

    - by Adam S
    I'm looking for a way to send a 10GB file to a friend. I really need to send it over the internet, but e-mail or uploading sites are not really an option. I remember using MSN messenger and having a file transfer feature that worked decently well. However, my friend doesn't have this software and doesn't want to get it. I know that the professional versions of TeamViewer have such a feature, but are there any free alternatives?

    Read the article

  • Delphi: Alternative to using Assign/ReadLn for text file reading

    - by Ian Boyd
    I want to process a text file line by line. In the olden days I loaded the file into a StringList:

        slFile := TStringList.Create;
        slFile.LoadFromFile(filename);
        for i := 0 to slFile.Count-1 do
        begin
          oneLine := slFile.Strings[i];
          //process the line
        end;

    The problem with that is that once the file gets to be a few hundred megabytes, I have to allocate a huge chunk of memory, when really I only need enough memory to hold one line at a time. (Plus, you can't really indicate progress when the system is locked up loading the file in step 1.)

    I then tried using the native, and recommended, file I/O routines provided by Delphi:

        var
          f: TextFile;
        begin
          Assign(f, filename);
          Reset(f);
          while not Eof(f) do
          begin
            ReadLn(f, oneLine);
            //process the line
          end;
          CloseFile(f);
        end;

    The problem with Assign is that there is no option to read the file without locking (i.e. fmShareDenyNone). The former StringList example doesn't support no-lock either, unless you change it to LoadFromStream:

        slFile := TStringList.Create;
        stream := TFileStream.Create(filename, fmOpenRead or fmShareDenyNone);
        slFile.LoadFromStream(stream);
        stream.Free;
        for i := 0 to slFile.Count-1 do
        begin
          oneLine := slFile.Strings[i];
          //process the line
        end;

    So now, even though I've gained no locks being held, I'm back to loading the entire file into memory. Is there some alternative to Assign/ReadLn where I can read a file line by line without taking a sharing lock? I'd rather not go directly to Win32 CreateFile/ReadFile and have to deal with allocating buffers and detecting CR, LF, and CRLF. I thought about memory-mapped files, but there's the difficulty if the entire file doesn't fit (map) into virtual memory, and having to map views (pieces) of the file at a time. Starts to get ugly. I just want Assign with fmShareDenyNone!

    Read the article

  • How to view .docx and .doc file on mac os x10.6.5

    - by Harri
    I want to view .docx and .doc files on my Mac OS X 10.6.5. When I open this type of file, it shows only the text in a text editor. The .docx file also contains some images, which I can't see. Is there a default application on my Mac to view these two file types, or do I need to download an application to view them? If so, which applications are really needed for these files? Can anyone help me? Thanks in advance...

    Read the article

  • processing an audio wav file with C

    - by sa125
    Hi - I'm working on processing the amplitude of a wav file and scaling it by some decimal factor. I'm trying to wrap my head around how to read and re-write the file in a memory-efficient way while also trying to tackle the nuances of the language (I'm new to C). The file can be in either an 8- or 16-bit format. The way I thought of doing this is by first reading the header data into some pre-defined struct, and then processing the actual data in a loop where I'll read a chunk of data into a buffer, do whatever is needed to it, and then write it to the output.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct header {
            char chunk_id[4];
            int chunk_size;
            char format[4];
            char subchunk1_id[4];
            int subchunk1_size;
            short int audio_format;
            short int num_channels;
            int sample_rate;
            int byte_rate;
            short int block_align;
            short int bits_per_sample;
            short int extra_param_size;
            char subchunk2_id[4];
            int subchunk2_size;
        } header;

        typedef struct header* header_p;

        void scale_wav_file(char * input, float factor, int is_8bit)
        {
            FILE * infile = fopen(input, "rb");
            FILE * outfile = fopen("outfile.wav", "wb");
            int BUFSIZE = 4000, i, MAX_8BIT_AMP = 255, MAX_16BIT_AMP = 32678;

            // used for processing 8-bit file
            unsigned char inbuff8[BUFSIZE], outbuff8[BUFSIZE];
            // used for processing 16-bit file
            short int inbuff16[BUFSIZE], outbuff16[BUFSIZE];

            // header_p points to a header struct that contains the file's metadata fields
            header_p meta = (header_p)malloc(sizeof(header));

            if (infile) {
                // read and write header data
                fread(meta, 1, sizeof(header), infile);
                fwrite(meta, 1, sizeof(meta), outfile);

                while (!feof(infile)) {
                    if (is_8bit) {
                        fread(inbuff8, 1, BUFSIZE, infile);
                    } else {
                        fread(inbuff16, 1, BUFSIZE, infile);
                    }

                    // scale amplitude for 8/16 bits
                    for (i=0; i < BUFSIZE; ++i) {
                        if (is_8bit) {
                            outbuff8[i] = factor * inbuff8[i];
                            if ((int)outbuff8[i] > MAX_8BIT_AMP) {
                                outbuff8[i] = MAX_8BIT_AMP;
                            }
                        } else {
                            outbuff16[i] = factor * inbuff16[i];
                            if ((int)outbuff16[i] > MAX_16BIT_AMP) {
                                outbuff16[i] = MAX_16BIT_AMP;
                            } else if ((int)outbuff16[i] < -MAX_16BIT_AMP) {
                                outbuff16[i] = -MAX_16BIT_AMP;
                            }
                        }
                    }

                    // write to output file for 8/16 bit
                    if (is_8bit) {
                        fwrite(outbuff8, 1, BUFSIZE, outfile);
                    } else {
                        fwrite(outbuff16, 1, BUFSIZE, outfile);
                    }
                }
            }

            // cleanup
            if (infile) { fclose(infile); }
            if (outfile) { fclose(outfile); }
            if (meta) { free(meta); }
        }

        int main (int argc, char const *argv[])
        {
            char infile[] = "file.wav";
            float factor = 0.5;
            scale_wav_file(infile, factor, 0);
            return 0;
        }

    I'm getting differing file sizes at the end (by 1k or so, for a 40Mb file), and I suspect this is due to the fact that I'm writing an entire buffer to the output, even though the file may have terminated before filling the entire buffer size. Also, the output file is messed up - won't play or open - so I'm probably doing the whole thing wrong. Any tips on where I'm messing up will be great. Thanks!
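
    For comparison, a sketch of the same scaling done with Python's wave and audioop modules (not the C code above; 16-bit files assumed, since audioop treats 8-bit samples as signed while WAV stores them unsigned). Looping on the frames actually returned, instead of a fixed BUFSIZE, is what keeps the output the same length as the input:

        import wave
        import audioop

        def scale_wav(src, dst, factor):
            with wave.open(src, 'rb') as win, wave.open(dst, 'wb') as wout:
                wout.setparams(win.getparams())
                width = win.getsampwidth()  # 2 bytes per sample for 16-bit
                while True:
                    frames = win.readframes(4000)
                    if not frames:
                        break  # stop at EOF instead of rewriting a stale buffer
                    # audioop.mul scales each sample and clamps on overflow.
                    wout.writeframes(audioop.mul(frames, width, factor))

        scale_wav('file.wav', 'outfile.wav', 0.5)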

    Read the article


  • How to save file using JFileChooser??

    - by Lokesh Kumar
    Hi, I have a method in my application called "Save As", which saves an image of my application into a file. I used JFileChooser to let users choose their desired location for saving the file. The problem is that unless the user explicitly types in the file extension, the file is saved with no extension. How can I offer formats like jpg and png in the "File Type" drop-down menu, and how can I get the chosen extension from that menu for saving my image file?

        ImageIO.write(image, extension, file);

    Read the article

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based on the largest UID in the system: the larger the maximum UID, the larger this file is. Thankfully it's a sparse file, so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk).

    On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huuuuuge, though--once an AD user logs in, lastlog shows up as 280GB. Its real size is still small, thankfully.

    This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256GB, so the sessreg program errors out trying to write to lastlog.

    Where is this 256GB limit coming from, and how can I increase it? As a simple test for creating large sparse files I've been doing:

        dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB

    I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256GB I can find is that ext3 filesystems with 2KB blocks are limited to 256GB files. But our hard drives are formatted with 4K blocks, so that doesn't seem to be it--not to mention this is happening in a tmpfs mounted ON TOP of the hard drive, so the ext3 partition shouldn't be a factor.

    This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300GB+ files in tmpfs filesystems no problem. On the RHEL 5.4 systems it's a no-go.
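
    The dd test can be reproduced in a few lines of Python (the path and size here are arbitrary): seeking past the end and writing a single byte creates the sparse file, and st_blocks shows how little is really allocated.

        import os

        def make_sparse(path, size):
            """Create a file of the given logical size, one real byte on disk."""
            with open(path, 'wb') as f:
                f.seek(size - 1)
                f.write(b'\0')

        make_sparse('/tmp/sparse-file', 300 * 1024**3)  # 300GB logical size
        st = os.stat('/tmp/sparse-file')
        print(st.st_size)           # logical size
        print(st.st_blocks * 512)   # bytes actually allocated on disk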

    Read the article

  • "Go to file" feature in various editors

    - by hekevintran
    In TextMate there is a feature called "Go to file" that is used for file navigation. It is a box where you type the name of a file in your project, and it uses fuzzy matching to generate a list of candidate files from which you can select. Other editors have this feature, but they each give it a different name:

        Vim: fuzzyfinder
        Emacs: fuzzy-find-in-project
        TextMate: Go to file (fuzzy)
        Eclipse: OpenResource (not fuzzy)
        Eclipse: GotoFile (fuzzy)
        Komodo: Go to File (not fuzzy)
        Netbeans: Go to file (not fuzzy)

    Does jEdit, Geany, or Ultraedit have this feature?
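
    The "fuzzy" variants are usually subsequence matching: each typed character must appear in the file name in order, with anything in between. A minimal sketch of the idea:

        import re

        def fuzzy_match(query, candidates):
            """Keep names containing the query's characters as a subsequence,
            e.g. 'gtf' matches 'go_to_file.py'."""
            pattern = re.compile('.*?'.join(map(re.escape, query)), re.IGNORECASE)
            return [c for c in candidates if pattern.search(c)]

        files = ['go_to_file.py', 'main.c', 'glob_to_fnmatch.py']
        print(fuzzy_match('gtf', files))  # ['go_to_file.py', 'glob_to_fnmatch.py']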

    Read the article
