Search Results

Search found 39784 results on 1592 pages for 'ignore files'.

  • Is it possible to run multiple mongod instances on a single set of database files

    - by 9point6
    We have large multi-gigabyte data sets on which we run very complex queries, for example { $or: [ { id: 30000001, ... }, { id: 30000005, ... }, ..., { id: 30001005, ... } ] } It seems that CPU is actually the bottleneck at this point, so it would be advantageous to be able to run multiple mongod instances against the same set of database files. We've considered using replica sets to this end, but would prefer not to spend the extra disk space simply to gain CPU headroom.

    Read the article

  • "Hide file names" for all files in Windows 7?

    - by Saebin
    So, I just discovered that you can hide the filenames of pictures and videos when you are in a thumbnail view in Explorer (via View - Hide file names)... but often I have other files mixed in, which causes the thumbnails to be all spaced out. How can I hide all file names (maybe folders too)?

    Read the article

  • FAT32 4 GB+ files

    - by zm15
    I'm having the problem of needing a hard drive that can be written to by both a Mac and a PC. I have found that FAT32 might be an option, but as a video editor I often deal with files over its 4 GB limit. And since Mac OS X can't write to NTFS out of the box (only via third-party software), I'm still considering FAT32. I'm curious: what happens when you try to write a file that is over 4 GB? What is a good way around this?
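
    For what it's worth: on FAT32 the write simply fails at the 4 GB boundary (file sizes are stored as 32-bit values), usually with a "file is too large" style error. The simplest way around it is a filesystem without the limit that both OSes can use, such as exFAT; failing that, large captures can be split into pieces. A minimal splitting sketch in Python, where the file name and the 2 GiB chunk size are arbitrary examples:

        CHUNK = 2 * 1024**3   # 2 GiB per piece, safely under FAT32's 4 GiB cap
        BUF = 16 * 1024**2    # copy in 16 MiB buffers to keep memory use flat

        def split(path):
            """Write path.part000, path.part001, ... each at most CHUNK bytes."""
            with open(path, "rb") as src:
                part = 0
                while True:
                    buf = src.read(BUF)
                    if not buf:
                        break
                    with open("%s.part%03d" % (path, part), "wb") as dst:
                        written = 0
                        while buf:
                            dst.write(buf)
                            written += len(buf)
                            if written >= CHUNK:
                                break
                            buf = src.read(min(BUF, CHUNK - written))
                    part += 1

        split("capture.mov")  # example file name

    The pieces can be rejoined with cat on the Mac (or copy /b on Windows) before editing.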

    Read the article

  • Firefox trying to download local .swf files

    - by Levans
    I'm quite annoyed with my Firefox and Flash files. When I try opening a .swf file with it: if the file is on the web (via http://...), it plays normally in the browser; if the file is local (via file:///...), Firefox only shows me a dialog to download it. I tried opening a web .swf file, downloading it, then opening it locally: same result. So I guess it's a Firefox problem. I'm on Gentoo Linux, and it started today without any apparent reason.

    Read the article

  • How to make the user the owner of new files

    - by Master
    I have a VPS with CentOS 5.4. This is a production server. The problem is that when my website installs some scripts from a site, the files don't end up owned by my user, and as that user I can't even change the file permissions. Is there any way to make whichever script is writing the files give ownership to the owner of the home directory?
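
    The cleaner fix is to make the scripts run as that account in the first place (suEXEC or suPHP for Apache do exactly this); as a stopgap, a root cron job can sweep the tree and hand the files back. A hedged Python sketch, where the path is an example and the uid/gid are taken from the directory itself:

        import os

        web_root = "/home/myuser/public_html"   # example path
        stat = os.stat(web_root)                # owner of the home directory
        uid, gid = stat.st_uid, stat.st_gid

        # Walk the tree and chown everything the install scripts left behind.
        # Must run as root, e.g. from cron.
        for dirpath, dirs, files in os.walk(web_root):
            for name in dirs + files:
                os.chown(os.path.join(dirpath, name), uid, gid)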

    Read the article

  • Mistakenly deleted files on my Windows OS

    - by Joshua. O
    I was trying to install another OS (Ubuntu) on my laptop, which already had a Windows 7 installation. During the Ubuntu installation, I mistakenly clicked on the first option, which was to erase the disk and install. It was only after the installation that I realized the mistake. My question is: how can I recover my original files from the Windows installation even though I accidentally reformatted and installed Ubuntu?

    Read the article

  • Extract audio files from PowerPoint

    - by curious2know
    I recorded some audio files in my PowerPoint presentation. This was done in two ways: (1) for some slides I used PowerPoint's record-narration feature (the audio on each slide was recorded separately), and (2) for others I used Audacity to record the audio, which I imported into PowerPoint. I need to extract the audio file from each slide, because I need to send just the audio files to someone. Is there a way I can extract them? Thanks
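
    If the presentation is saved in the newer .pptx format (Office 2007 and later), it is an ordinary zip archive and the embedded media can be pulled out directly; this does not apply to the older binary .ppt format. A small Python sketch, with "talk.pptx" and the output folder as example names:

        import zipfile

        # Embedded audio (plus images and video) lives under ppt/media/
        # inside the .pptx zip container.
        with zipfile.ZipFile("talk.pptx") as pptx:
            for name in pptx.namelist():
                if name.startswith("ppt/media/"):
                    pptx.extract(name, "extracted")
        # Files land in extracted/ppt/media/, e.g. media1.wav, media2.mp3.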

    Read the article

  • sp_send_dbmail attach files stored as varbinary in database

    - by Mindstorm Interactive
    I have a two-part question relating to sending query results as attachments using sp_send_dbmail. Problem 1: only basic .txt files will open; any other format like .pdf or .jpg arrives corrupted. Problem 2: when attempting to send multiple attachments, I receive one file with all the file names glued together. I'm running SQL Server 2005 and I have a table storing uploaded documents: CREATE TABLE [dbo].[EmailAttachment]( [EmailAttachmentID] [int] IDENTITY(1,1) NOT NULL, [MassEmailID] [int] NULL, -- foreign key [FileData] [varbinary](max) NOT NULL, [FileName] [varchar](100) NOT NULL, [MimeType] [varchar](100) NOT NULL ) I also have a MassEmail table with standard email stuff. Here is the SQL send-mail script (for brevity, I've excluded the declare statements): while ( (select count(MassEmailID) from MassEmail where status = 20 ) > 0) begin select @MassEmailID = Min(MassEmailID) from MassEmail where status = 20 select @Subject = [Subject] from MassEmail where MassEmailID = @MassEmailID select @Body = Body from MassEmail where MassEmailID = @MassEmailID set @query = 'set nocount on; select cast(FileData as varchar(max)) from Mydatabase.dbo.EmailAttachment where MassEmailID = ' + CAST(@MassEmailID as varchar(100)) select @filename = '' select @filename = COALESCE(@filename + ',', '') + FileName from EmailAttachment where MassEmailID = @MassEmailID exec msdb.dbo.sp_send_dbmail @profile_name = 'MASS_EMAIL', @recipients = '[email protected]', @subject = @Subject, @body = @Body, @body_format = 'HTML', @query = @query, @query_attachment_filename = @filename, @attach_query_result_as_file = 1, @query_result_separator = '; ', @query_no_truncate = 1, @query_result_header = 0; update MassEmail set status = 30, SendDate = GetDate() where MassEmailID = @MassEmailID end I am able to successfully read files from the database, so I know the binary data is not corrupted. .txt files only open when I cast FileData to varchar, but clearly the original headers are lost. It's also worth noting that the attachment file sizes differ from the original files, most likely due to improper encoding as well. So I'm hoping there's a way to create file headers using the stored mime type, or some way to include file headers in the binary data? I'm also not confident in the values of the last few parameters, and I know the COALESCE is not quite right, because it prepends the first file name with a comma. But good documentation is nearly impossible to find. Please help!

    Read the article

  • Problems with jQuery getJSON using local files in Chrome

    - by Tauren
    I have a very simple test page that uses XHR requests with jQuery's $.getJSON and $.ajax methods. The same page works in some situations and not in others. Specifically, it doesn't work in Chrome on Ubuntu. I'm testing on Ubuntu 9.10 with Chrome 5.0.342.7 beta and Mac OS X 10.6.2 with Chrome 5.0.307.9 beta. It works correctly when the files are installed on a web server, from both Ubuntu/Chrome and Mac/Chrome (try it out here). It works correctly when the files are installed on the local hard drive in Mac/Chrome (accessed with file:///...). It FAILS when the files are installed on the local hard drive in Ubuntu/Chrome (accessed with file:///...). The small set of 3 files can be downloaded in a tar/gzip file from here: http://issues.tauren.com/testjson/testjson.tgz When it works, the Chrome console will say: XHR finished loading: "http://issues.tauren.com/testjson/data.json". index.html:16Using getJSON index.html:21 Object result: "success" __proto__: Object index.html:22success XHR finished loading: "http://issues.tauren.com/testjson/data.json". index.html:29Using ajax with json dataType index.html:34 Object result: "success" __proto__: Object index.html:35success XHR finished loading: "http://issues.tauren.com/testjson/data.json". index.html:46Using ajax with text dataType index.html:51{"result":"success"} index.html:52undefined When it doesn't work, the Chrome console will show this: index.html:16Using getJSON index.html:21null index.html:22Uncaught TypeError: Cannot read property 'result' of null index.html:29Using ajax with json dataType index.html:34null index.html:35Uncaught TypeError: Cannot read property 'result' of null index.html:46Using ajax with text dataType index.html:51 index.html:52undefined Notice that it doesn't even show the XHR requests, although the success handler is run. I swear this was working previously in Ubuntu/Chrome, and am worried something got messed up. I already uninstalled and reinstalled Chrome, but that didn't help. Can someone try it out locally on your Ubuntu system and tell me if you have any troubles? Note that it seems to be working fine in Firefox.
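
    For what it's worth, this failure pattern (callbacks firing with null data and no XHR logged) matches Chrome's restriction on XMLHttpRequest to file:/// URLs, which I believe was landing in Chrome 5 builds around this time; that would explain why the newer Ubuntu build fails while the older Mac build still works. For local testing only, the restriction can be lifted with a startup flag, sketched here via Python (the binary name and path are examples):

        import subprocess

        # Launch Chrome with local-file XHR allowed -- useful only for local
        # testing, not something to ask end users to do.
        subprocess.run([
            "google-chrome",
            "--allow-file-access-from-files",
            "file:///home/user/testjson/index.html",
        ])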

    Read the article

  • Dreamweaver and GZIP files

    - by Vian Esterhuizen
    Hi, I've recently tried to optimize my site for speed and bandwidth. Amongst many other techniques, I've used GZIP on my .css and .js files. Using PuTTY I compressed the files on my site and then used: <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{HTTP:Accept-encoding} gzip RewriteCond %{HTTP_USER_AGENT} !Konqueror RewriteCond %{REQUEST_FILENAME}.gz -f RewriteRule ^(.*)\.css$ $1.css.gz [QSA,L] RewriteRule ^(.*)\.js$ $1.js.gz [QSA,L] <FilesMatch \.css\.gz$> ForceType text/css </FilesMatch> <FilesMatch \.js\.gz$> ForceType text/javascript </FilesMatch> </IfModule> <IfModule mod_mime.c> AddEncoding gzip .gz </IfModule> in my .htaccess file so that they get served properly, because all my links are without the ".gz". My problem is that I can't work on the gzipped files in Dreamweaver. Is there a plugin or extension of some sort that allows Dreamweaver to temporarily uncompress these files so it can read them? Or is there a way I can work on my local copies as regular files and have them compressed automatically on the server side when they are uploaded? Or is there a different code editor I should be using that would get around this completely? Or just a different technique for doing this? I hope this question makes sense. Thanks
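
    If the host has mod_deflate available, letting Apache compress .css/.js on the fly sidesteps the problem entirely: the files on disk stay plain and editable. Otherwise, the "compress at upload time" idea is easy to script so the editable sources never change. A hedged Python sketch, with the site path as an example:

        import gzip
        import os
        import shutil

        site_root = "/var/www/mysite"   # example path; run after uploading

        # Produce a .gz sibling for every stylesheet and script, so the
        # rewrite rules above can serve style.css.gz while style.css
        # stays the editable source.
        for dirpath, _dirs, files in os.walk(site_root):
            for name in files:
                if name.endswith((".css", ".js")):
                    src = os.path.join(dirpath, name)
                    with open(src, "rb") as f_in, \
                         gzip.open(src + ".gz", "wb") as f_out:
                        shutil.copyfileobj(f_in, f_out)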

    Read the article

  • What is the best way to stream an audio file to website users/listeners

    - by Naveen Chamikara Gamage
    I'm developing a music site which will stream audio files stored on a server to users; the audio will be played through a Flash player placed in a web page. I've heard I need to use a streaming media server for streaming audio files (around 2 MB to 3 MB in size). Do I? I found some streaming media server software like http://www.icecast.org, but according to its documentation it is meant for streaming radio stations and live streams, whereas I just need to stream audio files quickly, at low size (low bandwidth), with good quality. I've also heard that I need to encode the audio files first, send them to listeners, and have them decoded again on the listener's end. Is that true, and how would I do it? If I need a special web server, where should I host my files? Any good hosting providers? If I host the audio files on a normal web server they will be delivered over HTTP/TCP, but I've read that HTTP and TCP are not good for multimedia purposes like streaming audio and video, being meant for delivering HTML and the like, and that I should use RTSP or UDP for streaming audio. What should I use? I know that .MP3 gives much better quality than many other formats but also larger files; which format should I use? Most of the best-quality audio files are more than 7 MB, so I'm planning to convert them myself using some software, to get smaller files with a reasonable level of quality. If I'm converting my audio files, what is a good bitrate to use? Any recommended software for converting audio while keeping the quality at a good level? Note: I know I won't have complex requirements at the beginning of the site, but I wanted to know the best approaches, like the ones soundcloud.com uses.
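
    For files of this size, plain HTTP "progressive download" from an ordinary web server is usually enough: a Flash player buffers and starts playback before the download finishes, and a dedicated RTSP server mainly matters for live streams or seeking inside very long files. On bitrate, 128 kbps MP3 is the usual web-streaming compromise between size and quality. A conversion sketch in Python, assuming ffmpeg with the libmp3lame encoder is installed (file names are examples):

        import subprocess

        # Re-encode a source file to 128 kbps MP3, a common size/quality
        # trade-off for web playback. Requires ffmpeg built with libmp3lame.
        subprocess.run(
            ["ffmpeg", "-i", "track.wav",
             "-codec:a", "libmp3lame", "-b:a", "128k",
             "track.mp3"],
            check=True,
        )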

    Read the article

  • Locking issues with replacing files on a website

    - by Moe Sisko
    I want to replace existing files on an IIS website with updated versions. Say these files are large PDF documents, which can be accessed via hyperlinks. The site is up 24x7, so I'm concerned about locking issues when a file is being updated at exactly the same time that someone is trying to read the file. The files are updated using C# code run on the server. I can think of two options for opening the file for writing. Option 1) Open the file for writing, using FileShare.Read: using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Read)) While this file is open, and a user requests the same file for reading in a web browser via a hyperlink, the document opens up as a blank page. Option 2) Open the file for writing using FileShare.None: using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None)) While this file is open, and a user requests the same file for reading in a web browser via a hyperlink, the browser shows an error. In IE 8, you get HTTP 500, "The website cannot display the page", and in Firefox 3.5, you get: "The process cannot access the file because it is being used by another process." The browser behaviour kind of makes sense and seems reasonable. I guess it's highly unlikely that a user will attempt to read a file at exactly the same time you are updating it. It would be nice if, somehow, the file update were atomic, like updating a database inside a transaction. I'm wondering if you worry about this sort of thing, and prefer either of the above options, or even have other options of your own for updating files.
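
    The usual way to get an effectively atomic swap is to write the new version to a temporary file in the same directory and then rename it over the original: a rename within one volume replaces the file in a single step, so a reader sees either the old file or the new one, never a half-written page. In .NET, File.Replace performs this kind of swap. A language-agnostic sketch of the pattern in Python (one caveat: on Windows the rename can still be refused while a reader holds the file open without delete sharing):

        import os
        import tempfile

        def atomic_write(path, data):
            """Replace path's contents without readers seeing a partial file."""
            d = os.path.dirname(path) or "."
            fd, tmp = tempfile.mkstemp(dir=d, suffix=".tmp")
            try:
                with os.fdopen(fd, "wb") as f:
                    f.write(data)
                    f.flush()
                    os.fsync(f.fileno())   # make sure the bytes are on disk
                os.replace(tmp, path)      # single-step swap over the original
            except Exception:
                os.unlink(tmp)
                raise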

    Read the article

  • Updating permissions on Amazon S3 files that were uploaded via JungleDisk

    - by Simon_Weaver
    I am starting to use Jungle Disk to upload files to an Amazon S3 bucket that corresponds to a CloudFront distribution, i.e. I can access it via an http:// URL and I am using Amazon as a CDN. The problem I am facing is that Jungle Disk doesn't set 'read' permissions on the files, so when I go to the corresponding URL in a browser I get an Amazon 'AccessDenied' error. If I use a tool like BucketExplorer to set the ACL, that URL then returns a 200. I really like the simplicity of dragging files to a network drive, and JungleDisk is the best program I've found to do this reliably without tripping over itself and getting confused. However, it doesn't seem to have an option to make the files readable. I really don't want to have to go to a different tool (especially if I have to buy it) just to change the permissions - and this seems really slow anyway, because such tools generally seem to traverse the whole directory structure. JungleDisk provides some kind of 'web access' - but this is a paid feature and I'm not sure whether it will work. S3 doesn't appear to propagate permissions down, which is a real pain. I'm considering writing a manual tool to traverse my tree and set everything to 'read', but I'd rather not do this if it is a problem someone else has already solved.
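
    The "manual tool" is only a few lines with an S3 SDK. A sketch using the modern boto3 client, where the bucket name is an example and the credentials need s3:PutObjectAcl permission:

        import boto3

        s3 = boto3.client("s3")
        bucket = "my-cdn-bucket"   # example name

        # Grant public read on every existing key so the CloudFront URLs stop
        # returning AccessDenied. New uploads still need their ACL set at
        # upload time.
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                s3.put_object_acl(Bucket=bucket, Key=obj["Key"],
                                  ACL="public-read")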

    Read the article

  • Eclipse does not refresh project files in package explorer view

    - by EugeneP
    Today I see strange behaviour in Eclipse 3.5.2 for the first time in 3 months. First, when I run a main function, it runs a previously compiled version. Say I press Ctrl+F11 in the window with an open Java class and an existing main function: usually it rebuilds the class and runs the new version, but today it runs fine even if there is a compile error in the code. So I guess it does not recompile the class. Next, more strangely, if I intentionally make a mistake in the code, Eclipse underlines those lines in red, yet the Project Explorer does not mark the files as containing errors; they stay grey, as if there were no errors. At first I did not know how to solve this problem. I tried reopening the project, restarting Eclipse, and finally rebooting the OS. After the tenth attempt, after rebooting, Eclipse said that all the project's files were "OUT OF SYNC with the file system". When I pressed Refresh (F5) on the project's root in Project Explorer, it finally marked the files with errors as containing errors, and running the main function gave the desired result. An hour of work later this happened again, with another project. All the same: no files marked red, and an old, error-free version of the class running no matter what. And since Eclipse does not report that the files are out of sync, simply pressing F5 on the project is not a reliable fix. What can you suggest?

    Read the article

  • Uncompress OpenOffice files for better storage in version control

    - by Craig McQueen
    I've heard discussion about how OpenOffice (ODF) files are compressed zip archives of XML and other data, so making a tiny change to the file can completely change the stored bytes, which means delta compression doesn't work well in version control systems. I've done basic testing on an OpenOffice file: unzipping it and then rezipping it with zero compression (I used the Linux zip utility for my testing), and OpenOffice will still happily open it. So I'm wondering if it's worth developing a small utility to run on ODF files each time, just before I commit to version control. Any thoughts on this idea? Possible better alternatives? Secondly, what would be a good and robust way to implement this little utility? Bash shell that calls zip (probably Linux only)? Python? Any gotchas you can think of? Obviously I don't want to accidentally mangle a file, and there are several ways that could happen. Possible gotchas I can think of: insufficient disk space; some other permissions issue that prevents writing the file or temporary files; the ODF document is encrypted (probably should just leave these alone; the encryption probably also causes large file changes and thus prevents efficient delta compression).
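
    A minimal Python sketch of such a utility: repack every entry with ZIP_STORED, writing to a temporary file first so a failure part-way through (disk space, permissions) can't mangle the original. Re-adding entries in their original order also keeps the mimetype entry first and uncompressed, as the ODF format expects. Encrypted documents would need to be detected and skipped; that case is not handled here.

        import os
        import zipfile

        def repack_stored(path):
            """Rewrite an ODF file with zero compression so VCS deltas stay small."""
            tmp = path + ".tmp"
            with zipfile.ZipFile(path) as src, \
                 zipfile.ZipFile(tmp, "w", zipfile.ZIP_STORED) as dst:
                for item in src.infolist():
                    # writestr inherits the archive default (ZIP_STORED), and
                    # preserving entry order leaves the mimetype entry first.
                    dst.writestr(item.filename, src.read(item.filename))
            os.replace(tmp, path)   # swap in only once the copy is complete

        repack_stored("report.odt")  # example file name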

    Read the article

  • Using Git to work with subversion: Ignoring modifications to tracked files

    - by Chris Nicola
    I am currently working with a subversion repository, but I am using git to work locally on my machine. It makes work much easier, but it also makes some of the bad behavior going on in the subversion repo quite glaring, and that creates problems for me. There is a somewhat complex local build process after pulling down the code, and it creates (and unfortunately modifies) a number of files. Obviously these changes are not meant to be committed back to the repository. Unfortunately the build process is actually modifying some tracked files (yes, most likely because someone mistakenly committed these build artifacts to the subversion repository at some point). Since these are modifications, adding them to my ignore file does nothing for me. I can avoid checking these changes back in; I simply don't stage or commit them. But having unstaged local changes means I can't rebase without first cleaning them up. What I would like to know is: is there any way to ignore future changes to a set of tracked files? Alternatively, is there another way to handle the problem I am having, or will I just have to tell whoever checked in these files to clean them up?
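
    Git has a flag for exactly this situation: marking a tracked file so the index treats it as unchanged. A sketch of a small helper that applies it to the build artifacts (the paths are examples); git update-index --no-skip-worktree reverses it. One caveat: operations that genuinely need to rewrite those files, such as a checkout that changes them upstream, may still stop and complain.

        import subprocess

        # Flag tracked files that the local build keeps modifying, so
        # `git status`, `git diff`, and rebases stop tripping over them.
        artifacts = ["build/config.generated.xml", "lib/version.h"]

        for path in artifacts:
            subprocess.run(["git", "update-index", "--skip-worktree", path],
                           check=True)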

    Read the article

  • Website --> Add Reference also creates files with pdb extensions

    - by SourceC
    Hello. Q1: Any assembly stored in the Bin directory is automatically referenced by the web application. We can add an assembly reference via Website --> Add Reference, or simply by copying the DLL into the Bin folder. But I noticed that when we add a reference via Website --> Add Reference, additional files with a .pdb extension are placed inside Bin. If these files are needed, then why does the reference still work when we only place the referenced DLL into Bin, without the .pdb files? Q2: It appears that if you add a new item to a web project, the class is automatically added to the project and we can reference it from all pages in that project. So are all files added to the project automatically referenced? Thanks. EDIT: On your second question: you are adding a public class to a namespace, so it will be visible to other classes in that project and in that namespace. I don't know much about assemblies, but I'd assume the reason a class added to the project is visible to other classes in that project is simply that in a web project all classes get compiled into a single assembly, and public classes contained in the same assembly are always visible to each other? Much appreciated.

    Read the article

  • On Windows 7: Same path but Explorer & Java see different files than Powershell

    - by Ukko
    Submitted for your approval, a story about a poor little Java process trapped in the twilight zone... Before I throw up my hands and just say that the NTFS partition is borked, is there any rational explanation for what I am seeing? I have a file with a path like this: C:\Program Files\Company\product\config\file.xml I am reading this file after an upgrade and seeing something wonky: Eclipse and my Java app still see the old version of this file, while some other programs see the new version. The test that convinced me it was not my fat finger that caused the problem was this: in Explorer I entered the above path, and Explorer displayed the old version of the file. Forcing Explorer to reload via Ctrl-F5 still yields the old version. This is the behavior I get in Java. Now in PowerShell I enter more "C:\Program Files\Company\product\config\file.xml" (I cut and paste the path from Explorer to make sure I am not screwing anything up), and it shows me the new version of the file. So, for the programming aspect of this: is there a cache or some system component that would be storing this stale reference? Am I responsible for checking or resetting that for some class of files? I can imagine somebody being "creative" in how XML files are processed to provide some bell or whistle. But it could be a case of just being borked. Any insights appreciated... Thanks!

    Read the article

  • WPD on XP, Vista, and 7 (need to transfer photo and video files)

    - by Bradley Dean
    I need to transfer files (still photos and videos) from any portable device a user may connect (still camera, video camera, mobile phone, etc.). I don't need to worry about plain storage devices, as those have drive letters, and I only care about existing files; I don't care about live video, preview video, taking new pictures, etc. I originally tried WIA, which works great except that it cannot transfer video files. So then I tried WPD, following along with dimeby8's tutorial: http://blogs.msdn.com/b/dimeby8/archive/2006/09/27/774259.aspx I haven't gotten the transfer working yet (I'm converting it over to C#), but I can at least see the device and enumerate the files in Win7. In XP I get nothing. It's pointed out in this thread that WPD won't enumerate devices on XP (see Lisa O [MSFT]'s post): http://social.msdn.microsoft.com/Forums/en/windowssdk/thread/56459945-b757-45df-8c9f-4ebdbbb18a2c So WIA is out because it won't do video, and WPD is out because it won't do XP. Has anyone gotten this to work? Am I missing something simple here? Thanks.

    Read the article
