Search Results

Search found 39200 results on 1568 pages for 'zip files'.

  • PHP: Class to parse OGG and .ogv files?

    - by Nic Hubbard
    I am looking for a PHP class that can parse .ogg and .ogv files so that I can get some of the metadata out of them, such as comments, bitrate, length, etc. I have found this: http://opensource.grisambre.net/ogg/ but after testing it, it does not seem to parse any of the files I test it with. Has anyone had luck with an alternative? I would use getID3(), but it does not support Ogg video.
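    If no pure-PHP parser turns up, one workaround is to shell out to ffprobe (part of FFmpeg, assuming it is installed on the server) and read the metadata it reports as JSON. A minimal sketch, not a drop-in replacement for a real Ogg parser:

        <?php
        // Sketch: pull basic Ogg/OGV metadata via ffprobe (assumes FFmpeg is installed).
        function ogg_metadata($path) {
            $cmd = 'ffprobe -v quiet -print_format json -show_format -show_streams '
                 . escapeshellarg($path);
            $json = shell_exec($cmd);
            if (!$json) {
                return null; // ffprobe missing or the call failed
            }
            $info = json_decode($json, true);
            return array(
                'duration' => isset($info['format']['duration']) ? $info['format']['duration'] : null,
                'bitrate'  => isset($info['format']['bit_rate']) ? $info['format']['bit_rate'] : null,
                'comments' => isset($info['format']['tags']) ? $info['format']['tags'] : array(),
            );
        }

        print_r(ogg_metadata('video.ogv'));

    The duration, bitrate and tag/comment fields cover the metadata mentioned above; stream-level details are available under the 'streams' key of the decoded JSON.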

  • Getting Classic ASP to work in .js files under IIS 7

    - by Abdullah Ahmed
    I am moving a client's classic ASP web app to a new IIS7-based server. The site contains some .js files which hold JavaScript but also classic ASP in <% %> tags, containing a bunch of conditional statements designed to spit out pieces of JavaScript based on session state variables. Here's a brief example of what such a file could look like:

        var arrHOFFSET = -1;
        var arrLeft = "<";
        var arrRight = ">";
        <% If ((Session("dashInv") = "True") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4"))) Then %>
        addMainItem("/MgmtTools/WelcomeInventory.asp?wherefrom=salesMan","",81,"center","","",0,0,"","","","","");
        <% Else %>
        <% If (Session("dashInv") = "False") And ((Session("systemLevelStaff") = "4") Or (Session("systemLevelCompany") = "4")) Then %>
        <% Else %>
        addMainItem("/calendar/welcome.asp","",81,"center","","",0,0,"","","","","");
        <% End If %>
        <% End If %>
        defineSubmenuProperties(135,"center","center",-3,0,"","","","","","","");

    Currently this file (named custom.js, for example) starts throwing JS errors, because the server does not seem to recognize the ASP code in it and therefore does not parse it. I know I need to somehow specify that a .js file should also be treated like an .asp file and run through the ASP parser, but I am not sure how to go about doing this. Here is what I've tried so far. Under the server node in IIS, under Handler Mappings, I created a new script map with the following settings:

        Request Path: *.js
        Executable: C:\Windows\System32\inetsrv\asp.dll
        Name: ASPClassicInJSFiles
        Mapping: Invoke handler only if request is mapped to: File
        Verbs: All verbs
        Access: Script

    I also created a similar handler under the site node itself. Under MIME Types, .js is defined as application/x-javascript. None of these work. If I simply rename the file to have an .asp extension then things work, but this app is poorly coded and has literally hundreds of files with the .js files included in them under various names and locations, so rename, search and replace is the last option I have.
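    For reference, the same script map expressed as configuration looks roughly like the sketch below (a web.config placed in the folder that holds these .js files; this assumes classic ASP is installed and ISAPI is enabled for the app pool, and it mirrors the GUI mapping above rather than being a verified fix). Comparing this against what the GUI actually wrote into the site's web.config can help show whether the mapping was applied at the right level:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- Hand *.js requests in this folder to the classic ASP ISAPI module. -->
            <handlers>
              <add name="ASPClassicInJSFiles"
                   path="*.js"
                   verb="GET,HEAD,POST"
                   modules="IsapiModule"
                   scriptProcessor="%windir%\system32\inetsrv\asp.dll"
                   resourceType="File" />
            </handlers>
          </system.webServer>
        </configuration>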

  • .Net Library for parsing source code files?

    - by Jörg Battermann
    Does anyone know of a good .NET library that allows me to parse source code files, and not only .NET source code files (Java, Perl, Ruby, etc. as well)? I need programmatic access to the contents of various source code files (e.g. class/method/parameter names, types, etc.). Has anyone come across something like this? I know that within .NET it is reasonably possible and there are some libraries out there, but I need that to be abstracted to more types of programming languages.

  • Time Machine (OSX) doesn't back up files in Mount Point or Disk Image File

    - by Chris
    Hi all, I found this Q&A (http://superuser.com/questions/148849/backup-mounted-drive-of-an-image-in-time-machine) and it prompted me to ask the following question: I have two disk images which are scripted to be mounted on login. These two disk images are always mounted to the same location. These two disk images are encrypted TrueCrypt volumes. Time Machine (TM) will only back up the disk images the first time they are mounted, but not after that. As I modify documents within the volumes throughout the day, the modified timestamps are adjusted properly. However, TM does not back them up. TM never backs up the mount points, which are two folders within my home directory. Any ideas as to why neither the mount points nor the image files are backed up? Do the image files have to be closed (unmounted) after being modified for TM to back them up? Thanks, Chris

  • LAMP stack security question - uploading files to server

    - by morpheous
    I am running Ubuntu 9.10 desktop on my home machine. I need to upload files from my local machine to my web server on a periodic basis. My server is running Ubuntu Server LTS. I want my server to be secure and only run the LAMP stack and, possibly, an email server. I do not (ideally) want to have FTP or anything else that can allow (more) knowledgeable hackers to hack into my server. Can anyone recommend how I may send files from my local machine to the server? This may seem an easy/trivial question, but I am relatively new to Linux, and I got my previous Windows server machine seriously hacked in the past, hence the move to Linux, and that's why I am so security conscious.
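    Since an SSH server is almost certainly already part of the setup, one common answer is to push files over SSH with rsync or scp and skip FTP entirely. A sketch, assuming an OpenSSH daemon on the server and a made-up user, host and target directory:

        # Mirror a local folder to the web server over SSH
        rsync -avz -e ssh /home/me/site-uploads/ deploy@example.com:/var/www/uploads/

        # Or copy a single file
        scp /home/me/report.csv deploy@example.com:/var/www/uploads/

    Pairing this with key-based authentication (and disabling password logins in sshd_config) keeps the attack surface down to the one SSH port.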

  • Reset Mac OS X (Snow Leopard) File Permissions -- All Files

    - by Frank
    Is there a script or process to completely reset all file system permissions to factory defaults (short of restoring from an image backup or reinstalling the OS)? It would need to cover everything I've affected: all files from / to Applications and my home folder and all their contents. I've tried using Disk Utility's First Aid 'Repair Disk Permissions', but it didn't seem to touch or affect everything (some files but not all), and I've run it twice so far. I've seen this, but it's not quite the same thing: Fixing mac user file permissions, not the system. The reason for all of this is that I accidentally ran a chmod on all files (as sudo). Working too fast, now I'm in a hole.

  • Best Practices for working with files via c#

    - by user177883
    The application I work on generates several hundred files in a 15-minute period, and the back end of the application takes these files and processes them (updates the database with those values). One problem is database locks. What are the best practices for working with several thousands of files so as to avoid locking and process the files efficiently? Would it be more efficient to create a single file and process it, or to process a single file at a time? What are some common best practices?
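    One pattern that usually eases the locking problem is to parse a whole batch of files in memory and hand the rows to the database in one bulk operation, so the target table is touched once per batch rather than once per file. A minimal sketch only; the table name, column layout and file format here are invented, and whether batching fits depends on the real schema:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        class BatchLoader
        {
            // Parse every pending file into one DataTable, then bulk-insert it in a single
            // operation instead of issuing one small transaction per file.
            static void LoadBatch(string inputDir, string connectionString)
            {
                var table = new DataTable();
                table.Columns.Add("FileName", typeof(string));
                table.Columns.Add("Value", typeof(string));

                foreach (string path in Directory.GetFiles(inputDir, "*.txt"))
                {
                    foreach (string line in File.ReadAllLines(path))
                        table.Rows.Add(Path.GetFileName(path), line);
                }

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    using (var bulk = new SqlBulkCopy(connection))
                    {
                        bulk.DestinationTableName = "dbo.ImportedValues"; // hypothetical table
                        bulk.WriteToServer(table);
                    }
                }

                // A real version would then archive or delete the processed files
                // so the next run only picks up new ones.
            }
        }

    The same idea works with a staging table plus a stored procedure if the rows need validation before they reach the live tables.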

  • Tracking unique versions of files with hashes

    - by rwmnau
    I'm going to be tracking different versions of potentially millions of different files, and my intent is to hash them to determine I've already seen that particular version of the file. Currently, I'm only using MD5 (the product is still in development, so it's never dealt with millions of files yet), which is clearly not long enough to avoid collisions. However, here's my question - Am I more likely to avoid collisions if I hash the file using two different methods and store both hashes (say, SHA1 and MD5), or if I pick a single, longer hash (like SHA256) and rely on that alone? I know option 1 has 288 hash bits and option 2 has only 256, but assume my two choices are the same total hash length. Since I'm dealing with potentially millions of files (and multiple versions of those files over time), I'd like to do what I can to avoid collisions. However, CPU time isn't (completely) free, so I'm interested in how the community feels about the tradeoff - is adding more bits to my hash proportionally more expensive to compute, and are there any advantages to multiple different hashes as opposed to a single, longer hash, given an equal number of bits in both solutions?
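    For reference, the single-longer-hash option is only a few lines in most frameworks; a sketch in C# (an assumption here, since the question does not say which platform the product uses):

        using System;
        using System.IO;
        using System.Security.Cryptography;

        class FileHasher
        {
            // Returns the SHA-256 digest of a file as a lowercase hex string.
            static string Sha256OfFile(string path)
            {
                using (var sha = SHA256.Create())
                using (var stream = File.OpenRead(path))
                {
                    byte[] hash = sha.ComputeHash(stream);
                    return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
                }
            }

            static void Main(string[] args)
            {
                Console.WriteLine(Sha256OfFile(args[0]));
            }
        }

    Computing a second, different hash per file roughly doubles the CPU cost of hashing each version (the file read can be shared in one pass), which is part of the trade-off being asked about.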

  • Need help manipulating WAV (RIFF) Files at a byte level

    - by Eric
    I'm writing an application in C# that will record audio files (*.wav) and automatically tag and name them. Wave files are RIFF files (like AVI) which can contain metadata chunks in addition to the waveform data chunks. So now I'm trying to figure out how to read and write the RIFF metadata to and from recorded wave files. I'm using NAudio for recording the files, and asked on their forums as well as on SO for a way to read and write RIFF tags. While I received a number of good answers, none of the solutions allowed for reading and writing RIFF chunks as easily as I would like. But more importantly I have very little experience dealing with files at a byte level, and think this could be a good opportunity to learn. So now I want to try writing my own class(es) that can read in a RIFF file and allow metadata to be read from, and written to, the file. I've used streams in C#, but always with the entire stream at once. So now I'm a little lost that I have to consider a file byte by byte. Specifically, how would I go about removing or inserting bytes to and from the middle of a file? I've tried reading a file through a FileStream into a byte array (byte[]) as shown in the code below.

        System.IO.FileStream waveFileStream = System.IO.File.OpenRead(@"C:\sound.wav");
        byte[] waveBytes = new byte[waveFileStream.Length];
        waveFileStream.Read(waveBytes, 0, waveBytes.Length);

    And I could see through the Visual Studio debugger that the first four bytes are the RIFF header of the file. But arrays are a pain to deal with when performing actions that change their size, like inserting or removing values. So I was thinking I could then convert the byte[] into a List like this:

        List<byte> list = waveBytes.ToList<byte>();

    This would make any manipulation of the file byte by byte a whole lot easier, but I'm worried I might be missing something like a class in the System.IO namespace that would make all this even easier. Am I on the right track, or is there a better way to do this? I should also mention that I'm not hugely concerned with performance, and would prefer not to deal with pointers or unsafe code blocks like this guy. If it helps at all, here is a good article on the RIFF/WAV file format.
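    As a starting point for the byte-level part, walking the chunk list is mostly a matter of reading a 4-byte ASCII chunk ID and a 4-byte little-endian size, then skipping that many bytes. A minimal read-only sketch (no NAudio, and no writing or resizing yet):

        using System;
        using System.IO;
        using System.Text;

        class RiffWalker
        {
            // Lists the chunk IDs and sizes inside a WAV/RIFF file.
            static void Main(string[] args)
            {
                using (BinaryReader reader = new BinaryReader(File.OpenRead(args[0])))
                {
                    string riff = Encoding.ASCII.GetString(reader.ReadBytes(4)); // "RIFF"
                    uint riffSize = reader.ReadUInt32();                         // total size - 8
                    string form = Encoding.ASCII.GetString(reader.ReadBytes(4)); // "WAVE"
                    Console.WriteLine("{0} ({1} bytes), form type {2}", riff, riffSize, form);

                    while (reader.BaseStream.Position < reader.BaseStream.Length)
                    {
                        string chunkId = Encoding.ASCII.GetString(reader.ReadBytes(4));
                        uint chunkSize = reader.ReadUInt32();
                        Console.WriteLine("  chunk '{0}', {1} bytes", chunkId, chunkSize);

                        // Chunks are word-aligned: an odd-sized chunk is followed by a pad byte.
                        reader.BaseStream.Seek(chunkSize + (chunkSize % 2), SeekOrigin.Current);
                    }
                }
            }
        }

    As for inserting or removing bytes in the middle: neither streams nor arrays support that in place, so the usual approach is to write a new file (or MemoryStream), copying the chunks before the edit point, then the modified metadata chunk, then the rest, and finally patching the RIFF size field in the header.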

  • Powershell: Read value from xml files

    - by DanielR
    I need some help with PowerShell, please. It should be pretty easy: I have a list of subdirectories, with an XML file in each one. I want to open each XML file and print the value of one node. The node is always the same, as the XML files are actually project files (*.csproj) from Visual Studio. I already got the list of files:

        get-item **\*.csproj

    How do I proceed?
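    A sketch of one way to do it, assuming the node of interest is, say, AssemblyName (substitute the real node name). Dotted navigation over an [xml] object ignores XML namespaces, which is convenient for .csproj files; collecting a property across the repeated PropertyGroup elements like this relies on member enumeration, i.e. PowerShell 3.0 or later:

        # Find every .csproj below the current directory and print one node from each.
        Get-ChildItem -Recurse -Filter *.csproj | ForEach-Object {
            [xml]$proj = Get-Content -Path $_.FullName
            $value = $proj.Project.PropertyGroup.AssemblyName
            "{0}: {1}" -f $_.FullName, $value
        }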

  • Editing tags in bunch of MP3 files?

    - by overtherainbow
    I have a bunch of files whose filename looks like "<Track> - <Title> - <Artist>.mp3". I'd like to rewrite their MP3 Title tag to be "<Track> - <Title>.mp3" so that each song is displayed correctly on my elcheapo MP3 player. Since I rarely edit MP3 files anyway, MP3 Tag Tools 1.2 Build 008 from 2003 was good enough, but I can't figure out how to do this with this application. I just tried MP3Tag 2.46, but couldn't figure out if it can do this (created a new Action, to no avail). I also tried TagScanner 5.1.558, without success. Does someone know of a good, free Windows application that can do this? Thank you.

  • How to batch rename files using bash

    - by Alex Popov
    I know there are lots of such questions, but I couldn't find one (or a combination of several), which describes the things I want to do. I think I need to use regular expressions, but I am not very good with that. I use zsh. I have a folder with files, which I want to rename: I want the files challenge1.rb, challenge2.rb, challenge3.rb, etc. to be renamed to c1.rb, c2.rb etc. Similarly task1.rb and similar must be renamed to t1.rb etc. sample_spec_c1.rb, sample_spec_c2.rb etc. must be renamed to c1_spec.rb, c2_spec.rb etc. So I guess I need some combination of regular expressions and iteration, but I don't know how to write the bash script.
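    Since the prefixes and suffixes are fixed strings, plain parameter expansion is enough here; no regular expressions needed. A sketch that works in both bash and zsh (run it from the folder in question, and consider prefixing the mv commands with echo first to preview the renames):

        # challenge1.rb -> c1.rb, task1.rb -> t1.rb
        for f in challenge*.rb; do mv "$f" "c${f#challenge}"; done
        for f in task*.rb; do mv "$f" "t${f#task}"; done

        # sample_spec_c1.rb -> c1_spec.rb
        for f in sample_spec_*.rb; do
            base=${f#sample_spec_}          # c1.rb
            mv "$f" "${base%.rb}_spec.rb"   # c1_spec.rb
        done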

  • Windows 8 - can't drag files from Explorer and drop on applications

    - by FerretallicA
    In Windows 8 I find I can't drag files to applications like I've been able to do for as long as I can remember. Examples:

        Drag MP3s to Winamp
        Drag a folder full of music to Winamp
        Drag videos to VLC
        Drag txt, reg, etc. files to Notepad

    I have tried various combinations of:

        Running Explorer as administrator
        Running the drop target as administrator
        Taking ownership of the drop target application's folder
        Taking ownership of Explorer
        Changing my user account to administrator
        Creating a new user account
        Lowering the UAC level
        Disabling UAC in the GUI
        Disabling UAC in the registry
        Running Explorer folders in a separate thread

    This is the last straw if there's no known proper (i.e. not a hacky compromise) fix for this. "Little" things like this combined are a productivity nightmare, and if I have to relearn so much and configure so much to get basic things done with an OS I might as well just move to Linux once and for all.

  • Advantages of multiple SQL Server files with a single RAID array

    - by Dr Giles M
    Originally posted on Stack Overflow, but re-worded. Imagine the scenario: for a database I have RAID arrays R: (MDF), T: (transaction log) and, of course, shared transparent usage of X: (tempdb). I've been reading around and get the impression that if you are using RAID then adding multiple SQL Server NDF files sitting on R: within a filegroup won't yield any more improvements. Of course, adding another RAID array S: and putting an NDF file on that would. However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller MDFs sitting on one RAID array, SQL Server will perform growth and locking operations (for writes) on the MDF, so adding NDFs to the filegroup, even if they sat on R:, would distribute the locking and growth operations, allowing more throughput? Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefits of reduced locking? I'm also aware that the behaviour and benefits may be different for tables/indexes/logs. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?
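    For concreteness, the experiment being described (extra NDF files that still live on the R: array) looks roughly like the sketch below; the database, filegroup, file and path names are invented:

        -- Add a filegroup with two extra data files on the same R: array.
        ALTER DATABASE MyDatabase ADD FILEGROUP SecondaryFG;

        ALTER DATABASE MyDatabase
        ADD FILE
            (NAME = MyDatabase_Data2, FILENAME = 'R:\Data\MyDatabase_Data2.ndf', SIZE = 1GB, FILEGROWTH = 256MB),
            (NAME = MyDatabase_Data3, FILENAME = 'R:\Data\MyDatabase_Data3.ndf', SIZE = 1GB, FILEGROWTH = 256MB)
        TO FILEGROUP SecondaryFG;

        -- Tables or indexes created ON SecondaryFG then spread their allocations
        -- (and allocation-related contention) across the new files.

    Pre-sizing the files and keeping them the same size keeps SQL Server's proportional-fill allocation even across them, which is where most of any benefit would come from.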

  • "Show hidden files" option is not working

    - by crazygamer
    OK, I know this is a very basic Windows thing, but I am asking it here in search of an answer. I put in my pen drive and it autoran. This turned the 'show hidden files' option off; I mean, I am not able to see my hidden files because Explorer is not applying the change. Which registry value has been modified? I have scanned my computer using 4 antivirus programs. BitDefender found and deleted something in the temporary folder. The rest didn't show anything. I have encountered this problem a few times before, but this time I don't want to format it ;-)
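    If this is the usual autorun-worm behaviour, the value that typically gets flipped is CheckedValue under the SHOWALL key, plus the per-user Hidden setting. A hedged sketch of the repair from an elevated command prompt (back up the registry first; exact paths can vary between Windows versions, and if the malware is still active it may simply reset them):

        REM Restore the "Show hidden files and folders" radio button definition
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced\Folder\Hidden\SHOWALL" /v CheckedValue /t REG_DWORD /d 1 /f

        REM Turn the per-user "show hidden files" setting back on
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v Hidden /t REG_DWORD /d 1 /f

    After running these, restarting Explorer (or logging off and on) should make the Folder Options change stick.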

  • Restore more than 250 files using DPM 2010 and PowerShell

    - by toryan
    I've got what should be a fairly simple task: restore the following files from DPM: D:\inetpub\wwwroot\*\index.* I followed the instructions in this TechNet wiki and pretty much thought I had it. Unfortunately, the New-SearchOption commandlet can only return 250 results, and this search would generate way more results than that. So actually only the first 250 files were restored, which is no use to anybody. Does anyone know of any way to get around the 250 search results limit? I guess it would be possible to get the subdirectories of D:\inetpub\wwwroot and loop through them in turn, but I kind of want to keep this fairly simple as it is only for this task.

  • rsync error: some files/attrs were not transferred

    - by Daniel Ball
    Using rsync (Ubuntu) and a DeltaCopy server on W2K3 to back up some of the data on the file server before I migrate from W2K3 to Ubuntu Server. After it completed I ran a dry run, just in case something had been missed or changed, and got the following:

        sudo rsync -az -n 198.3.9.25::Music /mnt/raid/music
        [sudo] password for daniel:
        file has vanished: "?????\#267????" (in Music)
        file has vanished: "????????" (in Music)
        ...
        rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1526) [generator=3.0.7]

    I just want to make sure I'm reading it right: does this mean that somehow there are files on the receiving end that aren't on the sending end?

  • Win 7 accessing large files uses 100% RAM

    - by user181276
    Running Win 7 64-bit SP1 with 8 GB RAM. I first noticed this problem when using the GUI to copy some large (5+ GB) files from one disk to another. What happens is the physical memory in use rises quite quickly to 100% and the system comes to a crawl. If I just start to access the file in a media player (it is a movie) the memory usage climbs up slowly but eventually reaches 100%. When copying the same files via XCOPY I do not have this problem. Using RAMMAP I see most of the memory usage is under "Mapped File" and is allocated under the "Active" column. If I select "Empty System Working Set" the RAM usage drops back down but then starts to climb back up. Any ideas on what I can check/test to eliminate this issue?

  • Basic video editor for AVCHD Lite files?

    - by davr
    I have a camera (Panasonic Lumix GF-1) that outputs "AVCHD Lite" files: 720p H.264 in an MTS container. I saw this question that said Movie Maker in Windows 7 supports AVCHD, but I just tried, and unfortunately it does not support AVCHD Lite. Are there any free or inexpensive non-linear video editors (NLEs) that can natively handle AVCHD Lite files, without requiring some 3rd-party driver? If not, are there any 3rd-party drivers that are especially stable? (In my experience they usually have some problems. I got AVCHD Lite loading into VirtualDub using a 3rd-party plugin, but it's very slow, sometimes crashes, and seeking takes ages.)

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately, which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you

  • Using ServletOutputStream to write very large files in a Java servlet without memory issues

    - by Martin
    I am using IBM WebSphere Application Server v6 and Java 1.4 and am trying to write large CSV files to the ServletOutputStream for a user to download. Files range from 50 to 750 MB at the moment. The smaller files aren't causing too much of a problem, but with the larger files it appears that the output is being written into the heap, which is then causing an OutOfMemory error and bringing down the entire server. These files can only be served out to authenticated users over HTTPS, which is why I am serving them through a servlet instead of just sticking them in Apache. The code I am using is (some fluff removed around this):

        resp.setHeader("Content-length", "" + fileLength);
        resp.setContentType("application/vnd.ms-excel");
        resp.setHeader("Content-Disposition","attachment; filename=\"export.csv\"");

        FileInputStream inputStream = null;
        try {
            inputStream = new FileInputStream(path);
            byte[] buffer = new byte[1024];
            int bytesRead = 0;
            do {
                bytesRead = inputStream.read(buffer, offset, buffer.length);
                resp.getOutputStream().write(buffer, 0, bytesRead);
            } while (bytesRead == buffer.length);
            resp.getOutputStream().flush();
        } finally {
            if (inputStream != null)
                inputStream.close();
        }

    The FileInputStream doesn't seem to be causing a problem, as if I write to another file or just remove the write completely the memory usage doesn't appear to be a problem. What I am thinking is that the resp.getOutputStream().write is being stored in memory until the data can be sent through to the client. So the entire file might be read and stored in the resp.getOutputStream(), causing my memory issues and crashing! I have tried buffering these streams and also tried using channels from java.nio, none of which seems to make any bit of difference to my memory issues. I have also flushed the output stream once per iteration of the loop and after the loop, which didn't help.
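    For comparison, below is a hedged sketch of the same copy loop written so the container has less opportunity to buffer the whole response: a small response buffer, setContentLength instead of a hand-built header, a standard read-until-minus-one loop, and periodic flushes. Whether this actually changes WebSphere v6's behaviour would still need testing on that server; the class and method names here are invented for the sketch:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import javax.servlet.http.HttpServletResponse;

        public class CsvStreamer {

            // Streams a file to the response in small chunks, flushing as it goes so the
            // container can push data to the client instead of accumulating it in the heap.
            static void stream(String path, long fileLength, HttpServletResponse resp)
                    throws IOException {
                resp.setBufferSize(8 * 1024);             // keep the container buffer small
                resp.setContentLength((int) fileLength);  // fits in an int for files < 2 GB
                resp.setContentType("application/vnd.ms-excel");
                resp.setHeader("Content-Disposition", "attachment; filename=\"export.csv\"");

                InputStream in = null;
                try {
                    in = new FileInputStream(path);
                    OutputStream out = resp.getOutputStream();
                    byte[] buffer = new byte[8 * 1024];
                    int bytesRead;
                    int chunks = 0;
                    while ((bytesRead = in.read(buffer)) != -1) { // -1 signals end of file
                        out.write(buffer, 0, bytesRead);
                        if (++chunks % 128 == 0) {
                            out.flush();                  // roughly once per megabyte
                        }
                    }
                    out.flush();
                } finally {
                    if (in != null) {
                        in.close();
                    }
                }
            }
        }

    One side note on the original do/while loop: if the file size happens to be an exact multiple of the buffer size, the final read() returns -1 and the following write call with a length of -1 would throw, so the read-until-minus-one form above is the safer idiom regardless of the memory question.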
