Search Results

Search found 37883 results on 1516 pages for 'sparse files'.

  • Difficulty restoring a differential backup in SQL Server, 2 media families are expected or no files

    - by digiguru
     I have SQL backups copied from server A to server B on a nightly basis. We want to move the SQL Server instance from server A to server B without much downtime, but the files are very large. I assumed that performing a differential backup and restore would solve the problem with the databases. My steps so far:
     1. Copy the full backup from server A to server B (10+ GB).
     2. Open SQL Server Management Studio on server B.
     3. Right-click Databases, choose Restore Database, type in the new DB name, choose "From Device", browse to the backup file and click OK. This restores the original "full" backup.
     4. Test the new DB with the dev application - everything works :)
     5. On the original database, right-click the DB, Tasks, Backup..., Backup Type = Differential, back up to disk, add a new file and remove the old one (it needs to be a small file to transfer for the smallest amount of outage).
     6. Copy the diff backup over to the new server, then right-click the DB, Tasks, Restore Database.
     This is where I get stuck. If I add both the new differential file and the original backup to the restore process I get an error: The media loaded on "M:\path\to\backup\full.bak" is formatted to support 1 media families, but 2 media families are expected according to the backup device specification. RESTORE HEADERONLY is terminating abnormally. But if I try to restore using just the differential file I get: System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo) Any idea how to do it? Is there a better way of restoring backups with limited downtime?
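     The second error usually means the full backup was restored WITH RECOVERY (the GUI default), which closes the database to further restores; a differential can only be applied when the full restore leaves the database in the restoring state, and the two backup files should be restored one after the other rather than listed together as a single media set. A minimal T-SQL sketch of that sequence, with an assumed database name, logical file names and paths:
     -- Sketch only: [NewDB], the logical names and the target paths are placeholders.
     -- 1. Restore the full backup but leave the database ready for more restores.
     RESTORE DATABASE [NewDB]
     FROM DISK = N'M:\path\to\backup\full.bak'
     WITH NORECOVERY,
          MOVE N'LogicalDataName' TO N'D:\Data\NewDB.mdf',
          MOVE N'LogicalLogName'  TO N'D:\Data\NewDB_log.ldf';
     -- 2. Apply the differential and bring the database online.
     RESTORE DATABASE [NewDB]
     FROM DISK = N'M:\path\to\backup\diff.bak'
     WITH RECOVERY;
     The logical names needed for the MOVE clauses can be listed beforehand with RESTORE FILELISTONLY FROM DISK = N'M:\path\to\backup\full.bak'.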

  • Synchronizing audio and video using MP4Box / ffmpeg to concatenate files

    - by jdl2003
    I have two H.264 encoded MPEG-4 files that I need to concatenate. I have been using MP4Box for this task by first ensuring the files are encoded identically (even went so far as to compare output from h264_parse on their video tracks) and then concatenating with this command: MP4Box -cat file1.mp4 -cat file2.mp4 output_file.mp4 This works and the output file is playable, but on playback in Quicktime or VLC the second video's audio starts too soon, making the entire second part of the concatenated file out of sync. I have tried reencoding the output through ffmpeg with -vcodec copy and -acodec copy but the sync issue persists. I have also tried converting to MPEG-2 format first, concatenating with a simple cat file1.mpg file2.mpg > output.mpg and reencoding the result with ffmpeg. This was even worse. I know that I can pass commands to MP4Box to adjust the start time of the audio track, but I am trying to do this programmatically for any input video (in the same encoding of course). I am looking for possible solutions that would not require human intervention / manual adjustments. Or, at least, an understanding of what is happening to make the second part of the concatenated video go out of sync.
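     One alternative worth noting, assuming a reasonably recent ffmpeg build: the concat demuxer joins files with stream copy while keeping each input's own audio and video timestamps, which often avoids the drift that appears when tracks are concatenated one stream at a time. A sketch using the file names from the question:
     # Both inputs must share codecs and parameters, which is already the case here.
     printf "file '%s'\n" file1.mp4 file2.mp4 > list.txt
     # -c copy avoids re-encoding; -safe 0 is only needed for absolute paths on newer builds
     ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4
     If the MP4Box route is kept, comparing the exact audio and video track durations of file1.mp4 (for example with MP4Box -info file1.mp4) shows how much of the offset is already baked into the first input before concatenation.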

  • gzip specific files

    - by byTheDrop
    for some reason these files are not gzipping on my apache server, chrome network tab shows this. Is there a specific directive I can add to htaccess to cache these files? Compressing the following resources with gzip could reduce their transfer size by about two thirds (~680.45KB): adae8bc4c3cb52cbe22358aaced87a72.css could save ~607B css_f91fa8d73b5e7661d6dcf9e58395e533.css could save ~59.54KB jquery.min.js could save ~37.27KB drupal.js could save ~6.15KB auto_image_handling.js could save ~6.72KB lightbox.js could save ~29.38KB superfish.js could save ~2.42KB jquery.bgiframe.min.js could save ~1011B jquery.hoverIntent.minified.js could save ~1.05KB nice_menus.js could save ~581B panels.js could save ~531B jquery.pngFix.js could save ~2.98KB jquery.cycle.all.min.js could save ~20.20KB views_slideshow.js could save ~8.76KB views_slideshow.js could save ~9.02KB wanderlust_custom_videos.js could save ~598B wl_helper.js could save ~777B extlink.js could save ~2.88KB cufon-yui.js could save ~11.89KB googleanalytics.js could save ~1.48KB swfobject.js could save ~6.65KB jquery.jcarousel.min.js could save ~10.19KB jcarousel.js could save ~6.01KB Akzidenz_Grotesk_BE_Super_800.font.js could save ~14.27KB Akzidenz_Grotesk_BE_Bold_700.font.js could save ~12.96KB Akzidenz_Grotesk_BE_Cn_400.font.js could save ~13.39KB SuperCondensed_500.font.js could save ~24.40KB FuturaBold_700.font.js could save ~26.19KB Futura_500.font.js could save ~57.70KB SuperGroteskB_500.font.js could save ~23.86KB jquery.cookie.js could save ~1.25KB wanderlust.js could save ~1.69KB sliderbottom.js could save ~442B jcarousellite_1.0.1.min.js could save ~4.60KB jcarousellite_control.js could save ~224B sitesdropdown.js could save ~1.09KB widgets.js could save ~50.13KB cufon-drupal.js could save ~599B swfobject_api.js could save ~348B ga.js could save ~24.02KB all.js could save ~124.67KB tweet_button.1347008535.html could save ~38.43KB xd_arbiter.php could save ~16.80KB xd_arbiter.php could save ~16.80KB
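     Assuming mod_deflate is compiled in (standard on most Apache 2.x builds), a short .htaccess sketch that would cover the CSS, JavaScript and HTML resources in that list:
     # .htaccess sketch - compresses responses on the fly; it does not cache them.
     <IfModule mod_deflate.c>
         AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript application/javascript application/x-javascript application/json
     </IfModule>
     Note that compression and caching are separate concerns: expiry and caching headers come from mod_expires or mod_headers, and the third-party resources in the list (ga.js, widgets.js, xd_arbiter.php) are served by other hosts and cannot be compressed from this .htaccess.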

  • Which httpd.conf and php.ini files does plesk use

    - by Saif Bechan
     If I disable the include_path in /etc/php.ini, I can still see that an include path is loaded from somewhere, and I want to know which file it comes from. If I use phpinfo(); on my domain it reports an include path of /usr/share/pear:/usr/share/php, even though in php.ini I used: ;inlucde_path=".:" In the section for additional php.ini files this is the list: /etc/php.d/curl.ini, /etc/php.d/dbase.ini, /etc/php.d/dom.ini, /etc/php.d/gd.ini, /etc/php.d/imap.ini, /etc/php.d/json.ini, /etc/php.d/mbstring.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/wddx.ini, /etc/php.d/xmlreader.ini, /etc/php.d/xmlwriter.ini, /etc/php.d/xsl.ini, /etc/php.d/zip.ini I have checked all these files but I can find no reference to pear anywhere. The /etc/httpd/conf/httpd.conf file is puzzling to me too. If I check this file I can see the following values defined:
     DocumentRoot "/var/www/html"
     <Directory />
         Order Deny,Allow
         Deny from all
         Options None
         AllowOverride None
     </Directory>
     <Directory "/var/www/html">
         Options None
         AllowOverride None
         Order allow,deny
         Allow from all
     </Directory>
     The strange thing is that I don't even use this directory, so where does Plesk get its httpd.conf file from? I want to check that everything is OK, because my vhost.conf is not working. I don't know whether I should change these values or not, or where I can find the values Plesk actually uses. I hope someone can help me with this.
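     Two things are worth checking here. First, commenting include_path out does not empty it: PHP then falls back to its compile-time default, which on RHEL/CentOS packages is typically .:/usr/share/pear:/usr/share/php, so the value shown by phpinfo() may not come from any .ini file at all. Second, Plesk layers generated per-domain Apache configuration on top of the global httpd.conf. A hedged shell sketch for tracking down where the effective values live (the /var/www/vhosts path is an assumption and varies with the Plesk version):
     # What PHP actually loaded, and every candidate file that mentions include_path
     php -i | grep -Ei 'include_path|loaded configuration|additional .ini'
     grep -Rn "include_path" /etc/php.ini /etc/php.d/ 2>/dev/null
     # Per-domain config generated by Plesk (path assumed; these files are regenerated,
     # so overrides belong in vhost.conf / vhost_ssl.conf rather than here)
     grep -Rn "include_path\|DocumentRoot" /var/www/vhosts/*/conf/ 2>/dev/null
     If vhost.conf is being ignored, Plesk usually needs to be told to rebuild the generated config (on older releases something like /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=domain.tld, but check the exact command for the installed version).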

  • Nagios Apache Config with PHP-FPM downloading cgi files

    - by tubaguy50035
    I'm trying to setup Nagios 3 under Apache 2.4 with PHP-FPM. I've run into a couple problems I could use help with. The PHP side of things seems to be working, I can see the home page and the sidebar. But all of the CGI files are downloading instead of executing, and when I try to click on "Read What's New In Nagios Core 3", I get an error /nagios3/docs/whatsnew.html was not found on this server. Below is my vhost config for Nagios. <VirtualHost *:300> # apache configuration for nagios 3.x ScriptAlias /cgi-bin/nagios3 /usr/lib/cgi-bin/nagios3 ScriptAlias /nagios3/cgi-bin /usr/lib/cgi-bin/nagios3 # Where the stylesheets (config files) reside Alias /nagios3/stylesheets /etc/nagios3/stylesheets # Where the HTML pages live Alias /nagios3 /usr/share/nagios3/htdocs ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9001/usr/share/nagios3/htdocs/$1 <DirectoryMatch (/usr/share/nagios3/htdocs|/usr/lib/cgi-bin/nagios3|/etc/nagios3/stylesheets)> Options FollowSymLinks ExecCGI AllowOverride AuthConfig Order Allow,Deny Allow From All AuthName "Nagios Access" AuthType Basic AuthUserFile /etc/nagios3/htpasswd.users require valid-user </DirectoryMatch> <Directory /usr/share/nagios3/htdocs> Options +ExecCGI </Directory> </VirtualHost> I also added this in my global Apache config: AddHandler cgi-script .cgi Any help or instructions you can give me would be much appreciated. If more information is needed, let me know.
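     Two likely culprits, offered as guesses: CGI execution is not active for the cgi-bin directory (so Apache serves the .cgi files as plain downloads), and the access rules still use the 2.2-style Order/Allow directives, which Apache 2.4 only honours through mod_access_compat. A sketch of a directory block that addresses both, with Debian/Ubuntu module names assumed:
     # Enable a CGI module first, e.g. on Debian/Ubuntu:  a2enmod cgid   (or cgi on prefork)
     <Directory /usr/lib/cgi-bin/nagios3>
         Options ExecCGI FollowSymLinks
         AddHandler cgi-script .cgi
         AuthName "Nagios Access"
         AuthType Basic
         AuthUserFile /etc/nagios3/htpasswd.users
         Require valid-user        # Apache 2.4 replacement for Order/Allow/Deny
     </Directory>
     The missing whatsnew.html is probably a separate issue: on some distributions the documentation ships in a separate package and does not live under /usr/share/nagios3/htdocs at all, so the /nagios3/docs path may simply not exist.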

  • Apache2 refuses to process php files - "Snow Leopard" OSX 10.6.4

    - by w-01
     I have a MacBook Pro i5. My understanding is that by default it should be able to serve PHP 5. I have uncommented the relevant line in /etc/apache2/httpd.conf: LoadModule php5_module libexec/apache2/libphp5.so I have restarted Apache with sudo apachectl -k restart, and when I try to access a file with a .php extension, Apache prompts me to download the file - i.e. instead of processing the PHP and sending me HTML, it thinks I want to download the file. When I look in the Apache error log I see this: [Fri Nov 12 10:16:14 2010] [notice] Apache/2.2.14 (Unix) PHP/5.3.2 mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.2 Python/2.6.1 configured -- resuming normal operations So it looks like PHP 5 is loading properly. I'd like to know either: how do I fix this, or how do I reinstall Apache 2 so that it's as if I had just installed the OS? Thanks in advance.
     Update: @Zayne - the end of my httpd.conf has Include /private/etc/apache2/other/*.conf and I have a file /etc/apache2/other/php.conf with the contents:
     <IfModule php5_module>
         AddType application/x-httpd-php .php
         AddType application/x-httpd-php-source .phps
         <IfModule dir_module>
             DirectoryIndex index.html index.php
         </IfModule>
     </IfModule>
     @Zayne I've already copied php.ini.default to php.ini in the same folder. When I run sudo apachectl configtest I get: /usr/sbin/apachectl: line 82: ulimit: open files: cannot modify limit: Invalid argument httpd: Could not reliably determine the server's fully qualified domain name, using ::1 for ServerName Syntax OK Furthermore I decided to try apachectl -M, which shows all loaded modules. Most importantly, the list of loaded modules includes: php5_module (shared) Since the module is being loaded, it seems the issue has more to do with making Apache use the PHP engine to process the .php files... so something wrong with the IfModule directive?
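     When the module loads but .php files are still offered for download, the usual missing piece is the handler mapping rather than the module itself. A hedged sketch of a more explicit mapping that can be placed in httpd.conf (or in /etc/apache2/other/php.conf) and checked with sudo apachectl configtest before restarting:
     # Sketch: map .php requests straight to the PHP engine instead of relying on AddType
     <IfModule php5_module>
         <FilesMatch "\.php$">
             SetHandler application/x-httpd-php
         </FilesMatch>
         DirectoryIndex index.html index.php
     </IfModule>
     It is also worth confirming that no other .conf file in /private/etc/apache2/other/ (or an .htaccess under the DocumentRoot) overrides the type mapping, since the Include line pulls in every file in that folder.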

  • Bind9 zone files

    - by user42780
     Well, for the better part of the last two hours I've tried to figure out what is actually wrong, but I can't find anything obvious. What I'm trying to do is set up my DNS for, say (per example), domain.com. This should include two NS records, namely ns1.domain.com and ns2.domain.com. With that there should be a mail record, as well as a CNAME record for www. I've been through roughly 20 how-tos in the last two hours, rewrote everything from scratch four times, and I still can't seem to find what's wrong. My only suspicion is that it might be one of two things: the error I get from the bind9 daemon when I stop the service, and the named.conf file. The error I get from the bind9 daemon when stopping the service is:
     * Stopping domain name service... bind9
     rndc: connection to remote host closed
     This may indicate that
     * the remote server is using an older version of the command protocol,
     * this host is not authorized to connect,
     * the clocks are not syncronized, or
     * the key is invalid.
     I honestly don't know what this means, apart from the key defined in /etc/bind/rndc.key not being in the named.conf file (yes, I did try to add it, to no avail). Here are all the zone files and configuration files: http://208.77.101.5/bind9/ If anyone could help, it would be greatly appreciated.
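     For comparison, a minimal zone file and named.conf stanza of the shape being described; the serial, addresses and file locations below are placeholders, not values taken from the linked configuration:
     ; /etc/bind/db.domain.com - minimal sketch, placeholder addresses
     $TTL 86400
     @     IN  SOA  ns1.domain.com. admin.domain.com. (
                    2010061501 ; serial - bump on every change
                    3600       ; refresh
                    900        ; retry
                    604800     ; expire
                    86400 )    ; negative-cache TTL
           IN  NS   ns1.domain.com.
           IN  NS   ns2.domain.com.
           IN  MX   10 mail.domain.com.
           IN  A    192.0.2.10
     ns1   IN  A    192.0.2.10
     ns2   IN  A    192.0.2.11
     mail  IN  A    192.0.2.10
     www   IN  CNAME domain.com.
     // /etc/bind/named.conf.local
     zone "domain.com" {
         type master;
         file "/etc/bind/db.domain.com";
     };
     Running named-checkconf /etc/bind/named.conf and named-checkzone domain.com /etc/bind/db.domain.com validates both files and usually pinpoints the actual syntax problem; the rndc "connection to remote host closed" message on shutdown mostly just means named had already exited and is not necessarily the root cause.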

  • Permission denied when running Rails app in VirtualBox Ubuntu guest with files on Windows host

    - by Ola Tuvesson
     I think I'm close to having my dev environment set up exactly the way I want, but one final snag remains. I'm running VirtualBox on a Windows 7 64-bit host, with my dev environment inside an Ubuntu 12.04 guest. I want to keep the files for my projects on the host filesystem - partly so I can access them when the Ubuntu guest is not running, but also so I can use Tortoise and other Windows-based tools (cough Photoshop), and it also eases my backup scheme somewhat. So I've got a folder "Rails" on my NTFS drive, which I've shared (Samba) from the host with a user specifically created for the Ubuntu guest. The mount point has been set up and an entry added to fstab (cifs), using a credentials file and the options iocharset=utf8,mode=0777,dir_mode=0777. This mounts fine and my Ubuntu user has both read and write permissions to the contents. But when I try to start my Rails app I get permission errors on any files the app needs to write to (e.g. the log file) - why is that? Are there any major conceptual flaws with this approach? Would I be better off using the VBox "shared folders" function?
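     One thing to check, offered as a guess: cifs expects the option to be spelled file_mode rather than mode on most kernels, and without uid/gid options the mounted files are presented as owned by root, so a Rails process running as an ordinary user can be refused writes even when the mode bits look permissive. A sketch of an fstab entry that maps ownership to the account Rails runs as (user name, share name and mount point are placeholders):
     # /etc/fstab sketch - 'devuser' is a placeholder for the account that runs Rails
     //hostip/Rails  /home/devuser/Rails  cifs  credentials=/home/devuser/.smbcredentials,uid=devuser,gid=devuser,file_mode=0664,dir_mode=0775,iocharset=utf8,noperm,rw,_netdev  0  0
     The noperm option tells the cifs client to skip its own permission checks and defer to the server; whether it is needed depends on the kernel and Samba versions involved.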

  • Software for handling camera RAW-files

    - by Eikern
    I use a digital SLR as most other photographers do today and have quickly realised that capturing images using camera-RAW files is the way to go. Personally I use Adobe Lightroom to handle my photo library, but I know there are other software available like Apple Aperture. These applications are quite hard to use for a novice, and are quite expensive too. I've often recommended other photographers to switch to camera-raw, but they won't do it because Windows can't handle it natively. Are there any free or cheaper applications out there that can do simple file handling and adjustments? Preferably so simple that my mom can do it. I know Nikon offers a codec that allows you to view NEF-files natively inside Windows, but still limits the uses of the file and slows the system down if the file is big. Does anybody know of a drag-and-drop application that converts camera-raw to JPG on-the-fly? In case I or someone would need to upload an image to the web or use it inside a word-document. Thanks.

  • NTFS: Deny all permissions for all files, except where explicitly added

    - by Simon
    I'm running a sandboxed application as a local user. I now want to deny almost all file system permissions for this user to secure the system, except for a few working folders and some system DLLs (I'll call this set of files & directories X below). The sandbox user is not in any group. So it shouldn't have any permissions, right? Wrong, because all "Authenticated Users" are a member of the local "Users" group, and that group has access to almost everything. I thought about recursively adding deny ACL-entries to all files and directories and remove them manually from X. But this seems excessive. I also thought about removing "Authenticated Users" from the "Users" group. But I'm afraid of unintended side-effects. It's likely that other things rely on this. Is this correct? Are there better ways to do this? How would you limit the filesystem permissions of a (very) non-trustworthy account?
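     If explicit entries do end up being the route taken, it does not have to mean touching every file: Windows evaluates explicit ACEs before inherited ones, so an explicit Allow on a working folder still wins over a Deny inherited from its parent. A hedged batch sketch using icacls from an elevated prompt (the account name and paths are placeholders):
     rem Deny the sandbox account on the trees it must not touch (inherits to children)
     icacls "C:\Users" /deny sandboxuser:(OI)(CI)F
     icacls "D:\Data"  /deny sandboxuser:(OI)(CI)F
     rem Explicitly allow its working folder; explicit ACEs beat the inherited deny
     icacls "C:\Sandbox\Work" /grant sandboxuser:(OI)(CI)M
     Denying broad system paths such as C:\Windows is risky, though, because the account still needs to read system DLLs, which is exactly the exception mentioned above.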

  • Server reporting incorrect mime type for css files

    - by Becky
     We have a VPS server that we host our websites on. I have written a CMS using CodeIgniter. On one of the interfaces, I am attempting to upload a CSS file to the system. This worked correctly when we had it hosted on shared hosting. Since we've moved it to the VPS, I am getting an "incorrect filetype" error. It all comes down to the fact that the server is reporting a mime type of text/x-c for the CSS file rather than text/css. I logged in via shell and ran the following command on an existing valid CSS file (to make sure it wasn't an issue with either CodeIgniter or with PHP): file --brief --mime 'filename.css' 2>&1 The server gave me the following in response to my command: text/x-c; charset=us-ascii My question: is there some sort of server setting that I need to tweak to get the server to correctly identify the CSS file as text/css? Do I just have to add a mime type for the CSS files to the server? I found the mime types file (/etc/mime.types), and it just has video types and a couple of others that I have no idea what they are. There is nothing in there for CSS or image or HTML files - unless I'm looking in the wrong spot. I'm not a server person, so I'm hoping someone can help me out. Some server specs: Apache/2.2.22 (Unix), PHP 5.3.13, Server API = CGI/FastCGI, and the fileinfo PHP extension appears to be disabled.
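     Worth noting: the file utility classifies by content (libmagic), not by /etc/mime.types, so adding entries there will not change its text/x-c answer - CSS that opens with comments or terse selectors is routinely misread as C source. If the "incorrect filetype" check is CodeIgniter's Upload class (an assumption), a pragmatic workaround is to accept the misdetected type for the css extension in application/config/mimes.php, roughly like this:
     // application/config/mimes.php - sketch; let .css uploads pass even when
     // server-side detection reports the file as C source or plain text
     'css' => array('text/css', 'text/x-c', 'text/plain'),
     Enabling the fileinfo PHP extension may also help, depending on which detection path the installed CodeIgniter version uses.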

  • Windows 7 Media Center PC not displaying MKV files properly

    - by David
     Does anyone know why Windows Media Center wouldn't correctly display a 16x9 video? I just upgraded my home-built HTPC from Vista Ultimate (hey, it was a giveaway) to Windows 7 Ultimate - clean install. After that, I installed the Divx7 beta (to get .MKV support) and AC3filter (so I could HEAR the MKV files). Previously, under Vista Ultimate (32 bit), I had Arcsoft's Total Theater Pro for playing Blu-Rays and MKV decoding under Media Center. Now when I play a 720p 16x9 MKV file, I get some 'letterboxing' - like it's halfway between 4x3 and 16x9 - with the aspect ratio looking slightly squished as a result. Here's the weird part. If I use the Media Center connection software on my Xbox 360, it plays perfectly, filling the 16x9 screen edge-to-edge, just like Vista's WMC software USED to. Of course, this beats the network up, because the Xbox goes to the HTPC, the HTPC goes to my WHS machine to fetch the data, it comes back to the HTPC, gets transcoded and streamed back to the Xbox. I'm running the latest drivers for an NVidia card (as fetched by Windows 7). I have no idea WHY this is, because if I play "ordinary" (i.e. SD) Divx files that are 16x9, they play just fine, scaled right up to my screen's edges. It puzzles me as to why the same machine that properly converts the bits for the Xbox can't display/scale them properly for the attached display. Mind you, Windows Media Player exhibits the same symptoms. Ideas?

  • How to prevent nginx from locking files on mounted samba partition in Centos 6

    - by Bruce Kirkpatrick
    I'm using nginx 1.3.8 inside a centos 6.3 virtualbox 4.2.4 virtual machine. The system is running the latest software available via yum update. The host OS is windows 7. The site files nginx is serving are on mounted samba partition, which is a folder on the host Windows system. I.e., inside linux, nginx paths are referring to /home/vhosts and this is mounted from D:\vhosts\ on windows. The samba partition is mounted as root with 777 privileges. /etc/fstab looks like this, but with real ip, username, password: //hostip/vhosts /home/vhosts cifs username=username,password=SECRETPASSWORD,uid=root,gid=root,file_mode=0777,dir_mode=0777,rw,_netdev 0 0 I.e. linux/nginx reads from the windows share, and not the opposite. in /etc/samba/smb.conf, I have tried to disable all samba locking features, but it seems to have no effect even after rebooting the virtual machine. locking=no share modes=no oplocks = no level2 oplocks = no kernel oplocks =no I'm receiving "Access is denied" errors in Windows or linux when attempting to overwrite the javascript file in windows that has been accessed at least once with nginx. If I run "service nginx reload", the lock is removed and I'm able to save the file. That's why I think it is nginx causing the lock. The same problem occurs with directories. However, that may be a different issue not related to the use of samba. I'm using samba so that I can manage the source code outside of the virtual machine. Also note that after I run "service nginx reload", the file I'm editing is actually automatically deleted from the windows host. SOLVED: I just reviewed my nginx.conf file. It appears the "open_file_cache" feature is what is causing the lock and deleted files. When I set this option to open_file_cache off;, My problem is resolved. I will repost as the answer when it allows me to do so.
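     For anyone hitting the same thing, the directives involved look like the sketch below; either turn the descriptor cache off while developing over a shared/virtual-machine filesystem, or keep it with a short validity window so handles are released quickly (values here are illustrative):
     # in the http, server or location block of nginx.conf
     open_file_cache off;
     # ...or, instead of disabling it outright:
     # open_file_cache max=1000 inactive=5s;
     # open_file_cache_valid 5s;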

  • How to transfer files via infrared on Linux?

    - by arielnmz
     I know this is way too old a technology, but I've got some files inside a very old cellphone that I need to transfer to a very old computer. So far my infrared USB device works well; it's detected by the machine (lsusb output): Bus 002 Device 002: ID 0df7:0620 Mobile Action Technology, Inc. MA-620 Infrared Adapter I've tried to send the file over MMS and even email (the phone lacks Bluetooth, not to mention USB), but the cellphone's firmware doesn't let me attach the files. The file was originally transferred via IrDA, and the phone only has internal memory (a whole 2 million bytes! whoa!). I found a package called irda-utils, but it seems there are only two executables: irdaping and irdadump. I think the dump utility might do the job (as far as I can see it's kind of a version of tcpdump for IrDA), but I don't even know how to process the received frames. Could this question be what I'm looking for? EDIT: While reading through the Linux Infrared HOWTO I found out about the OpenObex project, which may be what I'm looking for...

  • Using AddEncoding x-gzip .gz without actual files

    - by STATUS_ACCESS_DENIED
    With Apache (2.2 and later) how can I achieve the following. I want to transparently compress using GZip encoding (not plain Deflate) the output when a certain file is queried with its name plus the extension .gz, where the .gz version doesn't physically exist on disk. So let's say I have a file named /path/foo.bar and no file foo.bar.gz in the folder to which the URI /path maps, how can I get Apache to serve the contents of /path/foo.bar but with AddEncoding x-gzip ... applied to the (non-existing) file? The rewrite part appears to be easy, but the problem is how to apply the encoding to a non-existent item. The other way around also seems to be simple as long as the client supports the encoding. Is the only solution really a script that does this on the fly? I'm aware of mod_deflate and mod_gzip and it is not what I'm looking for - at least not alone. In particular I need an actual GZIP file and not just a deflated stream. Now I was thinking of using mod_ext_filter, but couldn't bridge the gap between rewriting the name of the (non-existent) file.gz to file on one side and the LocationMatch on the other. Here's what I have. RewriteRule ^(.*?\.ext)\.gz$ $1 [L] ExtFilterDefine gzip mode=output cmd="/bin/gzip" <LocationMatch "/my-files/special-path/.*?\.ext\.gz"> AddType application/octet-stream .ext.gz SetOutputFilter gzip Header set Content-Encoding gzip </LocationMatch> Note that the header for Content-Encoding isn't really needed by the clients in this case. They expect to see actual GZIP files, but I want to do this on-the-fly without caching (this is a test scenario).

  • Protocol to mount fat32 network filesystem on Linux with ability to lock files (not advisory locks)

    - by nagul
    I have a fat32 filesystem sitting on a NAS storage device (nslu2) that I need to mount on my Ubuntu system. I've tried Samba and NFS mounts, but both don't seem to support proper locking. More specifically, I am unable to save files to the mounted drive through GNUcash, KeepassX etc, which makes the share fairly useless. Is there a protocol that allows me to achieve this ? Note that the NAS storage device is running a linux OS so I can run pretty much any protocol that has a linux implementation. The only option I'm not looking for is to reformat the partition to ext3, which I'm not able to do due to other constraints. Alternatively, has anyone managed proper locking of a fat32 system over the network using Samba ? Or, is advisory locking the best you get with a network-mounted fat32 file system ? I've thought of trying sshfs but I've not found any indication that this will solve my problem. Edit: Okay, maybe I can reformat the drive, but to any file system except ext3. The "unslung" nslu2 doesn't like more than one ext3 drive, and I already have one attached. So any solution that involves reformatting the drive to ntfs, hfs etc is fine, as long as I can mount it on linux and lock files.

  • Copying large files from USB devices to the internal hard drive fails on Mac OS

    - by John M. P. Knox
    I have a second-generation 13" MacBook Air running Mac OS X 10.6.6 with a 2.13 GHz processor, 4 GB of RAM, and a 256 GB SSD hard disk. I often get failures when I attempt to copy a large file or large collection of files from an external USB drive (typically a "Firewire" generation Drobo) to the internal drive. The failure behaves almost exactly as if I had pulled the USB cable from the computer in mid-transfer. I get a warning that I have removed the hard disk improperly. After this event, the drive no longer appears mounted in the finder, and I have to unplug and reinsert the USB cable to mount the drive again. I have also seen a similar problem when using Aperture 3 to import a large number of photos and videos from a USB Compact Flash card reader. The import will fail and I will have to unplug the Card Reader and import the missing items. Oddly, reversing the direction of the copy seems pretty reliable. I've never had a problem copying a large file to a USB device, meaning that I have quite a few large files which are stranded on my Drobo. Model Identifier: MacBookAir3,2 Boot ROM Version: MBA31.0061.B01 I have seen a similar issue reported on Apple's website: http://discussions.apple.com/thread.jspa?threadID=2648590&tstart=0 The only suggested resolutions there seems to be switching to another form of connectivity (e.g. firewire, which does not exist on MacBook Air), or downgrading to Mac OS 10.6.4, or reverting the USB kernel extensions to the 10.6.4 versions: http://discussions.apple.com/message.jspa?messageID=12566073#12582956 I'm not too keen on the idea of downgrading kernel extensions. Does anyone know of a hardware revision without this issue that I can trade up to? Are there any other potential solutions out there?

  • Need assistance making a batch file for renaming files in separate folders

    - by Carnaxus
    Ok, here's one for you. I'm trying to use a batch file to rename a bunch of files, but none of them are in the same folder as the batch file itself. The command prompt keeps telling me that the directory can't be found. I suppose I could just rename all the files in all the folders that match the filename, but I don't want to do that either; I only want to change certain ones. My batch file as it stands is: @echo off ren "engine/info.txt" "disabled.txt" ren "gravplating/info.txt" "disabled.txt" ren "HAWX content/info.txt" "disabled.txt" ren "laserz/info.txt" "disabled.txt" ren "NeuroNaval/info.txt" "disabled.txt" ren "NeuroPlanes/info.txt" "disabled.txt" ren "NeuroTanks/info.txt" "disabled.txt" ren "NeuroWeapons/info.txt" "disabled.txt" ren "WAC Base/info.txt" "disabled.txt" ren "WAC DamageSystem/info.txt" "disabled.txt" ren "WAC GravityController/info.txt" "disabled.txt" ren "WAC Helicopters/info.txt" "disabled.txt" ren "WAC Sweps/info.txt" "disabled.txt" ren "weapons/info.txt" "disabled.txt" ren "AFF_ships/info.txt" "disabled.txt" ren "AntiTakeRifle/info.txt" "disabled.txt" ren "Catmull-Rom Cameras/info.txt" "disabled.txt" ren "Displacer Cannon/info.txt" "disabled.txt" ren "Drumdevil's Trains/info.txt" "disabled.txt" ren "EVEOnline/info.txt" "disabled.txt" ren "gm_botmap_v3/info.txt" "disabled.txt" ren "gm_construct_flatgrass_v5-2/info.txt" "disabled.txt" ren "gm_mobenix_v3_final/info.txt" "disabled.txt" ren "gm_mobenix_v3_highquality_Water/info.txt" "disabled.txt" ren "gm_snabbansairfield_b1/info.txt" "disabled.txt" ren "gm_XhS_construct/info.txt" "disabled.txt" ren "linedraw/info.txt" "disabled.txt" ren "ModelManipulator/info.txt" "disabled.txt" ren "NeuroCars/info.txt" "disabled.txt" ren "Propeller Engine/info.txt" "disabled.txt" ren "VanDookie and Predaaator's pack/info.txt" "disabled.txt" ren "WAC ECM/info.txt" "disabled.txt" ren "WAC Extra Helicopters/info.txt" "disabled.txt" echo Done! pause
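     Two things commonly cause the "directory can't be found" symptom here, so here is a hedged sketch of the same script with both addressed: cmd's built-in commands are unreliable with forward slashes in paths, and the working directory is only the batch file's folder when the script is launched from there, which pushd "%~dp0" fixes regardless of how it is started. Only the first few entries are shown; the rest follow the same pattern:
     @echo off
     rem Work relative to the folder this batch file lives in
     pushd "%~dp0"
     ren "engine\info.txt" "disabled.txt"
     ren "gravplating\info.txt" "disabled.txt"
     ren "HAWX content\info.txt" "disabled.txt"
     rem ...remaining folders unchanged, with backslashes instead of forward slashes...
     popd
     echo Done!
     pause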

  • Join mp4 files in linux

    - by Jose Armando
     I want to join two mp4 files to create a single one. The video streams are encoded in h264 and the audio in aac. I cannot re-encode the videos to another format for computational reasons. Also, I cannot use any GUI programs; all processing must be performed with linux command line utilities. FFmpeg cannot do this for mpeg4 files, so instead I used MP4Box, e.g. MP4Box -add video1.mp4 -cat video2.mp4 newvideo.mp4 Unfortunately the audio gets all mixed up. I thought that the problem was that the audio was in aac, so I transcoded it to mp3 and used MP4Box again. In this case the audio is fine for the first half of newvideo.mp4 (corresponding to video1.mp4), but then there is no audio and I also cannot navigate in the video. My next thought was that the audio and video streams had some small discrepancies in their lengths that I should fix. So for each input video I split the video and audio streams and then joined them with the -shortest option in ffmpeg. Thus for the first video I ran:
     avconv -y -i video1.mp4 -c copy -map 0:0 videostream1.mp4
     avconv -y -i video1.mp4 -c copy -map 0:1 audiostream1.m4a
     avconv -y -i videostream1.mp4 -i audiostream1.m4a -c copy -shortest video1_aligned.mp4
     I did the same for the second video and then used MP4Box as previously. Unfortunately this didn't work either. The only success I had was when I joined the video streams separately (i.e. videostream1.mp4 and videostream2.mp4) and the audio streams (i.e. audiostream1.m4a and audiostream2.m4a) and then joined the video and audio in a final file. However, the synchronization is lost for the second half of the video. Concretely, there is a 1 second delay between audio and video. Any suggestions are really welcome.

  • Word 2013 can't compare readonly files

    - by Moshe Katz
    I am using Tortoise SVN to work with a repository that contains some documentation saved as Word documents. On my old computer, with Office 2010, I was able to compare with previous revisions. Tortoise would open Word in compare view so I could see the differences between the files. I have installed Office 2013 (final version from Technet, not the preview version) on my new laptop for testing and now I can no longer compare Word Documents. Tortoise pops up a generic error that it was unable to compare the two files. Tortoise uses a JScript file to interface with Word, so I ran that file through a debugger and found that the actual error is: The Compare method or property is not available because this command is not available for reading. Some Googling followed by some testing revealed that the error is caused by the first file opened (in this case, the previous version) being opened as Read-Only. If I change the JScript code to open in normal mode, and I find the file on the system and un-check the "Read Only" property (if necessary), then the comparison opens as expected. I was unable to find any documentation about this change to Word on any Microsoft site. Does anyone know why this has been changed, and if it is intentional and not a bug, what the benefit is of requiring the file to be writable in order to compare it with another? Note: This is tagged word-2013-preview but it is actually for the release version of Word that is available on MSDN and Technet. I do not have enough rep. on this site to create new tags (yet).

  • django : Serving static files through nginx

    - by PlanetUnknown
     I'm using apache+mod_wsgi for django, and all css/js/images are served through nginx. For some odd reason, when others/friends/colleagues try accessing the site, jquery/css is not getting loaded for them, hence the page looks jumbled up. My html files use code like this:
     <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/>
     <script type="text/javascript" src="http://1x.x.x.x:8000/js/custom.js"></script>
     My nginx configuration in sites-available is like this:
     server {
         listen 8000;
         server_name localhost;
         access_log /var/log/nginx/aa8000.access.log;
         error_log /var/log/nginx/aa8000.error.log;
         location / {
             index index.html index.htm;
         }
         location /static/ {
             autoindex on;
             root /opt/aa/webroot/;
         }
     }
     There is a directory /opt/aa/webroot/static/ which has corresponding css & js directories. The odd thing is that the pages show fine when I access them. I have cleared my cache etc., but the page loads fine for me, from various browsers. Also, I don't see any 404s or other errors in the nginx log files. Actually, the logs for nginx are not getting refreshed at all. I restarted the nginx server as root - is that incorrect? There is a user www-data defined in the nginx configuration file. Any pointers would be great.
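     One detail stands out: the pages request /css/custom.css and /js/custom.js, but the only static location defined is /static/, so those URLs fall through to the bare location / block, which has no root at all. A sketch of two ways to line the two up - either point the markup at /static/, or add a location that maps the shorter paths onto the same directory tree:
     # Option 1: reference the files under /static/ in the templates, e.g.
     #   <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/static/css/custom.css"/>
     # Option 2: let nginx resolve /css/... and /js/... against the static tree
     location ~ ^/(css|js|images)/ {
         root /opt/aa/webroot/static;
     }
     If the access and error logs still do not move after that, it is worth confirming that port 8000 is really answered by this nginx server block and not by the Apache/mod_wsgi process.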

  • How to uninstall files and software from a Solid State Drive (SSD)

    - by jasondavis
     I am using an SSD just as my main operating system and software drive. I have a regular spinning disk for data files. I have read that files are not REALLY deleted from an SSD when you "delete" them. So here is my situation, and hopefully I can get some good advice. Let's say I have Windows 7 Pro 64-bit installed on my 80 GB Intel SSD. I then have Adobe Photoshop CS4 installed along with 50 other programs on this SSD. I then decide I am done with Adobe Photoshop CS4 and want to remove it, and that I want to install another version of Adobe Photoshop and a few other software titles. Would I just do the usual and add/remove software from the Windows Control Panel? Or if the software being removed has an "Uninstall" program to run, run that and uninstall the software? I realize that all this WILL remove the software from Windows 7, but I want to know whether there are additional steps that should be taken since it is a solid state drive (SSD).

  • Where to get grub files without using grub-install

    - by Jacky
    I am in a particular situation. I have a MacBook Pro with no internal CD drive and both MacOS X (minimal setup) and Linux (my main system) is installed. During a cross-upgrade to Ubuntu 12.04 I messed up grub, so that my /boot/grub directory is basically empty. This means I can't boot Linux on the laptop anymore but only get into grub rescue. Normally this is no issue as you'd just boot from a rescue CD or USB stick, but unfortunately with a MacBook Pro this is not possible (I have reFIT installed and it attempts to boot, but it fails and the manual says that Apple's EFI firmware is not able to handle this situation). From MacOS X, however, I still have write access to the Linux partition. I've now been trying to figure out how to populate the /boot/grub folder with the necessary files, to no avail so far. The ISO image of Ubuntu 12.04 contains an EFI folder which is not what I am looking for, instead I need the normal.mod files for the grub version of Ubuntu 12.04. I do not have any other machine to set up a virtual machine of Ubuntu 12.04 to extract this from after a grub-install, so I am asking for ideas here how to solve this mess. P.S.: I installed the Linux previously when I still had a working internal CD drive. This is gone now.

  • `:Zone.Identifier` files keep on appearing in Windows XP virtual machine

    - by Jonathan Reno
     I have a Windows XP Home Edition guest and a Linux Mint 13 host. I use VirtualBox and the ~/Public directory is shared with the guest. It sometimes happens that I use IE on the guest system to download files (until I get a better Windows browser). All of the downloaded files go to the L:\ drive (the ~/Public directory). When they are finished downloading, Windows Explorer adds a :Zone.Identifier file for each file I download. When I extract a downloaded ZIP archive on the guest (on drive L:\), Windows creates a :Zone.Identifier file for every file in the extracted directory. This even occurs if I use the host to move a file to the ~/Public directory. The shared ~/Public directory is on an ext4 partition, and the colon character is supposed to be illegal in file names in Windows, but not on the ext4 partition. Is there any way to stop Windows from putting all this rubbish on my filesystem? (I might have to create a shell script to clean up after Windows' act.) Here is what I see in Windows Explorer: By the way, if I were running a Mac OS X host (where colons are illegal file name characters) this would be even more horrendous.
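     The :Zone.Identifier entries are NTFS alternate data streams that Windows' Attachment Manager adds to downloaded files; on a shared folder backed by ext4 there is no stream support, so they surface as separate files with a colon in the name. The cleanup script mentioned above can be as small as this, run on the Mint host (path as in the question):
     # remove stray Zone.Identifier stream files under the shared folder
     find ~/Public -type f -name '*:Zone.Identifier' -delete
     On the XP side the behaviour can reportedly be switched off by setting a SaveZoneInformation DWORD to 1 under HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Attachments (the "Do not preserve zone information in file attachments" policy), though that registry value is worth double-checking before relying on it.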

  • Mac creating files w/ wrong perms on samba share

    - by geoffjentry
    In my group, which is very heterogeneous in terms of machines, we use a samba share to collaborate on files and such. In all but one case, it works as expected (or at least close enough). The one exception is my boss' laptop, a snow leopard macbook air. On his desktop (also snow leopard), if he creates a file it ends up serverside with perms of 774, but when he creates it with the Air, the perms are 644. The key problem is the lack of group write permission on the laptop created files. What's really confusing is that everything that I've looked at on the two machines are identical - same version of OS X, same version of samba (3.0.25b-apple), same settings for the same software, etc. I can't imagine why one machine would be different than the other, but it is. To try to be complete w/ the description, here is the relevant portion of my smb.conf file: comment = my Share path = /path/to/share public = no writeable = yes printable = no force group = myshare directory mask = 0770 create mask = 0770 force create mode = 0770 force directory mode = 0770 EDIT: I looked at three more Macs and all of them worked as expected which leaves this one laptop the true oddball. This wasn't as good as a test as the others though, as they were all leopard.
