Search Results

Search found 10417 results on 417 pages for 'large'.

Page 9 of 417

  • Compress Large Video Files with DivX / Xvid and AutoGK

    - by DigitalGeekery
    Have you ever recorded home video on a camcorder only to find the video file is enormous? What if you wanted to share a video clip on YouTube or another video sharing site, but the file size was bigger than the maximum upload size? Today we'll look at a way to compress certain video files, such as MPEG and AVI, with Auto Gordian Knot (AutoGK).

    AutoGK is a free application that runs on Windows. It supports MPEG-1, MPEG-2, transport streams, VOBs, and virtually any codec used for an .AVI file. AutoGK will accept the following file types as input: MPG, MPEG, VOB, VRO, M2V, DAT, IFO, TS, TP, TRP, M2T, and AVI. Files are output as .AVI files and are converted using the DivX or XviD codecs.

    Installing and using AutoGK: Download and install AutoGK (link below), then open it. You'll need to navigate a few wizard screens, but you can just accept the defaults. Choose your video file by clicking on the folder to the right of the Input file text box, then browse for and select your video file and click "Open." For this example, we'll be working with an .AVI file that's 167 MB in size.

    The output file is copied into the same directory as the input file by default, but you can change this if you choose. If the input file is also an .AVI, AutoGK will append an _agk to the output file name so that the original is not overwritten. Next, you'll see any audio tracks listed; you can clear the check box if you'd like to remove an audio track.

    You can choose one of the predefined size options, or select a custom size in MB or a target quality in percent. For our example, we'll be compressing our 167 MB file to 35 MB. Click on Advanced Settings if you'd like to choose your codec, output resolution, or output audio. If you want to use the DivX codec, you'll need to download and install it separately (see link below). Typically you'll want to keep the defaults. Click "OK."

    Now you're ready to add your file conversion job to the job queue. Click Add Job to add it to the queue; you can add multiple file conversions to the queue and convert them in one batch. Click Start to begin the conversion process. You'll be able to see the progress in the Log window on the bottom left, and when the conversion is complete you'll see a "Job finished" message and the total time in the log window. Check your output file to see its compressed size, and test your video just to make sure the output quality is satisfactory.

    Note: Conversion times can vary greatly depending on the size of the file and your computer hardware. Files that are several GB in size may take several hours to compress. AutoGK is no longer being actively developed, but it is still a wonderful DivX/XviD conversion tool. It can also be used to compress and convert non-copy-protected DVDs.
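    If you prefer the command line, a roughly equivalent conversion can be done with ffmpeg and the Xvid codec. This is not covered in the article, and the numbers are only illustrative: the video bitrate is what determines the final size, roughly the target size in kilobits divided by the duration in seconds, minus the audio bitrate.

        ffmpeg -i input.avi -c:v libxvid -b:v 800k -c:a libmp3lame -b:a 128k output_agk.avi

    For example, a 35 MB target for a five minute clip works out to about 950 kbit/s total, leaving roughly 800 kbit/s for video after a 128 kbit/s MP3 audio track.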
    Downloads: AutoGordianKnot, DivX (optional)

    Read the article

  • Large File Upload in SharePoint 2010


    Read the article

  • AWS Large Instance: /mnt does not show all the space that should be available

    - by Emile Baizel
    I just created a Large (m1.large) 64-bit instance, which comes with 850 GB of instance storage (see the Large Instance entry at http://aws.amazon.com/ec2/instance-types/). A 'df -h' from the root folder gives me the output below. The /mnt is where I'm thinking the instance storage is, but here it is only showing me 414G. I have set up two servers and both are showing the same numbers.

        root@ip-11-11-11-11:/# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             7.9G  1.1G  6.5G  14% /
        none                  3.7G  112K  3.7G   1% /dev
        none                  3.7G     0  3.7G   0% /dev/shm
        none                  3.7G   48K  3.7G   1% /var/run
        none                  3.7G     0  3.7G   0% /var/lock
        /dev/sdb              414G  199M  393G   1% /mnt
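    For context (my addition, not part of the question): an m1.large's 850 GB of instance storage is normally split across two ephemeral volumes, and most AMIs mount only the first one at /mnt, which matches the ~414G shown above. A hedged sketch of finding and mounting the second volume (device names vary by AMI, and formatting destroys anything already on the device):

        cat /proc/partitions        # look for a second large unmounted device, e.g. sdc or xvdc
        sudo mkfs -t ext3 /dev/sdc  # format the second ephemeral volume
        sudo mkdir /mnt2
        sudo mount /dev/sdc /mnt2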

    Read the article

  • How to pause/resume transfer of large files?

    - by Olivier Lalonde
    I recently had to copy about 20 GB of data, split between about 20 files, from my laptop to an external hard drive. Since this operation takes quite a while (at ~560kb/s), I was wondering if there was any way to pause the transfer and resume it later (in case I need to interrupt the transfer). As a side question, is there any performance difference between copying from the terminal and copying from Nautilus?
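    One approach worth sketching here (my addition, not from the question): do the copy with rsync rather than cp or Nautilus, since an interrupted run can be resumed by re-running the same command. The paths below are placeholders.

        rsync -avh --progress /path/to/source/ /media/external-drive/backup/
        # Ctrl+C interrupts; running the same command again skips files already copied in full,
        # and adding --partial --append-verify lets a half-copied file pick up where it stopped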

    Read the article

  • Designing a large database with multiple sources

    - by CatchingMonkey
    I have been tasked with redesigning, or at worst optimising, the structure of a database for a data warehouse. Currently, the database has 4 other source databases (which is due to expand to X number of others), all of which have their own data structures, naming conventions, etc. At the moment an overnight SSIS package pulls the data from the various sources and then, for each source, converts the data into a standardised, usable format. These tables are then appended to each other, creating a 60m-row, 40-column beast! This table is then used in a variety of ways, from an OLAP cube to a web front end. The structure has been in place for a very long time, and through the work I have done I have been able to prove the advantages of normalisation, and this is the way I would like to go. The problem for me is that the overnight process takes so long that I don't then want to spend additional time normalising the last table into something usable. Can anyone offer any insight or ideas into the best way to restructure or optimise the database efficiently? Edit: All the databases are MS SQL Server 2008 R2. Thanks in advance, CM

    Read the article

  • Developing an analytics system for processing large amounts of data - where to start

    - by Ryan
    Imagine you're writing some sort of web analytics system - you're recording raw page hits along with some extra things like tagging cookies, etc., and then producing stats such as:

    - Which pages got the most traffic over a time period
    - Which referrers sent the most traffic
    - Goals completed (a goal being a view of a particular page)

    And more advanced things, like which referrers sent the most visitors who later hit a goal. The naive way of approaching this would be to throw it in a relational database and run queries over it - but that won't scale. You could pre-calculate everything (have a queue of incoming 'hits' and use it to update report tables) - but what if you later change a goal - how could you efficiently re-calculate just the data that would be affected? Obviously this has been done before ;) so any tips on where to start, methods & examples, architecture, technologies etc.
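    As a minimal sketch of the pre-calculation idea above (table and column names are invented, and this is only an illustration, not a recommendation): raw hits drain from the queue into small rollup tables keyed by the dimensions you report on, so reports read aggregates instead of scanning raw hits, and changing a goal means replaying raw hits only into the affected rollup.

        CREATE TABLE PageHitsDaily (
            PageId   INT    NOT NULL,
            HitDate  DATE   NOT NULL,
            Hits     BIGINT NOT NULL DEFAULT 0,
            CONSTRAINT PK_PageHitsDaily PRIMARY KEY (PageId, HitDate)
        )

        -- applied once per queued hit (or once per batch)
        UPDATE PageHitsDaily SET Hits = Hits + 1
        WHERE PageId = @PageId AND HitDate = @HitDate
        IF @@ROWCOUNT = 0
            INSERT INTO PageHitsDaily (PageId, HitDate, Hits) VALUES (@PageId, @HitDate, 1)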

    Read the article

  • Reducing brightness of large areas containing bright colours

    - by intuited
    I do most of my work in either a terminal or a web browser. I prefer my terminals to use bright colours on dark. I would really prefer that web pages tended to look this way as well, but that's not under my control. The problem is that when I switch from a light-on-dark terminal to a dark-on-light web page (like this one), my eyes have to adjust to the overall rise in screen brightness. Apparently this is bad for your eyes, in addition to being painful and annoying. It would seem to be possible for some layer of the interface to adjust the displayed colours for parts of the screen, or perhaps for particular windows, to reduce the brightness of the brighter areas of the screen. Can this be done, possibly with a Compiz extension?

    Read the article

  • How do you dive into large code bases?

    - by miku
    What tools and techniques do you use for exploring and learning an unknown code base? I am thinking of tools like grep, ctags, unit tests, functional tests, class-diagram generators, call graphs, code metrics like sloccount, and so on. I'd be interested in your experiences, the helpers you used or wrote yourself, and the size of the codebase you worked with. I realize that this is also a process (happening over time) and that learning can mean anything from "can give a ten minute intro" to "can refactor and shrink this to 30% of the size". Let's leave that open for now.
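    As a concrete starting point, the command-line tools mentioned above boil down to invocations like these (the symbol name and paths are placeholders):

        ctags -R .                      # build a tags index so your editor can jump to definitions
        grep -rn "SomeClassName" src/   # find where a symbol is declared and used
        sloccount .                     # rough size and per-language line counts for the codebase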

    Read the article

  • Narrowing down my large keyword list for new PPC campaign

    - by gijoemike
    If I have a list of 100 keywords that are candidates for a PPC campaign (my list is actually 1000+). What is the best approach to narrowing this down to the top 5-10 keywords I should start with? I'm also wondering if my top chosen keywords for PPC campaign should be my main keywords for SEO site optimization for organic traffic. I also have another question on this site asking: How does one estimate where a competitor is getting most of their traffic from? Thanks. The website isn't created yet, but will be up in January.

    Read the article

  • Protecting CSS selectors on a large website

    - by Tim
    I have content that appears within a corporate website inside an iframe. Several departments contribute their own CSS files to manage the overall UI and design. My problem is that they may use selectors for elements like td (for instance), without notice. Of course that will affect my own content in the frame unless I add a class to every td. I'm just using td as an example: the generic style for any element could change without notice. Is there any method/convention/practice I can use to protect my own styling?

    Read the article

  • Best method to organize/manage dependencies in the VCS within a large solution

    - by SnOrfus
    A simple scenario: two projects are in version control - the application, and the test(s). A significant number of checkins are made to the application daily. CI builds and runs all of the automation nightly. In order to write and/or run tests you need to have built the application (to reference/load instrumented assemblies). Now, consider the application to be massive, such that building it is prohibitive in time (an entire day to compile). The obvious side effect here is that once you've performed a build locally, it is immediately inconsistent with latest. For instance: if I were to sync with latest and open up one of the test projects, it would not build locally until I built the application. The same is true when syncing to another branch/build/tag. So, in order to even start working, I need to wait a day to build the application locally, so that the assemblies can be loaded - and then those assemblies wouldn't be latest. How do you organize the repository or (ideally) your development environment such that you can continually develop tests against whatever the current build is, or a given specific build, while minimizing building the application as much as possible?

    Read the article

  • Storing lots of large strings with frequent "appends" and few reads

    - by Thiago Moraes
    In my current project, I need to store a very long ASCII string for each instance of a given object. This string will receive 2 appends per minute and will not be retrieved very frequently. The worst case scenario is a 5-10MB string. I'll have thousands of instances of my object and I'm worried that storing all those strings in the filesystem would not be optimal, but I can't think of a better solution. Can anyone suggest an alternative? Maybe a key-value store? In this case, which one? Any other thoughts?
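    For scale, a plain-filesystem sketch of this (the file layout and names are my invention) shows why appends are cheap: each append is a write to the end of one file per object, and the rare reads just stream the file back. Thousands of files of a few MB each is well within what an ordinary filesystem handles; a key-value store with an append operation (Redis has one) mainly adds network access and replication on top of this.

        import os

        def append_chunk(base_dir, object_id, chunk):
            # one append-only file per object; appending never rewrites existing data
            path = os.path.join(base_dir, "{0}.log".format(object_id))
            with open(path, "a") as f:
                f.write(chunk)

        def read_string(base_dir, object_id):
            # infrequent full read of the accumulated string
            path = os.path.join(base_dir, "{0}.log".format(object_id))
            with open(path) as f:
                return f.read()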

    Read the article

  • Rendering large and high poly meshes

    - by Aurus
    Consider a huge terrain that has a lot of polygons. To render this terrain I thought of the following techniques:

    - Using a height-map instead of raw meshes: yes, but I want to create a lot of caves and stuff that simply won't work with height-maps.
    - Using voxels: yes, but I think that this would be too much, since I don't even want to support changing terrain.
    - Splitting into multiple chunks and doing some sort of LOD with the mesh: yes, but how would I do that? Tessellation usually creates more detail, not less.
    - Precomputing the same mesh in a lower poly version (like Mudbox does) and, depending on the distance, rendering one of these meshes: graphics memory is limited, and uploading only the chunks won't solve that problem since the traffic would be too high.

    IMO the last one sounds really good, but imagine the following process: upload and render the chunks depending on the current player position (no problem). The player walks straight forward. Now we may have to swap one of the low poly chunks with the high poly one, so: remove the low poly chunk and load the high poly chunk (already too much traffic here, I think). I am not very experienced in graphics programming and maybe the above process is totally okay, but somehow I think it is too much. And how about the disk space it would require... I think 3 levels of detail would be fine, but isn't that also too much? (I am using OpenGL, but I don't think that this is important.)
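    For what it's worth, a minimal sketch of the precomputed-LOD idea (the names are invented, and this is only an illustration): each frame you derive the desired LOD for a chunk from its distance to the camera and only upload a mesh when that LOD actually changes, so walking forward swaps at most a handful of chunks rather than re-uploading everything.

        // pick one of the precomputed LOD meshes from camera distance
        int DesiredLod(float distanceToCamera)
        {
            if (distanceToCamera < 200f) return 0;   // full-detail mesh
            if (distanceToCamera < 800f) return 1;   // medium
            return 2;                                // lowest-poly mesh
        }

        void UpdateChunk(Chunk chunk, float distanceToCamera)
        {
            int lod = DesiredLod(distanceToCamera);
            if (lod != chunk.CurrentLod)
            {
                chunk.CurrentLod = lod;
                UploadMesh(chunk, lod);   // stream the precomputed mesh for this level only
            }
        }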

    Read the article

  • Reading a large SQL Errorlog

    - by steveh99999
    I came across an interesting situation recently where a SQL instance had been configured with the audit of successful and failed logins being written to the errorlog. This meant that every time a user or the application connected to the SQL instance, an entry was written to the errorlog - which meant huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Management Studio, was extremely slow. Luckily, I was able to use xp_readerrorlog to work around this - here are some example queries.

    To show errorlog entries from the currently active log, just for today:

        DECLARE @now DATETIME
        DECLARE @midnight DATETIME
        SET @now = GETDATE()
        SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
        EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now

    To find out how big the current errorlog actually is, and what the earliest and most recent entries in it are:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT COUNT(*) AS 'Number of entries in errorlog',
               MIN(logdate) AS 'ErrorLog Starts',
               MAX(logdate) AS 'ErrorLog Ends'
        FROM #temp_errorlog
        DROP TABLE #temp_errorlog

    To show just DBCC history information in the current errorlog:

        EXEC xp_readerrorlog 0, 1, 'dbcc'

    To show backup entries in the current errorlog:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
        DROP TABLE #temp_errorlog

    xp_readerrorlog is an undocumented system stored procedure, so there is no official Microsoft link describing the parameters it takes - however, there's a good blog post on this here. And if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do this, remember to change the number of errorlogs you retain - the default of 6 might not be sufficient for you.

    Read the article

  • How to deal with large open worlds?

    - by Mr. Beast
    In most games the whole world is small enough to fit into memory, however there are games where this is not the case. How is this achieved? How can the game still run fluidly even though the world is so big and maybe even dynamic? How does the world change in memory while the player moves? Examples of this include the TES games (Skyrim, Oblivion, Morrowind), MMORPGs (World of Warcraft), Diablo, Titan Quest, Dwarf Fortress, Far Cry.

    Read the article

  • Large sparse (stiff) ODE system needed for testing

    - by macydanim
    I hope this is the right place for this question. I have been working on a sparse stiff implicit ODE solver and have finished the code so far. I now tested the solver with the Van der Pol equation, and another stiff problem, which is of dimension 4. But to perform better tests I am searching for a bigger system. I'm thinking of the order N = 100...1000, if possible stiff and sparse. Does anybody have an example I could use? I really don't know where to search.
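    One stock example that scales to any N and is both stiff and sparse (my suggestion, not from the question) is the method-of-lines discretisation of a diffusion equation u_t = d*u_xx on N interior grid points: the Jacobian is tridiagonal, and its eigenvalues grow roughly like N^2, so implicit solvers are genuinely exercised. A minimal sketch in Python:

        import numpy as np

        def rhs(t, u, d=1.0):
            # semi-discretised heat equation with zero Dirichlet boundaries;
            # u has N entries, the Jacobian is tridiagonal (sparse) and stiff for large N
            N = u.size
            h = 1.0 / (N + 1)
            dudt = np.empty_like(u)
            dudt[0] = d * (u[1] - 2.0 * u[0]) / h**2
            dudt[-1] = d * (u[-2] - 2.0 * u[-1]) / h**2
            dudt[1:-1] = d * (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
            return dudt

    Adding a nonlinear reaction term (a Brusselator-style system, for instance) keeps the sparsity pattern while making the problem less trivial for the nonlinear solver.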

    Read the article

  • More Efficient Data Structure for Large Layered Tile Map

    - by Stupac
    It seems like the popular method is to break the map up into regions and load them as needed. My problem is that in my game there are many AI entities other than the player performing actions in virtually all the regions of the map. Let's just say I have a 5000x5000 map: when I use a 2D array of bytes to render it, my game uses around 17 MB of memory; as soon as I change that data structure to my own MapCell class (which only contains a single field: byte terrain), my game's memory consumption rockets up to 400+ MB. I plan on adding layering, so an array of bytes won't cut it, and I figure I'd need to add a List of some sort to the MapCell class to provide objects in the layers. I'm only rendering tiles that are on screen, but I need the rest of the map to be represented in memory since it is constantly used in Update. So my question is, how can I reduce the memory consumption of my map while still maintaining the above requirements? Thank you for your time! Here are a few snippets of my C# code in XNA 4:

        public static void LoadMapData()
        {
            // Test map generation
            int xSize = 5000;
            int ySize = 5000;
            MapCell[,] map = new MapCell[xSize, ySize];
            //byte[,] map = new byte[xSize, ySize];

            Terrain[] terrains = new Terrain[4];
            terrains[0] = grass;
            terrains[1] = dirt;
            terrains[2] = rock;
            terrains[3] = water;

            Random random = new Random();
            for (int x = 0; x < xSize; x++)
            {
                for (int y = 0; y < ySize; y++)
                {
                    //map[x, y] = new MapCell(terrains[random.Next(4)]);
                    map[x, y] = new MapCell((byte)random.Next(4));
                    //map[x, y] = (byte)random.Next(4);
                }
            }

            testMap = new TileMap(map, xSize, ySize);
            // End test map setup
            currentMap = testMap;
        }

        public class MapCell
        {
            //public TerrainType terrain;
            public byte terrain;

            public MapCell(byte itsTerrain)
            {
                terrain = itsTerrain;
            }

            // the type of terrain this cell is treated as
            /*public Terrain terrain { get; set; }
            public MapCell(Terrain itsTerrain)
            {
                terrain = itsTerrain;
            }*/
        }
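    A likely explanation for the jump from 17 MB to 400+ MB (my reading, not something stated in the question): MapCell is a class, so the 5000x5000 array holds 25 million object references and the heap holds 25 million small objects, each carrying per-object overhead on top of the single byte of payload. Declaring it as a struct stores each cell inline in the array, close to the raw byte[,] footprint, and layer data can live in a sparse side structure so cells without layers cost nothing. A hedged sketch:

        // stored inline in the MapCell[,] array - no per-cell heap object or reference
        public struct MapCell
        {
            public byte Terrain;   // one byte of payload per cell

            public MapCell(byte terrain)
            {
                Terrain = terrain;
            }
        }

        // layers only for the cells that actually have them, keyed by coordinate
        // (Point here is Microsoft.Xna.Framework.Point)
        Dictionary<Point, List<byte>> layerObjects = new Dictionary<Point, List<byte>>();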

    Read the article

  • Which is the better offline method for a large scale application

    - by Manish Pansiniya
    We've a big data management website used by several properties. Some of our customers have downtime (they can't access the net for an hour or two). We want our site to support offline data viewing and inventory management (typical data search and add/remove), and when the user goes online we can sync the changes back to our central database. Customers use several platforms like Windows, iOS, etc. We've been looking into several different options; here are the major choices:

    - Develop an offline web app supported in HTML5. Develop a 'fallback' mechanism and interact with data from the app cache, as explained here (http://www.htmlgoodies.com/html5/tutorials/introduction-to-offline-web- applications-using-).
    - Develop a desktop based cross platform solution. I remember the old MONO which has been popular. Here's a post (What do you suggest for cross platform apps, including web) and another one (Technology choice for cross platform development (desktop and phone)?).

    I realize the desktop solution might be hard to maintain, result in some compatibility issues, and demand test environments. Can HTML5 handle moderate to high level complexity and data flow? Or would it be better to rely on a desktop based app for better scalability & performance?

    Read the article

  • Writing Large Portions Of Code Then Debugging?

    - by The Floating Brain
    Lately I have been writing a game engine, and I have been writing a lot of "foundation stuff" (standard interfaces, modules, a message system, etc.), but I have noticed a pattern: a lot of the stuff is interdependent and I cannot debug until everything is done, hence I do not debug for about 3 to 5 hours at a time. I am wondering if this is an acceptable practice for this part of the project, and if not, if anyone can give me some advice. -----Update-----: I downloaded some code metrics tools, and my program's cyclomatic complexity is 1.52, which as I understand it is good, and should correlate to high cohesion. If I am wrong please correct me.

    Read the article

  • Making large scale changes to an economy in a social game

    - by Zach
    Are there any examples or case studies of social games, specifically on Facebook, where the developer has made drastic changes to the economy? I'm specifically interested in examples where the old economy was based off of purchasing items with Facebook credits then moving to a new model where the same inventory or similar inventory is sold with a soft currency. The closest comparisons I've been able to find so far are looking at iOS games that have gone from purchase models to freemium models, but haven't found a comparable scenario in a social game besides larger scale MMO's.

    Read the article

  • Extremely large spike in traffic on the 1st - 4th of every month from mobile browsers

    - by wsanville
    I've noticed that on the 1st - 4th of the recent months (since January), several sites I maintain are getting thousands of requests from mobile browsers, whereas throughout the rest of the month, the numbers are in the single or double digits. Has anybody else noticed this sort of behavior? I don't have the exact user agents logged, but my analysis software (WebTrends) reports the traffic as mostly iPhone/iPad/iPod, Android, and Blackberry.

    Read the article

  • Failed install 12.04 on Intel Hardware Raid with Large Partition (> 2TB)

    - by Michael Wiles
    I have Intel hardware RAID on the motherboard. I have ten 2 TB HDDs that I've configured as RAID 1+0 to be one big 8 TB HDD. Now I'm trying to install Ubuntu 12.04 on it. After installing with the default desktop installation disk I get a blank screen with a flashing cursor. If I try the alternate installer's guided partitioning option I get "error: out of disk." and the grub prompt. If I boot with the rescue disk or such like, I can drop into a shell and view the disk. Everything also installs without an issue. Don't know what to do...

    Read the article

  • Large resolution differences

    - by Robin Betka
    I want to develop a game for multiple devices such as PC, Android and iOS. I want it to be in 1080p, but that means a massive scale-down for the smartphones. I know how to do that: just render everything to a 1080p render target and then render it to the screen smaller. But what should I do so that the scaling down doesn't look bad and blurry? I can't do it vector based or anything, because the sprites simply need a specific size. Should I make the sprites power-of-two sized to get some nice mipmapping? And which other settings can I use? Or should I rather go with a lower resolution, and then have a slightly worse looking PC version? Performance doesn't seem to be a problem for me, so it would be sad not to use 1080p because of other problems.
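    For the render-at-1080p-then-scale-down route, a hedged XNA 4 style sketch (sceneTarget and the batch setup are my assumptions, not from the question): draw the scene into a 1920x1080 RenderTarget2D, then draw that target to the back buffer with linear filtering, which is what keeps the downscale from turning blocky. For very large scale factors, a mipmapped render target or simply rendering at the device's native resolution tends to look better still.

        // sceneTarget is a 1920x1080 RenderTarget2D the scene was just drawn into
        GraphicsDevice.SetRenderTarget(null);   // switch back to the back buffer
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                          SamplerState.LinearClamp, null, null);
        spriteBatch.Draw(sceneTarget, GraphicsDevice.Viewport.Bounds, Color.White);
        spriteBatch.End();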

    Read the article

  • TSQL Tuesday #15 – Maintaining Your Sanity While Managing Large Environments

    - by Jonathan Kehayias
    This month’s TSQL Tuesday event is being hosted by Pat Wright (Blog | Twitter) and the topic this month is Automation! “ I figured that since many of you out there set a goal this year to blog more and to learn Powershell then this Topic should help in both of those goals. So the topic I have chosen for this month is Automation! It can be Automation with T-SQL or with Powershell or a mix of both. Give us your best tips/tricks and ideas for making our lives easier through Automation.” Automation is...(read more)

    Read the article
