Search Results

Search found 45804 results on 1833 pages for 'large files'.


  • Sending a large number of mails causing problems on CentOS 6 / Plesk 10

    - by papakost
    I have a VPS running CentOS 6. When the system tries to send the daily newsletter, after some time (e.g. after sending about 2,000 emails) I get the error "Unable to send mail" and system memory usage goes very high. Up to that point, the mails are delivered normally. The remaining symptoms are: I cannot see anything in /var/log/maillog (the file does not appear to be written); all files in /var/spool/mail have a size of 0 bytes; from time to time the httpd log shows errors like: /usr/sbin/sendmail: error while loading shared libraries: libc.so.6: cannot open shared object file: Error 23; and the "Activate mail service on domain" setting in Plesk is deactivated. Any idea on what's going wrong here?

    Read the article

  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount (about 50 GB) of data over my network from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface. To do the copy, I log on to the machine with the data and run the following commands (call the remote machine "destination.host"):

        $ cd /path/to/data
        $ tar -cvf - datafolder/ | ssh [email protected] "cat > ~/data.tar"

    This runs for a while and then suddenly stops after transferring somewhere from 2-6 GB. The terminal on the source.host machine displays a "Write failed: broken pipe" error. The odd part is this: after this occurs, the "source.host" machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from the "source.host" machine, and I cannot ping the "source.host" machine from any other host on the network. I am equally unable to access any of the web services hosted on "source.host". Running ifconfig on "source.host" shows the network adapter to be up and running as usual with the correct IP address and everything. I tried restarting the networking service with

        $ /etc/init.d/networking restart

    but the problem does not go away. Restarting the machine makes it capable of talking to the network again -- it can ping and be pinged by other hosts, and the web services are also accessible and usable as normal -- but attempting the copy operation again results in the same failure, requiring another restart. As an experiment, I tried replacing the tar | ssh pipeline above with a straight scp:

        $ scp -r datafolder/ [email protected]:~

    but to no avail. Thinking that the issue might have to do with the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with

        # echo 12582911 > /proc/sys/net/core/wmem_max

    but this also had no effect. I'm guessing at this point that it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!

    Read the article

  • Moving files with batch files from one PC to a server, then to another PC - worried about disk corruption

    - by AnchientAnt
    I use a scheduled task that calls a batch file, which in turn calls more batch files, to move about three files from a PC to a server, then on to multiple other PCs. It all happens very quickly, as they are small files. Are there any pitfalls to how fast these transfers happen? I'm just mildly concerned about somehow causing disk corruption. The logic is roughly (see the sketch below): 1. Call MapToPc: if the files exist, move them to a folder on the server, then disconnect. 2. Call SendtoPCs: if the files just moved to the server exist, map to the target PCs, move all files, and disconnect. All of this happens in about 2 seconds or less. Edit: these machines run Windows 7, Server 2003, and XP respectively.
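
    A stripped-down batch sketch of that first step (the share and path names here are hypothetical, for illustration only):

        :: MapToPc.bat -- pull the files from the source PC onto the server
        net use Z: \\sourcepc\outbox
        if exist Z:\*.dat move /Y Z:\*.dat D:\staging\
        net use Z: /delete

    Since move finishes each file before starting the next, the speed itself is not a corruption risk; the case worth guarding against is an interrupted network move leaving a partial file at the destination.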

    Read the article

  • Recover deleted files on windows 2008 file server

    - by aniga
    We have recently been hit by a weird virus which marked all files and folders as system files/folders and also hid all files and folders, apart from some weird ones it created, including: ..exe, porn.exe, secret.exe, password.exe, etc. We have managed to restore the files with the attrib command, to unhide them and unmark them as system files; however, we have noticed that we are missing some 4 to 5 folders, of which (based on my luck) 2 belong to the two most important clients we have. I am not sure if these files were deleted by the worm/virus or by my colleagues who are not owning up to it, but the files are now gone. Worst of all, we do not have any backup whatsoever. (Yes I know, we should not have done that, but it is a lesson learned, and since last night we have created two forms of backup, one to an external device and one in the cloud; but I doubt any of that will help us now.) We have 1 Windows 2008 file server and 4 client computers running Windows 7. I would be grateful if anyone can help us on how we can recover from this disaster, which could potentially put us out of business.
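
    For reference, the attrib invocation for that clean-up step typically has this shape (the drive letter is hypothetical): -H and -S clear the hidden and system attributes, /S recurses into subfolders, and /D includes the folders themselves:

        attrib -H -S /S /D D:\*.*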

    Read the article

  • Default profile for large

    - by user63434
    Hi, I am setting up a master image to clone to Windows 7 clients that are all the same machine type. I logged in as administrator and installed all the programs, changed the desktop settings, etc., but my local administrator profile is 244 MB in size, and it will become the default profile of the local machine when sysprepped. We have a Windows Server 2003 machine, and I want to use a mandatory profile for all login users, which means I need to copy this profile to the server so that any user logging in to the domain uses it. Loading a 244 MB profile is going to be very slow, since it is removed from the client at logoff, so the next login will take a long time again. Is there anything I can do? Can I just copy the bare minimum files from the default profile to the server? I am not sure what parts I need; I read that I must copy My Documents and My Documents/Pictures so that folder redirection will work. What else do I need to copy to the server? I also have Firefox Xmarks sync, MS Word, etc. Thanks

    Read the article

  • Best way to copy large amount of data between partitions

    - by skinp
    I'm looking to transfer data between two logical volumes (LVs) on an HP-UX server. I have a couple of those transfers to do, some of which are mostly binary (Oracle tablespaces...) and some others more text files (logs...). Used data size on the volumes is between 100 GB and 1 TB. Also, I will be changing the block size from 1K to 8K on some of these partitions... Things I'm looking for: guarantees of data integrity; the fastest data transfer speed; preservation of file ownership and permissions. Right now I've thought about dd, cp, and rsync, but I'm not sure which is best to use and the best way to use them...
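
    For what it's worth, a hedged sketch of the rsync variant (the mount points are placeholders, and this assumes rsync is installed on the HP-UX box): -a preserves permissions, ownership, and timestamps, -H preserves hard links, and a second --checksum pass can verify integrity at the cost of extra read time:

        rsync -aH /mnt/source_lv/ /mnt/dest_lv/
        rsync -aHc --dry-run /mnt/source_lv/ /mnt/dest_lv/   # verify: report any checksum mismatches

    Note that dd would copy the raw device faster, but it also copies the old 1K-block filesystem exactly as-is, so it can't be used where the block size is changing; file-level tools like rsync or cp sidestep that.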

    Read the article

  • OpenFilesView Displays All Open and Locked Files to Help Resolve In-Use Errors

    - by Jason Fitzpatrick
    Windows: You go to move a file and Windows throws up an “In Use” error. OpenFilesView shows you what application or system process is locking up the files you’re trying to move. Sometimes the culprit is obvious; if you go to move your media folder and you’ve got your media player open watching South Park, then shutting down the media player is the obvious solution. Other times the culprit is less obvious; sometimes Windows processes and less-than-obvious applications are accessing your files in ways that aren’t apparent. This is where OpenFilesView comes into play. Fire up the application to see a list of all active files on your system. The master list is a bit overwhelming (on our test system there were over 1,200 open files), but you can use the find command to drill down to a specific file or folder name. Once you’ve found the locked file you can close the file handle, kill the process, or bring the process to the front (so you can examine the program, if possible, before terminating it). It’s much more efficient than rebooting in an attempt to shake the In-Use error. OpenFilesView is freeware and works on Windows XP through Windows 7.

    Read the article

  • Extract High Quality Icons from Files Using a Free Tool

    - by Lori Kaufman
    If you need to extract an icon from a program file or other type of file (such as .dll files), there are many free tools available that make the task easy. However, very few will extract high quality icon images from the files. Most free icon extraction tools will extract smaller icon image sizes, such as 16×16, 32×32, or 48×48 pixels. Some icons come in larger sizes, such as the icons used in Windows. There is a free, small utility called BeCyIconGrabber that allows you to view and save icons and cursors of any size contained in .exe, .dll, .icl, .ocx, .cpl, .scr, .ico, and .cur files. You can save the extracted icons individually as a .png, .bmp, .ico, or .cur file, or in groups within resource libraries, i.e., .dll or .icl files. BeCyIconGrabber can be downloaded as an installable file or as a portable executable that does not need to be installed. We downloaded the portable file.

    Read the article

  • How do I stop XNA/Visual Studio from rebuilding my content project every time I build?

    - by Phil Quinn
    My group and I are working on a game in XNA 4.0 with Visual Studio 2010/2012. The main solution has 6 projects: 2 XNA game projects (1 executable, 1 class library), 1 WPF executable for the level editor, 2 standard class libraries, and a content project. Originally, the editor and engine XNA game projects had content references to separate content projects. Recently, I consolidated the content projects into one to simplify asset additions. Since pushing these changes to our git repo, certain members of my group have been experiencing weird build issues. Every time they run the project, they have to re-build all of the assets. This happens regardless of whether any changes were made, even if they just run the project directly after building. I've taken a few steps to figure out why this is happening. Below is the MSBuild output set to Normal verbosity. The seemingly important part is in project 4, with the line "Rebuilding all content because build settings have changed":

        1>------ Build started: Project: Engine.Core, Configuration: Debug x86 ------
        1>Build started 11/29/2012 3:24:24 AM.
        1>ResolveAssemblyReferences:
        1>  A TargetFramework profile exclusion list will be generated.
        1>EmbedXnaFrameworkRuntimeProfile:
        1>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        1>GenerateTargetFrameworkMonikerAttribute:
        1>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        1>CoreCompile:
        1>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        1>XnaWriteCacheFile:
        1>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files.
        1>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        1>  Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        1>_CopyAppConfigFile:
        1>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files.
        1>CopyFilesToOutputDirectory:
        1>  Engine.Core -> <solution-dir>\src\Engine.Core\bin\x86\Debug\TimeSink.Engine.Core.dll
        1>
        1>Build succeeded.
        1>
        1>Time Elapsed 00:00:00.13
        2>------ Build started: Project: TimeSink.Entities, Configuration: Debug x86 ------
        2>Build started 11/29/2012 3:24:25 AM.
        2>ResolveAssemblyReferences:
        2>  A TargetFramework profile exclusion list will be generated.
        2>EmbedXnaFrameworkRuntimeProfile:
        2>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        2>GenerateTargetFrameworkMonikerAttribute:
        2>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        2>CoreCompile:
        2>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        2>XnaWriteCacheFile:
        2>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files.
        2>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        2>  Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        2>CopyFilesToOutputDirectory:
        2>  TimeSink.Entities -> <solution-dir>\src\TimeSink.Entities\bin\x86\Debug\TimeSink.Entities.dll
        2>
        2>Build succeeded.
        2>
        2>Time Elapsed 00:00:00.11
        3>------ Build started: Project: Editor (Editor\Editor), Configuration: Debug x86 ------
        4>------ Build started: Project: Engine.Game, Configuration: Debug x86 ------
        3>Build started 11/29/2012 3:24:25 AM.
        3>CoreCompile:
        3>  All content is already up to date
        3>ResolveAssemblyReferences:
        3>  A TargetFramework profile exclusion list will be generated.
        3>EmbedXnaFrameworkRuntimeProfile:
        3>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        3>GenerateTargetFrameworkMonikerAttribute:
        3>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        3>CoreCompile:
        3>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        3>XnaWriteCacheFile:
        3>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files.
        3>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        3>  Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        3>_CopyOutOfDateNestedContentItemsToOutputDirectory:
        3>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files.
        3>CopyFilesToOutputDirectory:
        3>  Editor -> <solution-dir>\src\Editor\Editor\bin\x86\Debug\Editor.dll
        3>
        3>Build succeeded.
        3>
        3>Time Elapsed 00:00:00.39
        4>Build started 11/29/2012 3:24:25 AM.
        4>CoreCompile:
        4>  Rebuilding all content because build settings have changed
        4>  Building Textures\circle.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb
        4>  Importing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter
        4>  Processing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor
        4>  Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb
        4>  Building Textures\giroux.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb
        4>  Importing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter
        4>  Processing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor
        4>  Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb
        4>  Building Textures\Body_Neutral.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb
        4>  Importing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter
        4>  Processing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor
        4>  Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb
        4>  Building font.spritefont -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb
        4>  Importing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.FontDescriptionImporter
        4>  Processing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.Processors.FontDescriptionProcessor
        4>  Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb
        4>ResolveAssemblyReferences:
        4>  A TargetFramework profile exclusion list will be generated.
        4>EmbedXnaFrameworkRuntimeProfile:
        4>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        4>GenerateTargetFrameworkMonikerAttribute:
        4>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        4>CoreCompile:
        4>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        4>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        4>  Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        4>_CopyOutOfDateNestedContentItemsToOutputDirectory:
        4>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files.
        4>_CopyAppConfigFile:
        4>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files.
        4>CopyFilesToOutputDirectory:
        4>  Engine.Game -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Engine.Game.exe
        4>IncrementalClean:
        4>  Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\circle.xnb".
        4>  Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\giroux.xnb".
        4>  Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Body_Neutral.xnb".
        4>  Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\font.xnb".
        4>
        4>Build succeeded.
        4>
        4>Time Elapsed 00:00:01.72
        ========== Build: 4 succeeded, 0 failed, 1 up-to-date, 0 skipped ==========

    I can't think of how build settings could change between consecutive executions. Like I said, this only happens for half our group. One member is on a 32-bit Windows 7 Professional Boot Camp partition on a Mac. Everyone else, including those who don't have the issue, is running straight 64-bit Windows 7 Professional. Both have tried using VS 2010 and VS 2012. Any insight would be greatly appreciated. Also, I can post more details upon request if this isn't thorough enough.

    Read the article

  • Windows Live SkyDrive: How To Move or Copy Files Between Folders

    - by Gopinath
    Microsoft provides a very simple, easy-to-use interface for moving files between folders in the Windows operating system, but their own cloud storage service, Windows Live SkyDrive, complicates these simple, everyday operations. We need a guide to figure out how to perform basic copy/move operations. A couple of years ago we wrote about moving files between folders in the old version of SkyDrive, but that guide no longer holds good today, as SkyDrive has gone through many user interface changes in the recent past. Today one of our readers asked us how to move/copy files in the latest version of SkyDrive, and here are the steps to be followed: 1. Log in to your Windows Live SkyDrive. 2. Select the file you want to move or copy by clicking on its information icon. 3. After selecting the information icon, expand the Information section displayed in the right side panel to access the Move and Copy options. 4. To move the selected file to another folder, select the Move option and SkyDrive will guide you through the folder selection user interface for choosing the target folder. 5. Once you navigate to the target folder where you want to move the file, click on “Move this file into <<Target Folder>>”. 6. You are done. Dear Microsoft, SkyDrive provides us tons of free storage, but please make its user interface a bit better so that we don’t need to write guides to perform basic operations. We hope you listen to your customers. This article, Windows Live SkyDrive: How To Move or Copy Files Between Folders, was originally published at Tech Dreams.

    Read the article

  • Save Files Directly from Your Browser to the Cloud in Chrome and Iron

    - by Asian Angel
    Are you looking for a quicker, easier way to upload files you find while browsing to your favorite cloud services? Skip saving files to your hard-drive and transfer them directly from your browser to your accounts using the Cloud Save extension. You can see the cloud services currently supported in the screenshot above and more are being added all the time. So if your favorite is not listed yet just keep checking in at the extension’s homepage. Cloud Save [Google Chrome Extensions]

    Read the article

  • Using OData to get Mix10 files

    - by Jon Dalberg
    There has been a lot of talk around OData lately (go to odata.org for more information) and I wanted to get all the videos from Mix ‘10: two great tastes that taste great together. Luckily, Mix has exposed the ‘10 sessions via OData at http://api.visitmix.com/OData.svc, so now all I have to do is slap together a bit of code to fetch the videos. Step 1 (cut a hole in the box): Create a new console application and add a new service reference. Step 2 (put your junk in the box): Write a smidgen of code:

        static void Main(string[] args)
        {
            var mix = new Mix.EventEntities(new Uri("http://api.visitmix.com/OData.svc"));

            var files = from f in mix.Files
                        where f.TypeName == "WMV"
                        select f;

            var web = new WebClient();

            var myVideos = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyVideos), "Mix10");
            Directory.CreateDirectory(myVideos);

            files.ToList().ForEach(f => {
                var fileName = new Uri(f.Url).Segments.Last();
                Console.WriteLine(f.Url);
                web.DownloadFile(f.Url, Path.Combine(myVideos, fileName));
            });
        }

    Step 3 (have her open the box): Compile and run. As you can see, the client reference created for the OData service handles almost everything for me. Yeah, I know there is some batch file to download the files, but it relies on cUrl being on the machine – and I wanted an excuse to work with an OData service. Enjoy!

    Read the article

  • String patterns that can be used to filter and group files

    - by Louis Rhys
    One of our applications filters files in a certain directory, extracts some data from them, and exports a document from the extracted data. The algorithm for extracting the data depends on the file, and so far we use a regex to select the algorithm to be used; for example, .*\.txt will be processed by algorithm A, foo[0-5]\.xml by algorithm B, etc. However, now we need some files to be processed together. For example, in one case we need two files, foo.*\.xml and bar.*\.xml. Part of the information to be extracted exists in the foo file, and the other part in the bar file. Moreover, we need to make sure the wildcards are compatible. For example, given the 6 files foo1.xml, foo23.xml, bar1.xml, bar9.xml, bar23.xml, and foo4.xml, I would expect foo1 and bar1 to be identified as one group, and foo23 and bar23 as another. bar9 and foo4 have no pair, so they will not be processed. Now, since the filter is configured by the user, we need a pattern that can express the above requirement. I don't think you can express a meaning like this in standard regex; (foo|bar).*\.xml will match all 6 files above, and we can't identify which file is the pair of a particular file. Is there any standard pattern language that can express it? Or any idea how to extend regex to support this in a way that can be implemented easily?
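
    There is no standard regex syntax for a cross-file constraint, but one common workaround is to have the user supply one pattern per file role with a capture group for the shared part, and then pair up files whose captures match. A minimal Python sketch of that idea (the patterns and file names are taken from the example above):

        import re
        from collections import defaultdict

        patterns = [re.compile(r'foo(.*)\.xml'), re.compile(r'bar(.*)\.xml')]
        files = ['foo1.xml', 'foo23.xml', 'bar1.xml', 'bar9.xml', 'bar23.xml', 'foo4.xml']

        groups = defaultdict(dict)
        for name in files:
            for role, pattern in enumerate(patterns):
                m = pattern.fullmatch(name)
                if m:
                    groups[m.group(1)][role] = name   # key the group by the wildcard text

        # Only complete groups (one file per pattern) are processed together.
        for key, members in sorted(groups.items()):
            if len(members) == len(patterns):
                print(key, [members[role] for role in range(len(patterns))])

    Running it reports the two complete groups (foo1/bar1 and foo23/bar23) and silently skips bar9 and foo4.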

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then wondered: why don’t they know this? Such as this article here – in the past, customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI; it’s quite simple, just a new KM that improves the out of the box experience – just build the mapping and the appropriate KM is used – and improves out of the box performance for file to file data movement. This improvement to the out of the box handling of File to File data integration cases (from the 11.1.1.5.2 companion CD on) dramatically speeds up file integration. In the past I had seen some consultants write perl versions of the file to file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe – it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) – by no means was this on any super computer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration, plus, and it's a big plus, it was much faster than before. So if you are doing any file to file transformations, check it out!

    Read the article

  • Upgrade to 0.25, files served to UPnP devices cannot play

    - by David Buttrick
    I have a Sony BDP-S390 Blu-ray & network player. I upgraded my MythTV server to 0.25. When I browse to the Myth server and try to play a recording, I get an error message about the file not being playable in the player. Interestingly, the files that I have recorded, and the videos that I have loaded into my Video volume group, are .mpg or .mp4. The player shows the file type that it thinks each file is in its list, and it claims that these files are AVI files; however, none of them are. They are all .mp4 or .mpg files. Thinking that was just an optical illusion, I went ahead and tried to play a file, but I get an error about the file not being playable. First of all, is there something I need to do to make the UPnP server know about different file types? Is it reporting AVI because it hasn't been told about MPG or MP4? Second, I'd like to help out some more here and collect some logging about the UPnP server in the Myth server. I can't seem to find information on how to turn on logging, and there is no mythbackend settings file in /etc/default. Thanks very much.

    Read the article

  • All in a Day's Work: Unblocking Multiple Downloaded Files with a Single Command

    - by Sam Abraham
    Files downloaded using Internet Explorer retain the Internet Zone permission level and hence are “Blocked” by default on Windows 7 machines. Honestly, while it is added overhead for developers, I really appreciate this feature, as it provides a good protection layer for casual web users. My workaround is to simply unblock the downloaded zip file (if the download was a zip file), which in turn unblocks the files stored within. Today, however, I was left with a situation where I had to “Open” and “Copy” the content rather than “Save” a zip file. That of course left me with a few dozen files I had to manually unblock. A few minutes of internet search led me to the link below, which worked like a charm: 1. Download streams.exe from Sysinternals - http://technet.microsoft.com/en-us/sysinternals/bb897440.aspx 2. Go to the command prompt (cmd.exe) 3. Navigate to where you have streams.exe installed 4. Use the command line switches: streams.exe -s -d "<folder path>" This removed the Internet Zone restrictions from all files under "<folder path>" and its subfolders as well. [Deleted :Zone.Identifier:$DATA] References: http://social.technet.microsoft.com/Forums/en-US/itproxpsp/thread/806f0104-1caa-4a66-b504-7a681d1ccb33/
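
    As an aside, on machines with PowerShell 3.0 or later the same cleanup can be done without downloading anything (a hedged one-liner; the folder path is hypothetical):

        Get-ChildItem 'C:\Downloads\extracted' -Recurse | Unblock-File

    Unblock-File removes the same :Zone.Identifier alternate data stream that streams.exe -d deletes.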

    Read the article

  • Generating HTML Help files based on XML documentation

    - by geekrutherford
    Since discovering the XML commenting features built into .NET years ago, I have been using them to make my code more readable and simpler for other developers to understand exactly what the code is doing. Entering /// preceding a line of code causes Visual Studio to insert "summary" tags. It also results in additional tags being generated if you are commenting a method with parameters and a return type. I already knew that IntelliSense would pick up these comments and display them when coding and selecting properties, methods, etc. from a class. I also knew that you could set Visual Studio to generate an XML file containing said comments. Only recently did I begin to wonder if I could generate some kind of readable help files based on these comments I so diligently added. After searching the web I came across NDoc, an open source project which creates documentation for you based on the XML files generated by Visual Studio. Unfortunately, NDoc has become stale and is no longer supported (the last release was back in 2005). Fortunately there is a little-known tool built on Microsoft's Sandcastle engine, called "Sandcastle Help File Builder". This nifty little tool gives you a graphical interface that allows you to specify multiple DLL and XML files from which to generate an MSDN-like HTML Help file for your own projects! You can check it out here: http://shfb.codeplex.com/ If you are curious how to set Visual Studio to generate the above-mentioned XML documentation files, simply go to your project's property page and, on the Build tab, enable the XML documentation file output (my paths are specific; you can leave yours at the default values).
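
    For anyone who has not used the feature, here is a minimal sketch of what those comments look like once filled in (the method itself is made up for illustration):

        /// <summary>
        /// Converts a temperature from Celsius to Fahrenheit.
        /// </summary>
        /// <param name="celsius">The temperature in degrees Celsius.</param>
        /// <returns>The temperature in degrees Fahrenheit.</returns>
        public double ToFahrenheit(double celsius)
        {
            return celsius * 9.0 / 5.0 + 32.0;
        }

    These tags are exactly what lands in the generated XML file that Sandcastle Help File Builder consumes.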

    Read the article

  • Find the "largest" dense sub matrix in a large sparse matrix

    - by BCS
    Given a large sparse matrix (say 10k+ by 1M+), I need to find a subset, not necessarily contiguous, of the rows and columns that form a dense matrix (all non-zero elements). I want this sub-matrix to be as large as possible (not the largest sum, but the largest number of elements) within some aspect ratio constraints. Are there any known exact or approximate solutions to this problem? A quick scan on Google seems to give a lot of close-but-not-exact results. What terms should I be looking for? Edit: just to clarify, the sub-matrix need not be contiguous. In fact the row and column order is completely arbitrary, so adjacency is completely irrelevant. A thought based on Chad Okere's idea: 1. Order the rows from largest count to smallest count (not necessary but might help perf). 2. Select two rows that have a "large" overlap. 3. Add all other rows that won't reduce the overlap. 4. Record that set. 5. Add whatever row reduces the overlap by the least. 6. Repeat at #3 until the result gets too small. 7. Start over at #2 with a different starting pair, and continue until you decide the result is good enough. (A sketch of steps 1-4 follows.)
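
    To make the heuristic concrete, a minimal Python sketch of steps 1-4 (it stops at recording the best seed-pair expansion and leaves out the overlap-shrinking refinement of steps 5-7; rows are represented as sets of their non-zero column indices):

        import itertools

        def greedy_dense_block(rows, min_overlap=2):
            """rows: list of sets of non-zero column indices.
            Returns (element count, row indices, column indices)."""
            best = (0, [], set())
            # Step 1: order rows from largest non-zero count to smallest.
            order = sorted(range(len(rows)), key=lambda i: -len(rows[i]))
            # Step 2: try each pair of rows as a seed.
            for a, b in itertools.combinations(order, 2):
                cols = rows[a] & rows[b]
                if len(cols) < min_overlap:
                    continue
                # Step 3: add every row that keeps all current columns intact.
                picked = [a, b] + [r for r in order
                                   if r not in (a, b) and cols <= rows[r]]
                # Step 4: record the set; "largest" here means most elements.
                size = len(picked) * len(cols)
                if size > best[0]:
                    best = (size, picked, cols)
            return best

    Trying every seed pair is quadratic in the row count, so on a 10k-row matrix you would prune the candidate pairs rather than enumerate them all; the sketch only illustrates the shape of the search.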

    Read the article

  • maintaining a large list in python

    - by Oren
    I need to maintain a large list of pickleable Python objects that will quickly execute the following operations. The list has 4 pointers and must support: reading/writing each of the 4 pointed-to nodes; inserting new nodes before the pointed-to nodes; incrementing a pointer by 1; popping nodes from the beginning of the list (without overrunning the pointers); and adding nodes at the end of the list. The list can be very large (300-400 MB), so it shouldn't be kept entirely in RAM. The nodes are small (100-200 bytes) but can contain various types of data. The most efficient way to do this, in my opinion, is to implement some paging mechanism. On the hard drive there would be a database holding a linked list of pages, where at any moment up to 5 of them are loaded in memory. On insertion, if some max page size is reached, the page is split into 2 smaller pages; on pointer increment, if a pointer moves beyond its loaded page, the following page is loaded instead. This is a good solution, but I don't have the time to write it, especially if I want to make it generic and implement all the Python list features. Using SQL tables is not good either, since I'd need to implement a complex index-key mechanism. I tried ZODB and zc.blist, which implement a BTree-based list that can be stored in a ZODB database file, but I don't know how to configure them so the above features run in reasonable time. I don't need all the multi-threading/transaction features; no one else will touch the database file except for my single-threaded program. Can anyone explain how to configure ZODB/zc.blist so the above features run fast, or show me a different large-list implementation?
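
    For illustration, a minimal sketch of just the page-cache half of that design, using shelve as the on-disk store (page splitting and all the pointer bookkeeping are left out; the 5-page limit is the one described above):

        import shelve
        from collections import OrderedDict

        class PageCache:
            MAX_LOADED = 5                      # pages held in RAM at once

            def __init__(self, path):
                self.db = shelve.open(path)     # on-disk page store, keyed by page id
                self.loaded = OrderedDict()     # page id -> list of nodes, in LRU order

            def page(self, pid):
                key = str(pid)                  # shelve keys must be strings
                if key in self.loaded:
                    self.loaded.move_to_end(key)           # mark most recently used
                else:
                    if len(self.loaded) >= self.MAX_LOADED:
                        old, nodes = self.loaded.popitem(last=False)
                        self.db[old] = nodes               # evict: write page to disk
                    self.loaded[key] = self.db.get(key, [])
                return self.loaded[key]

            def close(self):
                for key, nodes in self.loaded.items():
                    self.db[key] = nodes
                self.db.close()

    Pickling happens implicitly inside shelve, which matches the pickleable-node requirement; whether this beats a tuned ZODB/zc.blist setup would need measuring.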

    Read the article

  • Google Code Jam 2010 Large DataSets Take Too Long to Submit

    - by Travis
    Hey guys, I'm participating in the 2010 Code Jam. I solved two of the problems for the small data sets, but I'm not even close to solving the large data sets within the 8 minute time frame. I'm wondering if anyone out there has solved the large data set: What hardware were you running on? What language were you using? What performance tuning techniques did you apply to your code to make it run as fast as possible? I'm writing the solutions in Ruby, which is not my day-to-day language, and executing them on my MacBook Pro. My solutions for problem A and problem C are on GitHub at http://github.com/tjboudreaux/codejam2010. I'd appreciate any suggestions you may have. FWIW, I have a lot of experience in C++ from college, my primary language is PHP, and my "sandbox" language is Ruby. Was I just a bit ambitious in taking a shot at this in Ruby without knowing where the language struggles for performance, or does anyone see anything that's a red flag as to why I can't complete the large dataset in time to submit?

    Read the article

  • WCF, IIS6.0 (413) Request Entity Too Large.

    - by Andrew Kalashnikov
    Hello, guys. I've got an annoying problem. I have a WCF service (basicHttpBinding with Transport security over HTTPS). The service implements a contract consisting of 2 methods: LoadData and GetData. GetData works OK! My client receives a package of about 2 MB without problems; everything works correctly. But when I try to load data via bool LoadData(Stream data); (the method signature), I get (413) Request Entity Too Large. Server stack trace:

        System.ServiceModel.Channels.HttpChannelUtilities.ValidateRequestReplyResponse(HttpWebRequest request, HttpWebResponse response, HttpChannelFactory factory, WebException responseException, ChannelBinding channelBinding)
        System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
        System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
        System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)

    I tried this: http://blogs.msdn.com/jiruss/archive/2007/04/13/http-413-request-entity-too-large-can-t-upload-large-files-using-iis6.aspx, but it doesn't work! My server is Windows Server 2003 with IIS 6.0. Please help.
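
    For reference, the service-side settings that usually govern this error look like the following (the values and binding name are examples only, not a known-good fix; and since the binding uses Transport security on IIS 6, the metabase UploadReadAheadSize change discussed in the linked article is a separate step that may also be required):

        <basicHttpBinding>
          <binding name="largeUpload" maxReceivedMessageSize="67108864" transferMode="Streamed">
            <readerQuotas maxArrayLength="67108864" />
          </binding>
        </basicHttpBinding>

        <system.web>
          <httpRuntime maxRequestLength="65536" />   <!-- value is in KB -->
        </system.web>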

    Read the article

  • Copy Small Bitmaps on to Large Bitmap with Transparency Blend: What is faster than graphics.DrawImage?

    - by Glenn
    I have identified this call as a bottleneck in a high pressure function: graphics.DrawImage(smallBitmap, x, y); Is there a faster way to blend small semi-transparent bitmaps into a larger semi-transparent one? Example usage:

        XY[] locations = GetLocs();
        Bitmap[] bitmaps = GetBmps(); // small images, sizes vary around 30px x 30px

        using (Bitmap large = new Bitmap(500, 500, PixelFormat.Format32bppPArgb))
        {
            using (Graphics largeGraphics = Graphics.FromImage(large))
            {
                for (var i = 0; i < largeNumber; i++)
                {
                    // this is the bottleneck
                    largeGraphics.DrawImage(bitmaps[i], locations[i].x, locations[i].y);
                }
            }

            var done = new MemoryStream();
            large.Save(done, ImageFormat.Png);
            done.Position = 0;
            return done;
        }

    The DrawImage calls take small 32bppPArgb bitmaps and copy them into a larger bitmap at locations that vary, and the small bitmaps might only partially overlap the larger bitmap's visible area. Both images have semi-transparent contents that get blended by DrawImage in a way that is important to the output. I've done some testing with BitBlt but not seen a significant speed improvement, and the alpha blending didn't come out the same in my tests. I'm open to just about any method, including a better call to BitBlt or unsafe C# code.
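
    Before dropping to unsafe code, it can be worth switching off the resampling work Graphics performs by default; a hedged sketch (these are standard System.Drawing.Drawing2D settings, but how much they help depends on the workload), applied once before the loop:

        largeGraphics.CompositingMode = CompositingMode.SourceOver;        // keep the alpha blend
        largeGraphics.CompositingQuality = CompositingQuality.HighSpeed;
        largeGraphics.InterpolationMode = InterpolationMode.NearestNeighbor;
        largeGraphics.SmoothingMode = SmoothingMode.None;
        largeGraphics.PixelOffsetMode = PixelOffsetMode.None;

        // Passing the explicit size skips the DPI-based scaling that DrawImage(bmp, x, y) applies:
        largeGraphics.DrawImage(bitmaps[i], locations[i].x, locations[i].y,
                                bitmaps[i].Width, bitmaps[i].Height);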

    Read the article

  • “File does not exist” in Apache error log when mod_rewrite is in use

    - by Nithin
    I am getting the error below in the server log when rewriting URLs: [Fri Jan 25 11:32:57 2013] [error] [client ***IP***] File does not exist: /home/testserver/public_html/testing/flats-in-delhi-for-sale, referer: http://domain.in/testing/flats-in-delhi-for-sale/ I searched everywhere but have not found any solution! My .htaccess config is given below:

        Options +FollowSymLinks
        Options All -Indexes
        ErrorDocument 404 http://domain.in/testing/404.php
        RewriteEngine On

        #Category Link
        RewriteRule ^([a-zA-Z]+)-in-([a-zA-Z]+)-([a-zA-Z-]+)/?$ view-category.php?type=$1&dis=$2&cat=$3 [NC,L]

        #Single Property Link
        RewriteRule ^([a-zA-Z]+)-in-([a-zA-Z]+)-([a-zA-Z-]+)/([a-zA-Z0-9-]+)/?$ view-property.php?type=$1&district=$2&category=$3&title_alias=$4 [NC,L]

    I also found a similar, old question with no answer (http://webmasters.stackexchange.com/questions/16606/file-does-not-exist-in-apache-error-log). Thanks in advance for your help. PS: My site is working fine even though the Apache log shows the error.
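
    One standard hardening step that often quiets this class of log noise (a hedged suggestion, not a confirmed fix for this specific case) is to apply the rewrites only when the request does not map to a real file or directory, by placing these conditions before each rule:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d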

    Read the article

  • Analyze your IIS Log Files - Favorite Log Parser Queries

    The other day I was asked if I knew about a tool that would allow users to easily analyze IIS log files, to process and look for specific data in a way that could easily be automated. My recommendation was that if they were comfortable with a SQL-like language, they should use Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data, like IIS logs, Event Viewer entries, XML files, CSV files, the file system, and others; and it allows you...
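
    As a taste of the syntax, a typical query against IIS W3C logs looks like this (the log file mask and field choices are an example only):

        LogParser.exe -i:IISW3C "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits FROM ex*.log GROUP BY cs-uri-stem ORDER BY Hits DESC"

    -i:IISW3C selects the IIS W3C log input format, and cs-uri-stem is the requested URL path column.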

    Read the article
