Search Results

Search found 89549 results on 3582 pages for 'large file support'.


  • FileNameColumnName property, Flat File Source Adapter : SSIS Nugget

    - by jamiet
    I saw a question on MSDN’s SSIS forum the other day that went something like this: I’m loading data into a table from a flat file but I want to be able to store the name of that file as well. Is there a way of doing that? I don’t want to come across as disrespecting those who took the time to reply, but there were a few answers along the lines of “loop over the files using a For Each, store the file name in a variable yadda yadda yadda” when in fact there is a much, much simpler way of accomplishing this; it just happens to be a little hidden away, as I shall now explain! The Flat File Source Adapter has a property called FileNameColumnName which, for some reason, isn’t exposed through the Flat File Source editor; it is, however, exposed via the Advanced Properties. You’ll see in the screenshot above that I have set FileNameColumnName=“Filename” (it doesn’t matter what name you use; any non-zero-length string will work). What this will do is create a new column in our dataflow called “Filename” that contains, unsurprisingly, the name of the file from which the row was sourced. All very simple. This is particularly useful if you are extracting data from multiple files using the MultiFlatFile Connection Manager, as it allows you to differentiate between data from each of the files, as you can see in the following screenshot. So there you have it, the FileNameColumnName property; a little-known secret of SSIS. I hope it proves to be useful to someone out there. @Jamiet

    Read the article

  • How to access an encrypted INI file from C on an embedded system with little RAM

    - by Mawg
    I want to encrypt an INI file using a Delphi program on a Windows PC. Then I need to decrypt and access it in C on an embedded system with little RAM. I will do that once and fetch all the info; I will not be continuously accessing the INI file whenever my program needs data from it. Any advice as to which encryption to use? Nothing too heavyweight, just good enough for "security through obscurity", and FOSS for both Delphi and C. And how can I decrypt and get all the info from the INI file using as little RAM as possible, and then free any allocated RAM? I hope that someone can help. [Update] I am currently using an Atmel UC3, although I am not sure if that will be the final choice. It has 512 kB flash and 128 kB RAM. For an INI file, I am talking of at most 8 sections, with a total of at most 256 entries, each at most 8 chars. I chose INI (but am not married to it) because I have had major problems in the past when the format of a data file changes, no matter whether binary or text. For text, I prefer the free format of INI (on the PC), but suppose I could switch to line_1=data_1, line_2=data_2 and accept that if I add new fields in future software releases they must come at the end, even if it is not pretty when read directly by humans. I suppose if I choose a fixed-format text file then I never need get more than one line into RAM at a time ...

    Read the article

  • How to distinguish doc, ppt, xls files, without looking at file extension

    - by Shelby. S
    So I was wondering how you would differentiate ppt, xls and doc files from each other in Linux, regardless of extension. I tried 'file' but, from the looks of it, all MS Office files are categorized under the same file type. Similarly I'm having trouble with docx, xlsx and pptx files, since they're essentially all zip files containing a bunch of XML. I also tried a Python script importing the magic module, but no go. I'm trying to identify the actual file type for a sandbox analysis; I need the real type in order to run the sample in the sandbox VM (the Windows VM runs everything by extension). Let's say my sample file is labeled as try.exe, but in reality it's just a doc file. My script will rename it as try.exe.doc, which would work fine for doc files. But since Linux identifies all MS Office files as simple DOC files, there's no way to identify ppt or xls files, and as a result the sandbox won't analyze the sample correctly.
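    A rough sketch of one way to do this, using only the Python standard library: legacy .doc/.xls/.ppt files are OLE2 compound files that share the same magic bytes, so the heuristic below scans the raw bytes for the UTF-16LE stream names that distinguish them (the third-party olefile module does this more robustly), while the OOXML formats are zip archives whose entry names identify the application. The stream and entry names used here are the common ones, but treat them as assumptions to verify against real samples.

    ```python
    # Sketch: classify Office files by content rather than extension.
    import zipfile

    OLE2_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"

    def office_type(path):
        with open(path, "rb") as f:
            head = f.read(8)
            f.seek(0)
            data = f.read()          # fine for sandbox samples; stream for huge files

        if head == OLE2_MAGIC:
            # Legacy binary formats: look for the telltale stream names,
            # which are stored in UTF-16LE inside the compound file.
            for name, kind in (("WordDocument", "doc"),
                               ("Workbook", "xls"),
                               ("PowerPoint Document", "ppt")):
                if name.encode("utf-16-le") in data:
                    return kind
            return "ole2-unknown"

        if zipfile.is_zipfile(path):
            # OOXML formats: the zip entry names identify the application.
            with zipfile.ZipFile(path) as zf:
                names = set(zf.namelist())
            if "word/document.xml" in names:
                return "docx"
            if "xl/workbook.xml" in names:
                return "xlsx"
            if "ppt/presentation.xml" in names:
                return "pptx"
            return "zip-unknown"

        return "unknown"
    ```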

    Read the article

  • Create a batch file to copy and rename file

    - by Estate Master
    I have little to no experience in writing batch files. I need to write one that copies a file to a new folder and renames it. At the moment, my batch file consists of only this command: COPY ABC.PDF \\Documents As you can see, it only copies the file ABC.pdf to the network folder 'Documents'. However, I need to change this so it renames the file to ABCxxx.pdf, where xxx is a text variable that I would like to set somewhere in the batch file. For example, if xxx = "_Draft", then the file would be renamed ABC_Draft.pdf after it is copied. Thanks

    Read the article

  • PHP bitwise left shifting 32 spaces problem and bad results with large numbers arithmetic operations

    - by Victor Stanciu
    Hello, I have the following problems. First: I am trying to do a 32-space bitwise left shift on a large number, and for some reason the number is always returned as-is. For example: echo(516103988<<32); // echoes 516103988 Because shifting the bits to the left one space is the equivalent of multiplying by 2, I tried multiplying the number by 2^32, and it works; it returns 2216649749795176448. Second: I have to add 9379 to the number from the above point: printf('%0.0f', 2216649749795176448 + 9379); // prints 2216649749795185920 It should print 2216649749795185827.
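    The first symptom is consistent with a 32-bit PHP build, where a shift count of 32 is effectively reduced modulo the integer width; the second is ordinary IEEE-754 double rounding, since PHP silently falls back to floats once a value no longer fits a native integer. The minimal sketch below reproduces the rounding; Python is used here only because its arbitrary-precision integers make the exact value easy to show.

    ```python
    # Why 2216649749795176448 + 9379 prints as ...85920: a 64-bit double has a
    # 53-bit mantissa and cannot represent every integer of this magnitude.
    exact = 2216649749795176448 + 9379   # arbitrary-precision integer arithmetic
    print(exact)                          # 2216649749795185827 (the expected value)
    print(float(exact))                   # ~2.216649749795186e+18
    print(int(float(exact)))              # 2216649749795185920 (nearest representable double)
    ```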

    Read the article

  • dictionary interface for large data sets

    - by Richard
    I have a set of key/value pairs (all text) that is too large to load into memory at once. I would like to interact with this data via a Python dictionary-like interface. Does such a module already exist? Reading key values should be efficient, and values should be compressed on disk to save space.
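    One minimal sketch of such an interface, using only the standard library: dbm provides the on-disk key/value store and zlib compresses the values, wrapped in a MutableMapping so it behaves like a dict. The class and file names are illustrative; shelve, or third-party stores such as LMDB or LevelDB bindings, are alternatives worth comparing.

    ```python
    import dbm
    import zlib
    from collections.abc import MutableMapping

    class CompressedDiskDict(MutableMapping):
        """Dict-like interface over an on-disk dbm store with zlib-compressed values."""

        def __init__(self, path):
            self._db = dbm.open(path, "c")   # create the store if it doesn't exist

        def __getitem__(self, key):
            return zlib.decompress(self._db[key.encode()]).decode()

        def __setitem__(self, key, value):
            self._db[key.encode()] = zlib.compress(value.encode())

        def __delitem__(self, key):
            del self._db[key.encode()]

        def __iter__(self):
            return (k.decode() for k in self._db.keys())

        def __len__(self):
            return len(self._db.keys())

        def close(self):
            self._db.close()

    # usage
    d = CompressedDiskDict("kv.db")
    d["alpha"] = "some long text value" * 100
    print(len(d["alpha"]))
    d.close()
    ```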

    Read the article

  • Fast ruby http library for large XML downloads

    - by Vlad Zloteanu
    I am consuming various XML-over-HTTP web services returning large XML files (over 2 MB). What would be the fastest Ruby HTTP library to reduce the 'downloading' time? Required features: both GET and POST requests; gzip/deflate downloads (Accept-Encoding: deflate, gzip) - very important. I am thinking between open-uri, Net::HTTP and curb, but you can also come up with other suggestions. P.S. To parse the response, I am using a pull parser from Nokogiri, so I don't need an integrated solution like rest-client or hpricot.

    Read the article

  • Reading large excel file with PHP

    - by Itamar Bar-Lev
    I'm trying to read a 17 MB Excel file (2003) with PHPExcel 1.7.3c, but it crashes while loading the file, after exceeding the 120-second limit I have. Is there another library that can do it more efficiently? I have no need for styling; I only need it to support UTF-8. Thanks for your help

    Read the article

  • How to export large data to Excel

    - by mavera
    I have a criteria page in my ASP.NET application. When the user clicks the report button, the results are first bound to a datagrid in a new page, then this page is exported to an Excel file by changing the content type. That normally works, but when a large amount of data comes in, a System.OutOfMemoryException is thrown. Does anyone know a way to fix this problem, or another useful technique to do this?

    Read the article

  • Python: Huge file reading by using linecache Vs normal file access open()

    - by user335223
    Hi, I am in a situation where multiple threads are reading the same huge file, with multiple file pointers to the same file. The file will have at least 1 million lines. Each line's length varies from 500 characters to 1500 characters. There won't be "write" operations on the file. Each thread will start reading the same file from a different line. Which is the most efficient way? Using Python's linecache, normal readline(), or is there any other efficient way?
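    One caveat worth noting: linecache caches the whole file in memory, so for a million 500-1500 character lines it can cost more RAM than it saves. A common pattern, sketched below under the assumption that the file does not change, is to build a byte-offset index of line starts once and have each thread open its own handle and seek directly to its starting line. The file name and thread layout here are illustrative.

    ```python
    import threading

    def build_line_index(path):
        """Return a list where offsets[i] is the byte offset of line i."""
        offsets = [0]
        with open(path, "rb") as f:
            for line in f:
                offsets.append(offsets[-1] + len(line))
        return offsets[:-1]

    def reader(path, offsets, start_line, count, results, slot):
        lines = []
        with open(path, "rb") as f:          # each thread gets its own file handle
            f.seek(offsets[start_line])
            for _ in range(count):
                lines.append(f.readline())
        results[slot] = lines

    path = "huge.txt"                         # hypothetical input file
    offsets = build_line_index(path)          # one pass over the file, done once
    results = [None] * 4
    threads = [threading.Thread(target=reader,
                                args=(path, offsets, i * 1000, 1000, results, i))
               for i in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    ```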

    Read the article

  • Viewing large XML files in eclipse?

    - by Paul Wicks
    I'm working on a project involving some large XML files (from 50 MB to over 1 GB) and it would be nice if I could view them in Eclipse (a simple text view is fine) without Java running out of heap space. I've tried tweaking the amount of memory available to the JVM in eclipse.ini but haven't had much success. Any ideas?

    Read the article

  • Processing large (over 1 Gig) files in PHP using stream_filter_*

    - by mike
    $fp_src=fopen('file','r'); $filter = stream_filter_prepend($fp_src, 'convert.iconv.ISO-8859-1/UTF-8'); while(fread($fp_src,4096)){ ++$count; if($count%1000==0) print ftell($fp_src)."\n"; } When I run this the script ends up consuming ~ 200 MB of RAM after going through just 35MB of the file. Running it without the stream_filter zips right through with a constant memory footprint of ~10 MB. What gives?

    Read the article

  • How/where to run the algorithm on large dataset?

    - by niko
    I would like to run the PageRank algorithm on a graph with 4,000,000 nodes and around 45,000,000 edges. Currently I use the neo4j graph database and a classic relational database (Postgres), and for software projects I mostly use C# and Java. Does anyone know what would be the best way to perform a PageRank computation on such a graph? Is there any way to modify the PageRank algorithm so it can run on a home computer or server (48 GB RAM), or is there any useful cloud service to push the data through the algorithm and retrieve the results? The project is at the research stage, so if a cloud service is used, I would like a provider that doesn't require much administration and service setup, and instead lets me focus on running the algorithm once and getting the results without much administrative overhead.
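    For scale, a sparse-matrix power iteration comfortably fits a 4M-node / 45M-edge graph in a few GB of RAM, well under 48 GB. Below is a rough sketch using numpy and scipy.sparse; the damping factor, iteration count, and the way the edges are loaded are illustrative assumptions, not a prescription for the asker's Neo4j/Postgres setup.

    ```python
    import numpy as np
    import scipy.sparse as sp

    def pagerank(src, dst, n, d=0.85, iters=50):
        """src, dst: integer arrays of edge endpoints (src -> dst); n: node count."""
        out_deg = np.bincount(src, minlength=n).astype(float)
        weights = 1.0 / out_deg[src]                       # column-stochastic edge weights
        M = sp.csr_matrix((weights, (dst, src)), shape=(n, n))
        r = np.full(n, 1.0 / n)                            # initial uniform rank vector
        dangling = out_deg == 0                            # nodes with no outgoing edges
        for _ in range(iters):
            # spread dangling mass uniformly, apply damping, add teleport term
            r = d * (M @ r) + d * r[dangling].sum() / n + (1 - d) / n
        return r

    # edges could be exported from Neo4j/Postgres as two integer columns, e.g.:
    # src, dst = np.loadtxt("edges.csv", delimiter=",", dtype=np.int64, unpack=True)
    # ranks = pagerank(src, dst, n=4_000_000)
    ```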

    Read the article

  • Managing large downloadable content on mobile devices

    - by larromba
    This is a general question about how best to manage large downloadable content on mobile devices. Let's consider a situation whereby a mobile app needs to download a number of very large content items, like HD videos, that are over 500 MB but under 2 GB. Now, let's assume this content delivery system should be scalable. Would it be a fair assumption that: A reputable cloud service would be needed - if so, what is a reliable and cost-effective cloud service for mobile devices, based on anyone's experience? Large content downloads should only be attempted over a wifi connection, so the end user doesn't incur large costs, e.g. when travelling. Downloads should carry on in the background if possible, as the user won't want to wait in an app for long periods. If the downloads don't finish, or the OS quits the app, all downloads should carry on when the app is next activated? Are there any other pitfalls anyone may have experienced when managing large content on mobile devices? Thanks.

    Read the article

  • How much to charge for being available for support?

    - by Tvrdy
    I am working in a small development firm; our main products are ASP.NET and SQL Server based. We are deploying a new version of our product and a client wants 4-hour response time technical support. Some of this support time will fall outside regular business hours, and our boss asked us how much we should charge for being available outside regular business hours. Is it 50% of our hourly fee? 20%? Any suggestion is appreciated; he is pushing us to speak up and I don't have any idea. I don't want to underprice my free time. Thanks.

    Read the article

  • best practices question: How to save a collection of images and a java object in a single file?

    - by Richard
    Hi all, I am making a Java program that has a collection of flash-card-like objects. I store the objects in a JTree composed of DefaultMutableTreeNodes. Each node has a user object attached to it with a few string/native data type parameters. However, I also want each of these objects to have an image (typical formats: jpg, png, etc.). I would like to be able to store all of this information, including the images and the tree data, to disk in a single file so the file can be transferred between users and the entire tree, including the images and parameters for each object, can be reconstructed. I had not approached a problem like this before, so I was not sure what the best practices were. I found XMLEncoder (http://java.sun.com/j2se/1.4.2/docs/api/java/beans/XMLEncoder.html) to be a very effective way of storing my tree and the native data type information. However, I couldn't figure out how to save the image data itself inside the XML file, and I'm not sure it is possible since the data is binary (so restricted characters would be invalid). My next thought was to associate a hash string instead of an image with each user object, and then gzip together all of the images, with the hash strings as the names, and the XML-encoded tree in the same compressed file. That seemed really contrived though. Does anyone know a good approach for this type of issue? Thanks!
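    The approach most answers converge on for "XML plus binary images in one transferable file" is a zip container, the same trick the Office OOXML formats use: the XML-encoded tree goes in as one entry and each image as another, referenced from the XML by entry name. A minimal sketch of that layout follows; it is written in Python purely to illustrate the container structure (java.util.zip's ZipOutputStream/ZipFile can produce and read the same layout), and the entry names are hypothetical.

    ```python
    # Sketch of the container layout: one zip holding the XML-encoded tree plus
    # the images, with the XML referencing images by their archive entry names.
    import zipfile

    def save_deck(path, tree_xml, images):
        """images: dict mapping entry names like 'images/card1.png' to raw bytes."""
        with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
            zf.writestr("tree.xml", tree_xml)
            for name, data in images.items():
                zf.writestr(name, data)

    def load_deck(path):
        with zipfile.ZipFile(path) as zf:
            tree_xml = zf.read("tree.xml").decode("utf-8")
            images = {n: zf.read(n) for n in zf.namelist() if n != "tree.xml"}
        return tree_xml, images
    ```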

    Read the article

  • Chunking large rsync transfers?

    - by Gabe Martin-Dempesy
    We use rsync to update a mirror of our primary file server to an off-site colocated backup server. One of the issues we currently have is that our file server has 1 TB of mostly smaller files (in the 10-100 kB range), and when we're transferring this much data, we often end up with the connection being dropped several hours into the transfer. Rsync doesn't have a resume/retry feature that simply reconnects to the server to pick up where it left off -- you need to go through the file comparison process, which ends up being very lengthy with the number of files we have. The solution that's recommended to get around this is to split your large rsync transfer into a series of smaller transfers. I've figured the best way to do this is by first letter of the top-level directory names, which doesn't give us a perfectly even distribution, but is good enough. I'd like to confirm whether my methodology for doing this is sane, or if there's a simpler way to accomplish the goal. To do this, I iterate through A-Z, a-z, 0-9 to pick a one-character $prefix. Initially I was thinking of just running rsync -av --delete --delete-excluded --exclude "*.mp3" "src/$prefix*" dest/ (--exclude "*.mp3" is just an example, as we have a more lengthy exclude list for removing things like temporary files). The problem with this is that any top-level directories in dest/ that are no longer present on src will not get picked up by --delete. To get around this, I'm instead trying the following: rsync \ --filter 'S /$prefix*' \ --filter 'R /$prefix*' \ --filter 'H /*' \ --filter 'P /*' \ -av --delete --delete-excluded --exclude "*.mp3" src/ dest/ I'm using show and hide over include and exclude, because otherwise --delete-excluded will delete anything that doesn't match $prefix. Is this the most effective way of splitting the rsync into smaller chunks? Is there a more effective tool, or a flag that I've missed, that might make this simpler?

    Read the article

  • How to uncompress a 9GB file in Windows FAT32

    - by Kashif
    I have a 2 GB RAR file that contains a 9 GB video file. I'm using a FAT32 file system. Now I want to unzip that file, but after 4 GB I get an error due to the FAT32 file size limit. I want to know how I can extract that video. I know that one way is to convert my partition to NTFS, but I don't want to go that route. I've also tried 7-zip, but that again gives an error after 4 GB. Another way is to split the file, but I don't know how to split a video file that is zipped. Any ideas? How can I get around this problem?

    Read the article

  • cordova 2.0.0 download file Android

    - by N0rA
    I tried to use the Cordova 2.0.0 API FileTransfer().download() and I got 2 consecutive errors: 1) download error source file 2) download error target file Here is my code... function downloadMaterial() { var fileTransfer = new FileTransfer(); var serverURL = 'http://img.youm7.com/images/NewsPics/large/S2200921125437.jpg'; var uri = encodeURI(serverURL); var filePath = persistent_root.fullPath+"/" + fileName; //filePath = file:///data/data/com.example.studyapp/S2200921125437.jpg var onSuccess = function (entry) { console.log("download complete: " + entry.fullPath); }; var onError = function (error) { console.log("download error source " + error.source); console.log("download error target " + error.target); console.log("upload error code " + error.code); }; fileTransfer.download(uri, filePath , onSuccess, onError); } Do you have any idea what I should do? Thanks in advance ...

    Read the article

  • File size and mime-type before upload

    - by Marcos Placona
    Hi, I've scoured every single topic on this forum to try and find an answer before I posted this. Most people say you should simply use SWFUpload, some others mention ActiveX, and it keeps going. I know this is doable, as Google does it with Gmail when you try to upload a file that's bigger than 25 MB, or an executable. My question is: how can I determine the file size and mime-type before the file actually hits my server? I initially thought it was an impossible task, but Google proves me wrong. Could anyone give me a definitive answer on how to accomplish such a task? Thanks

    Read the article

  • Reading a large file into Perl array of arrays and manipulating the output for different purposes

    - by Brian D.
    Hello, I am relatively new to Perl and have only used it for converting small files into different formats and feeding data between programs. Now, I need to step it up a little. I have a file of DNA data that is 5,905 lines long, with 32 fields per line. The fields are not delimited by anything and vary in length within the line, but each field is the same size on all 5,905 lines. I need each line fed into a separate array from the file, and each field within the line stored as its own variable. I am having no problems storing one line, but I am having difficulty storing each line successively through the entire file. This is how I separate the first line of the full array into individual variables: my $SampleID = substr("@HorseArray", 0, 7); my $PopulationID = substr("@HorseArray", 9, 4); my $Allele1A = substr("@HorseArray", 14, 3); my $Allele1B = substr("@HorseArray", 17, 3); my $Allele2A = substr("@HorseArray", 21, 3); my $Allele2B = substr("@HorseArray", 24, 3); ...etc. My issues are: 1) I need to store each of the 5,905 lines as a separate array. 2) I need to be able to reference each line based on the sample ID, or a group of lines based on population ID, and sort them. I can sort and manipulate the data fine once it is defined in variables; I am just having trouble constructing a multidimensional array with each of these fields so I can reference each line at will. Any help or direction is much appreciated. I've pored over the Q&A sections on here, but have not found the answer to my questions yet. Thanks! -Brian

    Read the article
