Search Results

Search found 85366 results on 3415 pages for 'source file'.


  • How do I log file system read/writes by filename in Linux?

    - by Casey
    I'm looking for a simple method that will log file system operations. It should display the name of the file being accessed or modified. I'm familiar with powertop, and it appears this works to an extent, in so much as it shows the user files that were written to. Are there any other utilities that support this feature? Some of my findings:
    - powertop: best for write-access logging, but more focused on CPU activity
    - iotop: shows real-time disk access by process, but not the file name
    - lsof: shows the open files per process, but not real-time file access
    - iostat: shows the real-time I/O performance of disks/arrays, but does not indicate file or process


  • Tell the linux kernel to put a file in the disk cache?

    - by Rory
    Is there any command to force a file to be read in and loaded into the Linux disk cache? This is on an up-to-date Debian system. I know that in the general case it's better to let the Linux kernel figure this out, but I have an edge case: I have a laptop with an NFS directory mounted, and I want to play a long video file without a network problem interrupting the playing. I know that (largeish) file will be read in its entirety later on. I know that nothing else (really) will be running while playing this video. There is enough free memory to store this file. (I know I could just copy the file into a new tmpfs filesystem, but I'm curious if there's an even shorter way to do it.)


  • create fixed length flat file with Java

    - by Leslie
    I have a process that currently runs in a Delphi application that I wrote, and I need to convert it to a Java process that will run on our web application. Basically, our State Financial (legacy) system requires this file in a specific output. In Delphi it is like this:

    procedure CreateSHAREJournalFile(AppDate : string; ClassCode : string; BudgetRef : String; AccountNumber : string; FYEStep : integer);
    var
      GLFileInfo : TStrings;
      MPayFormat, HPayFormat, TPayFormat : string;
    const
      //this is the fixed length format for each item in the file
      HeaderFormat = '%-1s%-5s%-10s%-8s%-12s%-10s%-21s%-3s%-71s%-3s%-20s%-1s';
      DetailFormat = '%-1s%-5s%-9s%-10s%-10s%-10s%-10s%-8s%-6s%-5s%-5s%-5s%-8s%-25s%-10s%-60s%-28s%-66s%-28s';
    begin
      try
        //get the data from the query
        with dmJMS.qryShare do
        begin
          SQL.Clear;
          SQL.Add('SELECT SUM(TOTHRPAY) As HourPay, SUM(TOTMLPAY) As MilePay, SUM(TOTALPAY) AS TotalPay FROM JMPCHECK INNER JOIN JMPMAIN ON JMPCHECK.JURNUM = JMPMAIN.JURNUM WHERE PANELID LIKE ''' + Copy(AppDate, 3, 6) + '%'' ');
          if FYEStep > -1 then
            SQL.Add('AND WARRANTNO = ' + QuotedStr(IntToStr(FYEStep)));
          Active := True;
          //assign totals to variables so they can be padded with leading zeros
          MPayFormat := FieldByName('MilePay').AsString;
          while length(MPayFormat) < 28 do
            MPayFormat := '0' + MPayFormat;
          HPayFormat := FieldByName('HourPay').AsString;
          while length(HPayFormat) < 28 do
            HPayFormat := '0' + HPayFormat;
          TPayFormat := Format('%f', [(FieldByName('TotalPay').AsCurrency)]);
          while length(TPayFormat) < 27 do
            TPayFormat := '0' + TPayFormat;
          TPayFormat := '-' + TPayFormat;
          //create a TStringList to put each line item into
          GLFileInfo := TStringList.Create;
          //add header info using HeaderFormat defined above
          GLFileInfo.Add(Format(HeaderFormat, ['H', '21801', 'NEXT', FormatDateTime('MMDDYYYY', Today), '', 'ACTUALS', '', 'EXT', '', 'EXT', '', 'N']));
          //add detail info using DetailFormat defined above
          GLFileInfo.Add(Format(DetailFormat, ['L', '21801', '1', 'ACTUALS', AccountNumber, '', '1414000000', '111500', '', '01200', ClassCode, '', BudgetRef, '', AccountNumber + '0300', '', MPayFormat, '', MPayFormat]));
          GLFileInfo.Add(Format(DetailFormat, ['L', '21801', '2', 'ACTUALS', AccountNumber, '', '1414000000', '111500', '', '01200', ClassCode, '', BudgetRef, '', AccountNumber + '0100', '', HPayFormat, '', HPayFormat]));
          GLFileInfo.Add(Format(DetailFormat, ['L', '21801', '3', 'ACTUALS', '101900', '', '1414000000', '111500', '', '01200', ClassCode, '', BudgetRef, '', '', '', TPayFormat, '', TPayFormat]));
          //save TStringList to text file
          GLFileInfo.SaveToFile(ExtractFilePath(Application.ExeName) + 'FileTransfer\GL_' + FormatDateTime('mmddyy', Today) + SequenceID + '24400' + '.txt');
        end;
      finally
        GLFileInfo.Free;
      end;
    end;

    Is there an equivalent in Java for the Format option? Or the TStringList that saves to a text file? Thanks for any information... haven't done a lot of Java programming! Leslie


  • SSDT gotcha – Moving a file erases code analysis suppressions

    - by jamiet
    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project, then any code analysis suppressions that reference that file will disappear from the suppression file. This makes sense if you think about it, because the paths stored in the suppression file are no longer valid, but you probably won't be aware of it until it happens to you. If you don't know what code analysis is, or you don't know what the suppression file is, then you can probably stop reading now; otherwise, read on for a simple short demo. Let's create a new project and add a stored procedure to it called sp_dummy. Naming stored procedures with an sp_ prefix is generally frowned upon, and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties: A subsequent build causes a code analysis warning, as we might expect: Let's suppose we actually don't mind stored procedures with sp_ prefixes; we can just right-click on the message to suppress it and get rid of it: That causes a suppression file to get created in our project: Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now, if we simply move the file within our project to a new folder, notice that the suppression that we just created gets removed from the suppression file: As I alluded to above, this behaviour is understandable, because the path originally stored in the suppression file is no longer relevant, but you're probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of. @Jamiet


  • How do I create an encrypted file system inside a file?

    - by darent
    Recently I've found this interesting tutorial: http://flossstuff.wordpress.com/2011/08/07/using-a-file-as-a-storage-device/ It explains how to create an empty file, format it as ext4, and mount it as a device. I'd like to know if it can be created as an encrypted ext4 file system. I've tried using palimpsest (the disk utility found in the System menu) to format the already created file system, but it doesn't work, as it detects the file system being in use. If I try to unmount the file system, it won't work either, because it doesn't detect the device (since it's not a real device like a hard drive or a USB drive). So my question is: is there an option to create the file system encrypted from the beginning? I've used these commands:
    - Create an empty file 200 MB in size: dd if=/dev/zero of=/path/to/file bs=1M count=200
    - Make it ext4: mkfs -t ext4 file
    - Mount it in a folder inside my home: sudo mount -o loop file /path/to/mount_point
    Is there any way to have the mkfs command create the ext4 file system encrypted, asking for a decryption password? I'm planning to use this as a way to encrypt files inside Dropbox. Thanks for your time.


  • Reading data in from file

    - by user667430
    Hi, here are links if you want to download the application: Simple banking app, Text file with data to read. I am trying to create a simple banking application that reads in data from a text file. So far I have managed to read in all the customers, of which there are 20. However, when reading in the accounts and transactions it only reads in 20, but there are a lot more in the text file. Here is what I have so far. I think it has something to do with the nested for loop in the getNextCustomer method. using System; using System.Collections; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.IO; using System.Linq; using System.Text; using System.Windows.Forms; namespace e_SOFT_Banking { public partial class Form1 : Form { public static ArrayList bankDetails = new ArrayList(); public static ArrayList accDetails = new ArrayList(); public static ArrayList tranDetails = new ArrayList(); string inputDataFile = @"C:\e-SOFT_v1.txt"; const int numCustItems = 14; const int numAccItems = 7; const int numTransItems = 5; public Form1() { InitializeComponent(); setUpBank(); } private void btnShowData_Click_1(object sender, EventArgs e) { showListsOfCust(); } private void setUpBank() { readData(); } private void showListsOfCust() { listBox1.Items.Clear(); foreach (Customer c in bankDetails) listBox1.Items.Add(c.getCustomerNumber() + " " + c.getCustomerTitle() + " " + c.getFirstName() + " " + c.getInitials() + " " + c.getSurname() + " " + c.getDateOfBirth() + " " + c.getHouseNameNumber() + " " + c.getStreetName() + " " + c.getArea() + " " + c.getCityTown() + " " + c.getCounty() + " " + c.getPostcode() + " " + c.getPassword() + " " + c.getNumberAccounts()); foreach (Account a in accDetails) listBox1.Items.Add(a.getAccSort() + " " + a.getAccNumber() + " " + a.getAccNick() + " " + a.getAccDate() + " " + a.getAccCurBal() + " " + a.getAccOverDraft() + " " + a.getAccNumTrans()); foreach (Transaction t in tranDetails) listBox1.Items.Add(t.getDate() + " " + t.getType() + " " + t.getDescription() + " " + t.getAmount() + " " + t.getBalAfter()); } private void readData() { StreamReader readerIn = null; Transaction curTrans; Account curAcc; Customer curCust; bool anyMoreData; string[] customerData = new string[numCustItems]; string[] accountData = new string[numAccItems]; string[] transactionData = new string[numTransItems]; if (readOK(inputDataFile, ref readerIn)) { anyMoreData = getNextCustomer(readerIn, customerData, accountData, transactionData); while (anyMoreData == true) { curCust = new Customer(customerData[0], customerData[1], customerData[2], customerData[3], customerData[4], customerData[5], customerData[6], customerData[7], customerData[8], customerData[9], customerData[10], customerData[11], customerData[12], customerData[13]); curAcc = new Account(accountData[0], accountData[1], accountData[2], accountData[3], accountData[4], accountData[5], accountData[6]); curTrans = new Transaction(transactionData[0], transactionData[1], transactionData[2], transactionData[3], transactionData[4]); bankDetails.Add(curCust); accDetails.Add(curAcc); tranDetails.Add(curTrans); anyMoreData = getNextCustomer(readerIn, customerData, accountData, transactionData); } if (readerIn != null) readerIn.Close(); } } private bool getNextCustomer(StreamReader inNext, string[] nextCustomerData, string[] nextAccountData, string[] nextTransactionData) { string nextLine; int numCItems = nextCustomerData.Count(); int numAItems = nextAccountData.Count(); int numTItems =
nextTransactionData.Count(); for (int i = 0; i < numCItems; i++) { nextLine = inNext.ReadLine(); if (nextLine != null) { nextCustomerData[i] = nextLine; if (i == 13) { int cItems = Convert.ToInt32(nextCustomerData[13]); for (int q = 0; q < cItems; q++) { for (int a = 0; a < numAItems; a++) { nextLine = inNext.ReadLine(); nextAccountData[a] = nextLine; if (a == 6) { int aItems = Convert.ToInt32(nextAccountData[6]); for (int w = 0; w < aItems; w++) { for (int t = 0; t < numTItems; t++) { nextLine = inNext.ReadLine(); nextTransactionData[t] = nextLine; } } } } } } } else return false; } return true; } private bool readOK(string readFile, ref StreamReader readerIn) { try { readerIn = new StreamReader(readFile); return true; } catch (FileNotFoundException notFound) { MessageBox.Show("ERROR Opening file (when reading data in)" + " - File could not be found.\n" + notFound.Message); return false; } catch (Exception e) { MessageBox.Show("ERROR Opening File (when reading data in)" + "- Operation failed.\n" + e.Message); return false; } } } } I also have three classes one for customers, one for accounts and one for transactions, which follow in that order. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace e_SOFT_Banking { class Customer { private string customerNumber; private string customerTitle; private string firstName; private string initials; //not required - defaults to null private string surname; private string dateOfBirth; private string houseNameNumber; private string streetName; private string area; //not required - defaults to null private string cityTown; private string county; private string postcode; private string password; private int numberAccounts; public Customer(string theCustomerNumber, string theCustomerTitle, string theFirstName, string theInitials, string theSurname, string theDateOfBirth, string theHouseNameNumber, string theStreetName, string theArea, string theCityTown, string theCounty, string thePostcode, string thePassword, string theNumberAccounts) { customerNumber = theCustomerNumber; customerTitle = theCustomerTitle; firstName = theFirstName; initials = theInitials; surname = theSurname; dateOfBirth = theDateOfBirth; houseNameNumber = theHouseNameNumber; streetName = theStreetName; area = theArea; cityTown = theCityTown; county = theCounty; postcode = thePostcode; password = thePassword; setNumberAccounts(theNumberAccounts); } public string getCustomerNumber() { return customerNumber; } public string getCustomerTitle() { return customerTitle; } public string getFirstName() { return firstName; } public string getInitials() { return initials; } public string getSurname() { return surname; } public string getDateOfBirth() { return dateOfBirth; } public string getHouseNameNumber() { return houseNameNumber; } public string getStreetName() { return streetName; } public string getArea() { return area; } public string getCityTown() { return cityTown; } public string getCounty() { return county; } public string getPostcode() { return postcode; } public string getPassword() { return password; } public int getNumberAccounts() { return numberAccounts; } public void setCustomerNumber(string inCustomerNumber) { customerNumber = inCustomerNumber; } public void setCustomerTitle(string inCustomerTitle) { customerTitle = inCustomerTitle; } public void setFirstName(string inFirstName) { firstName = inFirstName; } public void setInitials(string inInitials) { initials = inInitials; } public void setSurname(string inSurname) { surname = 
inSurname; } public void setDateOfBirth(string inDateOfBirth) { dateOfBirth = inDateOfBirth; } public void setHouseNameNumber(string inHouseNameNumber) { houseNameNumber = inHouseNameNumber; } public void setStreetName(string inStreetName) { streetName = inStreetName; } public void setArea(string inArea) { area = inArea; } public void setCityTown(string inCityTown) { cityTown = inCityTown; } public void setCounty(string inCounty) { county = inCounty; } public void setPostcode(string inPostcode) { postcode = inPostcode; } public void setPassword(string inPassword) { password = inPassword; } public void setNumberAccounts(string inNumberAccounts) { try { numberAccounts = Convert.ToInt32(inNumberAccounts); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } } } Accounts: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace e_SOFT_Banking { class Account { private string accSort; private Int64 accNumber; private string accNick; private string accDate; //not required - defaults to null private double accCurBal; private double accOverDraft; private int accNumTrans; public Account(string theAccSort, string theAccNumber, string theAccNick, string theAccDate, string theAccCurBal, string theAccOverDraft, string theAccNumTrans) { accSort = theAccSort; setAccNumber(theAccNumber); accNick = theAccNick; accDate = theAccDate; setAccCurBal(theAccCurBal); setAccOverDraft(theAccOverDraft); setAccNumTrans(theAccNumTrans); } public string getAccSort() { return accSort; } public long getAccNumber() { return accNumber; } public string getAccNick() { return accNick; } public string getAccDate() { return accDate; } public double getAccCurBal() { return accCurBal; } public double getAccOverDraft() { return accOverDraft; } public int getAccNumTrans() { return accNumTrans; } public void setAccSort(string inAccSort) { accSort = inAccSort; } public void setAccNumber(string inAccNumber) { try { accNumber = Convert.ToInt64(inAccNumber); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } public void setAccNick(string inAccNick) { accNick = inAccNick; } public void setAccDate(string inAccDate) { accDate = inAccDate; } public void setAccCurBal(string inAccCurBal) { try { accCurBal = Convert.ToDouble(inAccCurBal); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } public void setAccOverDraft(string inAccOverDraft) { try { accOverDraft = Convert.ToDouble(inAccOverDraft); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } public void setAccNumTrans(string inAccNumTrans) { try { accNumTrans = Convert.ToInt32(inAccNumTrans); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } } } Transactions: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace e_SOFT_Banking { class Transaction { private string date; private string type; private string description; private double amount; //not required - defaults to null private double balAfter; public Transaction(string theDate, string theType, string theDescription, string theAmount, string theBalAfter) { date = theDate; type = theType; 
description = theDescription; setAmount(theAmount); setBalAfter(theBalAfter); } public string getDate() { return date; } public string getType() { return type; } public string getDescription() { return description; } public double getAmount() { return amount; } public double getBalAfter() { return balAfter; } public void setDate(string inDate) { date = inDate; } public void setType(string inType) { type = inType; } public void setDescription(string inDescription) { description = inDescription; } public void setAmount(string inAmount) { try { amount = Convert.ToDouble(inAmount); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } public void setBalAfter(string inBalAfter) { try { balAfter = Convert.ToDouble(inBalAfter); } catch (FormatException invalidInput) { System.Windows.Forms.MessageBox.Show("ERROR" + invalidInput.Message + "Please enter a valid number"); } } } } Any help greatly appreciated.
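    One likely culprit, offered as a guess rather than a confirmed diagnosis: readData only ever adds one Account and one Transaction per customer, and the nested loops in getNextCustomer overwrite nextAccountData and nextTransactionData on every pass, so only the last record read survives. Below is a minimal sketch of a restructured reader, reusing the classes, lists and constants from the question; it assumes the file layout really is 14 customer lines, then for each account 7 lines followed by that account's transactions at 5 lines each, which is what the nested loops imply.

    private bool getNextCustomerFixed(StreamReader reader)
    {
        // read the 14 customer lines; bail out cleanly at end of file
        string[] c = new string[numCustItems];
        for (int i = 0; i < numCustItems; i++)
            if ((c[i] = reader.ReadLine()) == null)
                return false;

        bankDetails.Add(new Customer(c[0], c[1], c[2], c[3], c[4], c[5], c[6],
                                     c[7], c[8], c[9], c[10], c[11], c[12], c[13]));

        // the 14th customer field holds the number of accounts that follow
        int accounts = Convert.ToInt32(c[13]);
        for (int a = 0; a < accounts; a++)
        {
            string[] acc = new string[numAccItems];
            for (int i = 0; i < numAccItems; i++)
                acc[i] = reader.ReadLine();
            accDetails.Add(new Account(acc[0], acc[1], acc[2], acc[3], acc[4], acc[5], acc[6]));

            // the 7th account field holds the number of transactions that follow
            int trans = Convert.ToInt32(acc[6]);
            for (int t = 0; t < trans; t++)
            {
                string[] tr = new string[numTransItems];
                for (int i = 0; i < numTransItems; i++)
                    tr[i] = reader.ReadLine();
                tranDetails.Add(new Transaction(tr[0], tr[1], tr[2], tr[3], tr[4]));
            }
        }
        return true;
    }

    With this shape, readData's loop collapses to while (getNextCustomerFixed(readerIn)) { }, and the three lists are filled as a side effect, every account and transaction included.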


  • AutoMapper MappingFunction from Source Type of NameValueCollection

    - by REA_ANDREW
    I have had a situation arise today where I need to construct a complex type from a source of a NameValueCollection. A little while back I submitted a patch for the Agatha Project to include REST (JSON and XML) support for the service contract. I realized today that, as useful as it is, it did not actually support true REST conformance, as REST should support GET so that you can use JSONP from JavaScript directly, meaning you can query cross-domain services. My original implementation for POX and JSON used the POST method, and this immediately rules out JSONP, as from my reading JSONP only works with GET requests. This then raised another issue. The current operation contract of Agatha, and one of its main benefits, is that you can supply an array of Request objects in a single request, limiting the number of server requests you need to make. Now, at the present time I am thinking that this will not be the case for the REST implementation, but it will yield the following benefits:
    - The same Request objects can be used for SOAP and REST (POX, JSON)
    - The construct of the JavaScript functions will be simpler and more readable
    - It will enable the use of JSONP for cross-domain REST services
    The current contract for the Agatha WcfRequestProcessor is, at the time of writing, the following: [ServiceContract] public interface IWcfRequestProcessor { [OperationContract(Name = "ProcessRequests")] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [TransactionFlow(TransactionFlowOption.Allowed)] Response[] Process(params Request[] requests); [OperationContract(Name = "ProcessOneWayRequests", IsOneWay = true)] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] void ProcessOneWayRequests(params OneWayRequest[] requests); } My current proposed solution, at the very early stages of my concept, is as follows: [ServiceContract] public interface IWcfRestJsonRequestProcessor { [OperationContract(Name="process")] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [TransactionFlow(TransactionFlowOption.Allowed)] [WebGet(UriTemplate = "process/{name}/{*parameters}", BodyStyle = WebMessageBodyStyle.WrappedResponse, ResponseFormat = WebMessageFormat.Json)] Response[] Process(string name, NameValueCollection parameters); [OperationContract(Name="processoneway",IsOneWay = true)] [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))] [WebGet(UriTemplate = "process-one-way/{name}/{*parameters}", BodyStyle = WebMessageBodyStyle.WrappedResponse, ResponseFormat = WebMessageFormat.Json)] void ProcessOneWayRequests(string name, NameValueCollection parameters); } Now, this part I have not yet implemented; it is the preliminary step which I have developed, which allows me to take the name of the Request type and the NameValueCollection and construct the complex type of the Request, which I can then supply to a nested instance of the original IWcfRequestProcessor to work as it normally should. To give an example, some of the urls which I envisage with this method are:
    http://www.url.com/service.svc/json/process/getweather/?location=london
    http://www.url.com/service.svc/json/process/getproductsbycategory/?categoryid=1
    http://www.url.com/service.svc/json/process/sayhello/?name=andy
    Another reason why my direction has gone to a single request for the REST implementation is because of the restrictions imposed by browsers on the length of the url. From what I have read, this is on average 2000 characters.
    I think that this is a very acceptable usage limit in the context of using one request, but I do not think it is acceptable for accommodating multiple requests chained together. I would love to be corrected on that one, I really would, but unfortunately, from what I have read, I have come to the conclusion that this is not the case.
    The mapping function
    So, as I say, this is just the first pass I have made at this, and I am not overly happy with the try catch for detecting types without default constructors. I know there is a better way, but for the minute it escapes me. I would also like to know the correct way for adding mapping functions, and not the anonymous way that I have used. To achieve the mapping I have used recursion, which I am sure is what other mapping functions use, as you do have to go as deep as the complex type. public static object RecurseType(NameValueCollection collection, Type type, string prefix) { try { var returnObject = Activator.CreateInstance(type); foreach (var property in type.GetProperties()) { foreach (var key in collection.AllKeys) { if (String.IsNullOrEmpty(prefix) || key.Length > prefix.Length) { var propertyNameToMatch = String.IsNullOrEmpty(prefix) ? key : key.Substring(property.Name.IndexOf(prefix) + prefix.Length + 1); if (property.Name == propertyNameToMatch) { property.SetValue(returnObject, Convert.ChangeType(collection.Get(key), property.PropertyType), null); } else if(property.GetValue(returnObject,null) == null) { property.SetValue(returnObject, RecurseType(collection, property.PropertyType, String.Concat(prefix, property.PropertyType.Name)), null); } } } } return returnObject; } catch (MissingMethodException) { //Quite a blunt way of dealing with Types without default constructor return null; } } Another thing is performance: I have not measured this in any way; it is, as I say, the first pass, so I hope this can be the start of a more polished implementation. I tested this out with a complex type of three levels; there is no intended logical meaning to the properties, they are simply for the purposes of example. You could call this a spiking session: from here on in, now that I know what I am building, I would take a more TDD approach. OK, purists, why did I not do this from the start? Well, I didn't; this was a brain dump, and now that I know what I am building, I can. The console test, and how I used it with AutoMapper, is as follows: static void Main(string[] args) { var collection = new NameValueCollection(); collection.Add("Name", "Andrew Rea"); collection.Add("Number", "1"); collection.Add("AddressLine1", "123 Street"); collection.Add("AddressNumber", "2"); collection.Add("AddressPostCodeCountry", "United Kingdom"); collection.Add("AddressPostCodeNumber", "3"); AutoMapper.Mapper.CreateMap<NameValueCollection, Person>() .ConvertUsing(x => { return(Person) RecurseType(x, typeof(Person), null); }); var person = AutoMapper.Mapper.Map<NameValueCollection, Person>(collection); Console.WriteLine(person.Name); Console.WriteLine(person.Number); Console.WriteLine(person.Address.Line1); Console.WriteLine(person.Address.Number); Console.WriteLine(person.Address.PostCode.Country); Console.WriteLine(person.Address.PostCode.Number); Console.ReadLine(); } Notice the convention that I am using, and that this method requires you to use: each property is prefixed with the combined, constructed names of its parents. This is the convention used by AutoMapper and it makes sense.
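    Regarding the non-anonymous way of registering mapping functions mentioned above: AutoMapper also accepts a named converter class via ITypeConverter. A hedged sketch follows; note that the exact Convert signature is an assumption that has changed between AutoMapper versions (older builds receive a ResolutionContext, later ones receive source, destination and context), so verify it against the version you reference.

    public class NameValueCollectionToPersonConverter : ITypeConverter<NameValueCollection, Person>
    {
        //assumed signature for AutoMapper builds of this era; later versions
        //changed it to Convert(TSource source, TDestination destination, ResolutionContext context)
        public Person Convert(ResolutionContext context)
        {
            //pull the source collection out of the context and reuse RecurseType
            var collection = (NameValueCollection)context.SourceValue;
            return (Person)RecurseType(collection, typeof(Person), null);
        }
    }

    The registration then replaces the anonymous lambda: AutoMapper.Mapper.CreateMap<NameValueCollection, Person>().ConvertUsing<NameValueCollectionToPersonConverter>();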
    I can also think of other uses for this, including using it with ASP.NET MVC ModelBinders for creating a complex type from the QueryString, which is itself a NameValueCollection. Hope this is of some help to people, and I would welcome any code reviews you could give me. References: Agatha: http://code.google.com/p/agatha-rrsl/ AutoMapper: http://automapper.codeplex.com/ Cheers for now, Andrew. P.S. I will have the proposed solution for a more complete REST implementation for AGATHA very soon.


  • SSIS - XML Source Script

    - by simonsabin
    The XML Source in SSIS is great if you have a 1-to-1 mapping between entity and table. You can do more complex mapping, but it becomes very messy and won't perform. What other options do you have? The challenge with XML processing is to not need a huge amount of memory. I remember using the early versions of BizTalk, which loaded the whole document into memory to map from one document type to another. This was fine for small documents but was an absolute killer for large documents. You therefore need a streaming approach. For flexibility, however, you want to be able to generate your rows easily, and if you've ever used the XmlReader you will know it's ugly code to write. That brings me on to LINQ. There is an implementation of LINQ over XML which is really nice. You can write nice LINQ queries instead of the XmlReader stuff. The downside is that by default LINQ to XML requires a whole XML document to work with. No streaming. Your code would look like this: we create an XDocument and then enumerate over a set of anonymous types we generate from our LINQ statement.

    XDocument x = XDocument.Load("C:\\TEMP\\CustomerOrders-Attribute.xml");

    foreach (var xdata in (from customer in x.Elements("OrderInterface").Elements("Customer")
                           from order in customer.Elements("Orders").Elements("Order")
                           select new { Account = customer.Attribute("AccountNumber").Value
                                      , OrderDate = order.Attribute("OrderDate").Value }
                           ))
    {
        Output0Buffer.AddRow();
        Output0Buffer.AccountNumber = xdata.Account;
        Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
    }

    As I said, the downside to this is that you are loading the whole document into memory. I did some googling and came across some helpful videos from a nice UK DPE, Mike Taulty: http://www.microsoft.com/uk/msdn/screencasts/screencast/289/LINQ-to-XML-Streaming-In-Large-Documents.aspx. These show you how you can combine LINQ and the XmlReader to get a semi-streaming approach. I took what he did and implemented it in SSIS. What I found odd was that when I ran it I got different numbers between the streamed and non-streamed versions. I found the cause was a little bug in Mike's code that causes the pointer in the XmlReader to progress past the start of the element and thus skip elements. The streaming version looks like this:

    foreach (var xdata in (from customer in StreamReader("C:\\TEMP\\CustomerOrders-Attribute.xml","Customer")
                           from order in customer.Elements("Orders").Elements("Order")
                           select new { Account = customer.Attribute("AccountNumber").Value
                                      , OrderDate = order.Attribute("OrderDate").Value }
                           ))
    {
        Output0Buffer.AddRow();
        Output0Buffer.AccountNumber = xdata.Account;
        Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);
    }

    These look very similar, and they are; the key element is the method we are calling, StreamReader. This method is what gives us streaming: what it does is return an enumerable list of elements, and because of the way that LINQ works this results in the data being streamed in.
    static IEnumerable<XElement> StreamReader(String filename, string elementName)
    {
        using (XmlReader xr = XmlReader.Create(filename))
        {
            xr.MoveToContent();
            while (xr.Read()) //Reads the first element
            {
                while (xr.NodeType == XmlNodeType.Element && xr.Name == elementName)
                {
                    XElement node = (XElement)XElement.ReadFrom(xr);

                    yield return node;
                }
            }
            xr.Close();
        }
    }

    This code is specifically designed to return a list of the elements with a specific name. The first Read reads the root element, and then the inner while loop checks to see if the current element is the type we want. If not, we do the xr.Read() again until we find the element type we want. We then use the neat function XElement.ReadFrom to read an element and all its sub-elements into an XElement. This is what is returned and can be consumed by the LINQ statement. Essentially, once one element has been read, we need to check if we are still on the same element type and name (the inner loop). This was Mike's mistake: if we called .Read again we would advance the XmlReader beyond the start of the element, and so the ReadFrom method wouldn't work. So with the code above you can use whatever LINQ statement you like to flatten your XML into the rowsets you want. You could even have multiple outputs and generate your own surrogate keys.


  • Lending epub files for limited time to users

    - by JP Hellemons
    I am looking for components to build a digital library that lends people epub files (ebooks) for about a week; it's like a digital version of the old offline public library. Now, I have found several Flash (PDF) file streaming solutions, but those would require an active internet link, and like the public library, you should be able to take your books to the beach or the pool on holiday abroad, where you have no connection. So streaming is not an option. The other file restriction method I have found was DRM, but that would require a really expensive license for Adobe Content Server 4, which is not suitable for my little hobby project. It seems that Adobe Content Server and Adobe Digital Editions are the only option at the moment. Or are there open source alternatives?


  • General Purpose ASP.NET Data Source Control

    - by Ricardo Peres
    OK, you already know about the ObjectDataSource control, so what's wrong with it? Well, for one, it doesn't pass any context to the SelectMethod: you only get the parameters supplied in the SelectParameters collection, plus the desired ordering, starting page and maximum number of rows to display. Also, you must have two separate methods, one for actually retrieving the data and the other for getting the total number of records (SelectCountMethod). Finally, you don't get a chance to alter the supplied data before you bind it to the target control. I wanted something simple to use, and more similar to ASP.NET 4.5, where you can have the select method on the page itself, so I came up with CustomDataSource. Here's how to use it (I chose a GridView, but it works equally well with any regular data-bound control):

    <web:CustomDataSourceControl runat="server" ID="datasource" PageSize="10" OnData="OnData" />
    <asp:GridView runat="server" ID="grid" DataSourceID="datasource" DataKeyNames="Id" PageSize="10" AllowPaging="true" AllowSorting="true" />

    The OnData event handler receives a DataEventArgs instance, which contains some properties that describe the desired paging location and size, and it's where you return the data plus the total record count. Here's a quick example:

    protected void OnData(object sender, DataEventArgs e)
    {
        //just return some data
        var data = Enumerable.Range(e.StartRowIndex, e.PageSize).Select(x => new { Id = x, Value = x.ToString(), IsPair = ((x % 2) == 0) });
        e.Data = data;
        //the total number of records
        e.TotalRowCount = 100;
    }

    Here's the code for the DataEventArgs:

    [Serializable]
    public class DataEventArgs : EventArgs
    {
        public DataEventArgs(Int32 pageSize, Int32 startRowIndex, String sortExpression, IOrderedDictionary parameters)
        {
            this.PageSize = pageSize;
            this.StartRowIndex = startRowIndex;
            this.SortExpression = sortExpression;
            this.Parameters = parameters;
        }

        public IEnumerable Data
        {
            get;
            set;
        }

        public IOrderedDictionary Parameters
        {
            get;
            private set;
        }

        public String SortExpression
        {
            get;
            private set;
        }

        public Int32 StartRowIndex
        {
            get;
            private set;
        }

        public Int32 PageSize
        {
            get;
            private set;
        }

        public Int32 TotalRowCount
        {
            get;
            set;
        }
    }

    As you can guess, the StartRowIndex and PageSize receive the starting row and the desired page size, where the page size comes from the PageSize property on the markup. There's also a SortExpression, which gets passed the sorted-by column and direction (if descending), and a dictionary containing all the values coming from the SelectParameters collection, if any. All of these are read-only, and it is your responsibility to fill in the Data and TotalRowCount.
    The code for the CustomDataSource is very simple:

    [NonVisualControl]
    public class CustomDataSourceControl : DataSourceControl
    {
        public CustomDataSourceControl()
        {
            this.SelectParameters = new ParameterCollection();
        }

        protected override DataSourceView GetView(String viewName)
        {
            return (new CustomDataSourceView(this, viewName));
        }

        internal void GetData(DataEventArgs args)
        {
            this.OnData(args);
        }

        protected virtual void OnData(DataEventArgs args)
        {
            EventHandler<DataEventArgs> data = this.Data;

            if (data != null)
            {
                data(this, args);
            }
        }

        [Browsable(false)]
        [DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
        [PersistenceMode(PersistenceMode.InnerProperty)]
        public ParameterCollection SelectParameters
        {
            get;
            private set;
        }

        public event EventHandler<DataEventArgs> Data;

        public Int32 PageSize
        {
            get;
            set;
        }
    }

    Also, here is the code for the accompanying data source view, which is internal, as there is no need to use it from outside its declaring assembly:

    sealed class CustomDataSourceView : DataSourceView
    {
        private readonly CustomDataSourceControl dataSourceControl = null;

        public CustomDataSourceView(CustomDataSourceControl dataSourceControl, String viewName) : base(dataSourceControl, viewName)
        {
            this.dataSourceControl = dataSourceControl;
        }

        public override Boolean CanPage
        {
            get
            {
                return (true);
            }
        }

        public override Boolean CanRetrieveTotalRowCount
        {
            get
            {
                return (true);
            }
        }

        public override Boolean CanSort
        {
            get
            {
                return (true);
            }
        }

        protected override IEnumerable ExecuteSelect(DataSourceSelectArguments arguments)
        {
            IOrderedDictionary parameters = this.dataSourceControl.SelectParameters.GetValues(HttpContext.Current, this.dataSourceControl);
            DataEventArgs args = new DataEventArgs(this.dataSourceControl.PageSize, arguments.StartRowIndex, arguments.SortExpression, parameters);

            this.dataSourceControl.GetData(args);

            arguments.TotalRowCount = args.TotalRowCount;
            arguments.MaximumRows = this.dataSourceControl.PageSize;
            arguments.AddSupportedCapabilities(DataSourceCapabilities.Page | DataSourceCapabilities.Sort | DataSourceCapabilities.RetrieveTotalRowCount);
            arguments.RetrieveTotalRowCount = true;

            if (!(args.Data is ICollection))
            {
                return (args.Data.OfType<Object>().ToList());
            }
            else
            {
                return (args.Data);
            }
        }
    }

    As always, looking forward to hearing from you!


  • Excel 2007 file writer in C# results in a corrupt file

    - by Martin
    Hi, I am using a BinaryReader to read an Excel 2007 file from an Exchange mailbox using OWA; the file is then written to disk using a BinaryWriter. My problem is that the two files don't match when the writer finishes. Worse still, Excel 2007 won't open the written file. Previously, Excel 2003 had no problem with the solution below. And Excel 2007 doesn't have an issue if the file is an Excel 2003 format file, only if the file format is Excel 2007 (*.xlsx). BinaryReader: using(System.IO.Stream stream = resource.GetInputStream(attachedFiles[k].Address)) { using(System.IO.BinaryReader br = new System.IO.BinaryReader(stream)) { attachment.Data = new byte[attachedFiles[k].Size]; int bufPosn=0, len=0; while ((len = br.Read( attachment.Data, bufPosn, attachment.Data.Length-bufPosn )) > 0) { bufPosn += len; } br.Close(); } } BinaryWriter: FileStream fs = new FileStream(fileName, FileMode.Create); BinaryWriter binWriter = new BinaryWriter(fs); binWriter.Write( content, 0, content.Length ); binWriter.Close(); fs.Close(); Suggestions gratefully received.
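    One thing worth checking, offered as a hedged guess rather than a confirmed diagnosis: an .xlsx file is a zip archive, so any extra or missing bytes corrupt it, while the old binary .xls format is more tolerant. If attachedFiles[k].Size ever disagrees with the actual stream length, attachment.Data will contain trailing zero bytes that Excel 2007 rejects. A minimal sketch that copies exactly what the stream yields, reusing the names from the question:

    using (System.IO.Stream stream = resource.GetInputStream(attachedFiles[k].Address))
    using (var ms = new System.IO.MemoryStream())
    {
        byte[] buffer = new byte[8192];
        int len;
        while ((len = stream.Read(buffer, 0, buffer.Length)) > 0)
            ms.Write(buffer, 0, len);
        attachment.Data = ms.ToArray(); // exactly the bytes received, no padding
    }

    // Writing a raw byte array needs no BinaryWriter at all:
    System.IO.File.WriteAllBytes(fileName, content);

    If the written file then matches the source byte for byte, the declared Size was the problem; if not, the stream itself is being altered upstream.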


  • batch file to merge .js files from subfolders into one combined file

    - by Andrew Johns
    I'm struggling to get this to work. There are plenty of examples on the web, but they all do something just slightly different to what I'm aiming for, and every time I think I can solve it, I get hit by an error that means nothing to me. After giving up on the JSLint.VS plugin, I'm attempting to create a batch file that I can call from a Visual Studio build event, or perhaps from cruise control, which will generate JSLint warnings for a project. The final goal is to get a combined js file that I can pass to jslint, using: cscript jslint.js < tmp.js which would validate that my scripts are ready to be combined into one file for use in a js minifier, or output a bunch of errors using standard output. But the js files that would make up tmp.js are likely to be in multiple subfolders in the project, e.g.: D:\_projects\trunk\web\projectname\js\somefile.debug.js D:\_projects\trunk\web\projectname\js\jquery\plugins\jquery.plugin.js The ideal solution would be to be able to call a batch file along the lines of: jslint.bat %ProjectPath% and this would then combine all the js files within the project into one temp js file. This way I would have flexibility in which project was being passed to the batch file. I've been trying to make this work with copy, xcopy, type, and echo, and using a for ... do loop, with dir /s etc., to make it do what I want, but whatever I try I get an error.
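    If the batch syntax keeps fighting you, a small C# console tool does the same job and can be called from a build event the same way, e.g. MergeJs.exe "$(ProjectDir)". A hedged sketch follows; the names MergeJs and tmp.js are illustrative, not an existing tool.

    using System;
    using System.IO;

    class MergeJs
    {
        static void Main(string[] args)
        {
            string projectPath = args[0];                       // the project folder, passed on the command line
            string outputFile = Path.Combine(projectPath, "tmp.js");

            using (var output = new StreamWriter(outputFile))
            {
                // SearchOption.AllDirectories mirrors what dir /s would find
                foreach (string file in Directory.GetFiles(projectPath, "*.js", SearchOption.AllDirectories))
                {
                    if (Path.GetFileName(file) == "tmp.js")
                        continue;                               // don't merge a stale output into itself
                    output.WriteLine("// ----- " + file);       // marks which file each jslint warning belongs to
                    output.WriteLine(File.ReadAllText(file));
                }
            }
        }
    }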


  • Accessing an excel file throws OleDbException but keeps handle on file

    - by Jonn
    Really odd that I'd get an OleDbException, but it turns out that the handle on the original file is still held. I've been searching through Google and keep finding the same problem, but no solutions. Connection string: "Provider=Microsoft.Jet.OLEDB.4.0;" + "Data Source=" + filePath + ";" + "Extended Properties=Excel 8.0;"; Note that it works on every file except one particular Excel file. Exception: System.Data.OleDb.OleDbException: No error information available: E_UNEXPECTED(0x8000FFFF). And then I have exception handling like this: try { IEnumerable<string> worksheetNames = GetWorkbookWorksheetNames(connString); DataSet ds; foreach (string worksheetName in worksheetNames) { OleDbDataAdapter dataAdapter = new OleDbDataAdapter("SELECT * FROM [" + worksheetName + "]", connString); ds = new DataSet(); dataAdapter.Fill(ds, "ExcelInfo"); DataTable dt = ds.Tables["ExcelInfo"]; entityList.AddRange(GetDataFromDataTable(dt, worksheetName)); } } catch (OleDbException ex) { File.Move(filePath, filePath + ".invalidFormat.xls"); } Has anyone else encountered this behavior? I'm not sure how to handle an error that keeps the handle on the file I'm supposed to process; it sort of freezes everything in place.
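    For what it's worth, one hedged explanation is ADO.NET connection pooling: the Jet provider can keep the workbook's handle alive in the pool even after the adapter that threw is gone, so the File.Move in the catch block hits a locked file. Whether pooling is the actual culprit here is an assumption, but a sketch of the loop restructured around explicit disposal (same names and usings as the question) looks like this:

    try
    {
        IEnumerable<string> worksheetNames = GetWorkbookWorksheetNames(connString);
        foreach (string worksheetName in worksheetNames)
        {
            using (var conn = new OleDbConnection(connString))
            using (var dataAdapter = new OleDbDataAdapter("SELECT * FROM [" + worksheetName + "]", conn))
            {
                var ds = new DataSet();
                dataAdapter.Fill(ds, "ExcelInfo");
                DataTable dt = ds.Tables["ExcelInfo"];
                entityList.AddRange(GetDataFromDataTable(dt, worksheetName));
            }
        }
    }
    catch (OleDbException)
    {
        // Jet pools connections; release the pool and let the GC finalize them
        // so the handle on the workbook is actually gone before the move.
        OleDbConnection.ReleaseObjectPool();
        GC.Collect();
        GC.WaitForPendingFinalizers();
        File.Move(filePath, filePath + ".invalidFormat.xls");
    }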


  • Pass a variable from the source file to an included file in PHP

    - by Alpha1
    For my website I want to store the general format of the site in a single PHP file in a single location, and each page's content in the local location of that page. I then want to pass the title and content address to the included file via a variable. However, I can't get the included format file to read the variables storing the title and content data. E.g., the file for an individual page would be: <?php $title = 'Some Title'; $source_file = 'content.php'; readfile('http:...../format.php'); ?> The format file would be: <html> ... <title> <?php echo $title; ?> </title> ... <?php include($source_file); ?> ... I recall reading somewhere that I need to include something to get the variables at the start of the format file; however, I can't remember what it is or find where I found that information.


  • Android File Picker

    - by GuyNoir
    Is there a good solution for picking a file in an Android application? I need the user to be able to browse their SD card for a file they would like to load. However, it cannot use an outside application (like andExplorer); it must stay contained inside my application. I saw one a while ago that's hosted on Google Code and used by the Gameboid, SNESoid and other similar emulators to pick ROMs. If anyone can point me to that one, that'd be just as great. Thanks!


  • SQLite file locking and DropBox

    - by Alex Jenter
    I'm developing an app in Visual C++ that uses an SQLite3 DB for storing data. It sits in the tray most of the time. I also would like to enable putting my app in a DropBox folder to share it across several PCs. It worked really well up until DropBox recently updated itself, and now it says that it "can't sync the file in use". The SQLite file is open in my app, but the lock is shared. There are some prepared statements, but all are reset immediately after stepping. Is there any way to enable synchronizing of an open SQLite database file? Thanks! Here is the simple wrapper that I use just for testing (no error handling), in case this helps: class Statement { private: Statement(sqlite3* db, const std::wstring& sql) : db(db) { sqlite3_prepare16_v2(db, sql.c_str(), sql.length() * sizeof(wchar_t), &stmt, NULL); } public: ~Statement() { sqlite3_finalize(stmt); } public: void reset() { sqlite3_reset(stmt); } int step() { return sqlite3_step(stmt); } int getInt(int i) const { return sqlite3_column_int(stmt, i); } tstring getText(int i) const { const wchar_t* v = (const wchar_t*)sqlite3_column_text16(stmt, i); int sz = sqlite3_column_bytes16(stmt, i) / sizeof(wchar_t); return std::wstring(v, v + sz); } private: friend class Database; sqlite3* db; sqlite3_stmt* stmt; }; class Database { public: Database(const std::wstring& filename = L"") : db(NULL) { sqlite3_open16(filename.c_str(), &db); } ~Database() { sqlite3_close(db); } void exec(const std::wstring& sql) { auto_ptr<Statement> st(prepare(sql)); st->step(); } auto_ptr<Statement> prepare(const tstring& sql) const { return auto_ptr<Statement>(new Statement(db, sql)); } private: sqlite3* db; };


  • SSIS parsing of an irregular flat file?

    - by ElHaix
    I'm pretty familiar with SSIS parsing of regular delimited text data files; however, I'm looking for some advice on an approach to tackle a file that looks like this test file: ISA*00* *00* *01*220220220 *ZZ*RL CODE 01*060327*1212*U*00300*000008859*0*P*:~ GS*RA*CPA-BPT*LOCALUTILITY*060319*1212*970819003*X*003030~ ST*820*000000001~ BPR*C*321.91*C*X12*CBC*04*000300488**9918939***04*000300002**1598564*070319~ TRN*1*00075319970819105029~ REF*RR*0003199708190000174858~ DTM*097*070318~ DTM*107*070318~ N1*PR*DIRECT PAYMENT~ N1*PE*ABC CORPORATE BILLER*ZZ*90005836~ ENT*1~ N1*PR*BILLING - TEST - NATTRASS~ RMR*CR*0009381082105011**142.15~ REF*TN*000303965~ DTM*109*070316~ ENT*2~ N1*PR*BILL FREID TEST~ RMR*CR*0011010451800011**179.76~ REF*TN*000304189~ The 321.91 is the total of the transaction. I would prefer to do this with SSIS, but I could also create a C# parser. Suggestions would be appreciated. Thank you.
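    A hedged sketch of the C# fallback the poster mentions: X12 data is segment-delimited, so in this sample '~' ends a segment and '*' separates elements, and a first-pass parser can split on those and pick out the segments of interest, e.g. the BPR segment whose third element carries the 321.91 total. The delimiters can vary per interchange (they are declared in the ISA header), so hard-coding them here, along with the file name testfile.txt, is an assumption.

    using System;
    using System.IO;

    class X12Sketch
    {
        static void Main()
        {
            string raw = File.ReadAllText("testfile.txt");
            string[] segments = raw.Split(new[] { '~' }, StringSplitOptions.RemoveEmptyEntries);
            foreach (string segment in segments)
            {
                string[] elements = segment.Trim().Split('*');
                if (elements[0] == "BPR")                       // payment order header segment
                    Console.WriteLine("Total: " + elements[2]); // 321.91 in the sample
                if (elements[0] == "RMR")                       // remittance detail segment
                    Console.WriteLine("Line amount: " + elements[4]); // 142.15, 179.76 in the sample
            }
        }
    }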


  • PHP file download header

    - by skidding
    I have developed a small download system in PHP, where files are downloaded through a proxy file. When I had to do this before, I just redirected by changing the Location header, which is not what I want to do now. So, obviously, the first issue that appeared is what kind of header I must set. First of all, Content-Disposition is set to "attachment", so this is good, but I can't seem to get around Content-Type. I need to set it to fit all possible files that might be downloaded through this system. I don't know how to detect the file type automatically, and I'm trying to avoid a GIANT switch. What are my options? Thanks!


  • Broke NetBeans file associations in Windows XP how do I get them back?

    - by Serhiy
    I broke my file associations in XP... Does anyone have any clue how to fix them? When I right-click and select Open With..., the application I want to use to open the file (NetBeans) is not on the list, and when I browse for it, it won't let me select it (well, it does, but then won't add it to the list). The way I broke it is by installing 6.7 and then uninstalling 6.5; since then my file associations have all been broken. I even tried uninstalling NetBeans and reinstalling it again... no luck. I even went as far as adding my own action called "OpenIt" to the file types I wanted, and that works, but only if the files/folders that contain it don't have any spaces; otherwise NetBeans throws a ".....does not exist, or is not a plain file" error. Thus nothing off the desktop can be opened... Does anyone know how I can fix this problem? Thanks.


  • Starting point for learning CAD/CAE file formats?

    - by Escader
    We are developing some stress and strain analysis software at university. Now it's time to move from rectangles, boxes and spheres to some real models, but I still have little idea where to start. In our software we are going to build a mesh and then make calculations, but how do I import solid bodies from CAD/CAE software? 1) How are CAD/CAE models organised? How are solid bodies represented? What are the capabilities of the DWG, DXF, IGES and STEP formats? There is, e.g., a complete DXF reference, but it's too difficult for me to understand without knowing the basic concepts. 2) Are there C++ libraries to import solid bodies from CAD/CAE file formats? Won't it be too difficult to build a complete model to be able to import a comprehensive file?


  • VBA: Read file from clipboard

    - by ReturningTarzan
    I'm trying to load a file in a VBA macro that has been copied from, say, an Explorer window. I can easily get the data from the clipboard using DataObject::GetFromClipboard, but the VBA interface to DataObject doesn't seem to have methods for working with any formats other than plain text; there are only GetText and SetText methods. If I can't get a file stream directly from the DataObject, the filename(s) would also do, so maybe GetText could be forced to return the name of a file placed on the clipboard? There is very little documentation to be found for VBA anywhere. :( Maybe someone could point me to an API wrapper class for VBA that has this sort of functionality?


  • How to pass a very long string/file into RESTWebservice JAX-RS Jersey

    - by Sashikiran Challa
    Hello all, I am trying to write a webservice that takes in an XML string, parses it using DOM, and extracts the particular things I want. My XML string happens to be very long, so I do not want to pass it as a @QueryParam or @PathParam. Say I write that XML string into a file: how do I go about writing a RESTful service that takes in this file, extracts whatever I want and returns the results? I am actually trying to extract a number of strings, so my output should probably be an ArrayList containing all these strings. Could somebody please shed some light on how I should go about doing this? Thanks in advance.


  • Windows hosts file and IIS binding question

    - by bmw0128
    I'm building a few SharePoint sites, and I want to make use of zones so I can set security differently in the various zones. My workstation has a local SharePoint install that I use for development. My workstation has a static IP and is connected to the internet. When I make a SharePoint site, I want to add a host header, for instance devbox.com. I do not own this name, nor do I want to. I then add an entry to my hosts file, but when I surf to http://devbox.com:8080, it does not resolve. Do I need to register the name I want to use, or should this work, i.e., does my hosts file resolve names/IPs first?
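    For reference, a hosts file entry only maps a name to an IP address; no registration is needed, and no port appears in it, since the port comes from the URL. A minimal example, on the assumption that the site is served from the same machine:

        127.0.0.1    devbox.com

    With that line in place, http://devbox.com:8080 should resolve locally; IIS then still needs a site binding whose host header is devbox.com and whose port is 8080, and a stale resolver cache can be cleared with ipconfig /flushdns.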

