Search Results

Search found 9494 results on 380 pages for 'least squares'.

Page 45/380 | < Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >

  • Signable, streamable, "readable" archive format?

    - by alexvoda
    Is there any archive format that offers the following: be digitally signable with a digital certificate from a trusted source like Verisign - to prevent changes to the file (I am not referring to read-only; in case the file was changed it should no longer verify as signed, telling the user this is not the original file); be streamable - able to be opened even if not all of the content has been transferred (also not strictly linearly); be "readable" - able to read the data without extracting to a temporary folder (AFAIK if you open a file in a zip archive it is extracted first, and this stays true even for zip-based formats like OOXML. This is not what I want); be portable - support on at least Windows, Linux and Mac OS X is a must, or at least future support; be free of patents; be open source - also preferably a license that allows commercial use (as far as I know the GPL is a share-alike licence, so it doesn't allow commercial use; BSD, on the other hand, allows it). Note: though it may come in handy eventually, I cannot think right now of a scenario that would require both point 1 and point 2 simultaneously. Or let's leave it at: being able to check the signature only once the whole file has been downloaded. I am not interested in: being able to be compressed; being supported on legacy systems. Does any existing archive format fit this description (tar evolutions like DAR and pax come to mind)? If there is, are there programming libraries available for the above mentioned OSs? If not, would it be hard to create such a thing? EDIT: clarified point 5. EDIT 2: added a note to clarify points 1 and 2. P.S.: This is my first question on Stack Overflow.
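
    One rough illustration of the fallback in the note - checking the signature only once the whole file has been downloaded - is to hash the archive incrementally as it arrives and compare the result against a digest signed by the trusted certificate. A minimal Python sketch of the hashing half; the file name, the signed manifest and the value of expected are placeholders, not part of any particular archive format:

        import hashlib

        def digest_streamed_file(path, chunk_size=1 << 20):
            # Hash the archive in fixed-size chunks so the whole file never
            # has to sit in memory at once.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        # The digest would then be compared with the digest carried in a
        # detached, certificate-signed manifest shipped alongside the archive.
        expected = "..."  # placeholder: digest taken from the signed manifest
        if digest_streamed_file("archive.dar") != expected:
            print("Signature check failed: this is not the original file")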

    Read the article

  • How to convert many thousands of lines of VBScript to C#?

    - by Ross Patterson
    I have a collection of about 10,000 small VBScript programs (50-100 lines each) and a small collection of larger ones, and I'm looking for a way to convert them to C# without resorting to by-hand transliteration. The programs are automated test cases for a web application, written for HP/Mercury's QuickTest Pro, and I'm trying to turn them into test cases for Selenium. Luckily, the tests appear to be well-written, using a library of building blocks and idioms (the larger programs), so the test cases actually resemble a domain-specific language more than they do VBScript, and the QTP-ness is well-buried inside the libraries. Ideally, what I'm searching for is a tool that can do the syntactic transformation from VBScript to C# for both the DSL-ish test cases and also the more complicated building-block libraries. That would leave me with a manual cleanup of the libraries, and probably very little work on the test cases. If I could find a VBScript-to-VB.NET translator, I'd take that also, as I suspect I could compile the VB.NET and then de-compile to C# using .NET Reflector or something similar. Plan B is to write a translator of my own for the test cases, since they're in a very straight-line style, but it wouldn't help with the libraries. Any suggestions? I haven't written a compiler in at least 15 years, and while I haven't forgotten how, I'm not looking forward to it - least of all for VBScript!
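
    For what it's worth, Plan B for the straight-line test cases could start out as little more than a table of regex rewrites. A very rough Python sketch; the patterns and the Login/VerifyTitle building-block calls are invented examples, not QTP's or your library's actual syntax:

        import re

        # Each rule maps a VBScript idiom to its C# spelling; order matters.
        RULES = [
            (re.compile(r"^\s*'(.*)$"), r"// \1"),                      # comments
            (re.compile(r"^\s*Dim\s+(\w+)\s*$"), r"var \1;"),           # declarations
            (re.compile(r"^\s*Call\s+(\w+)\s*\((.*)\)\s*$"), r"\1(\2);"),
            (re.compile(r"^\s*(\w+)\s+(.+)$"), r"\1(\2);"),             # bare sub call: Foo a, b
        ]

        def translate_line(line):
            for pattern, replacement in RULES:
                if pattern.match(line):
                    return pattern.sub(replacement, line)
            return "// TODO (untranslated): " + line   # flag anything the rules miss

        def translate_file(text):
            return "\n".join(translate_line(l) for l in text.splitlines())

        print(translate_file('Call Login("admin", "secret")\nVerifyTitle "Home Page"'))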

    Read the article

  • OO model for NSXMLParser when delegate is not self

    - by richard
    Hi, I am struggling with the correct design for the delegates of NSXMLParser. In order to build my table of Foos, I need to make two types of webservice calls: one for the whole table and one for each row. It's essentially a master-query then detail-query, except the master-query result xml doesn't return enough information, so I then need to query the detail for each row. I'm not dealing with enormous amounts of data. Anyway - previously I've just used NSXMLParser *parser = [[NSXMLParser alloc] init]; [parser setDelegate:self]; [parser parse]; and implemented all the appropriate delegate methods in whatever class I'm in. In an attempt at cleanliness, I've now created two separate delegate classes and done something like: NSXMLParser *xp = [[NSXMLParser alloc] init]; MyMasterXMLParserDelegate *masterParserDelegate = [[MyMasterXMLParserDelegate alloc] init]; [xp setDelegate:masterParserDelegate]; [xp parse]; In addition to being cleaner (in my opinion, at least), it also means each of the -parser:didStartElement: implementations doesn't spend most of its time trying to figure out which xml it's parsing. So now the real crux of the problem. Before I split out the delegates, I had in the main class - which was also implementing the delegate methods - a class-level NSMutableArray that I would just put my objects-created-from-xml in when -parser:didEndElement: found the 'end' of each record. Now that the delegates are in separate classes, I can't figure out how to have the -parser:didEndElement: in the 'detail' delegate class "return" the created object to the calling class. At least, not in a clean OO way. I'm sure I could do it with all sorts of nasty class methods. Does the question make sense? Thanks.

    Read the article

  • Reversible numerical calculations in Prolog

    - by user8472
    While reading SICP I came across the logic programming chapter, 4.4. Then I started looking into the Prolog programming language and tried to understand some simple assignments in Prolog. I found that Prolog seems to have trouble with numerical calculations. Here is the computation of a factorial in standard Prolog: f(0, 1). f(A, B) :- A > 0, C is A-1, f(C, D), B is A*D. The issues I find are that I need to introduce two auxiliary variables (C and D), a new syntax (is), and that the problem is non-reversible (i.e., f(5,X) works as expected, but f(X,120) does not). Naively, I would expect that at the very least C is A-1, f(C, D) above could be replaced by f(A-1,D), but even that does not work. My question is: Why do I need to do this extra "stuff" in numerical calculations but not in other queries? I do understand (and SICP is quite clear about it) that in general, information on "what to do" is insufficient to answer the question of "how to do it". So the declarative knowledge in (at least some) math problems is insufficient to actually solve these problems. But that begs the next question: How does this extra "stuff" in Prolog help me to restrict the formulation to just those problems where "what to do" is sufficient to answer "how to do it"?

    Read the article

  • How can I optimize the SELECT statement running on an Oracle database?

    - by Elvis Lou
    I have a SELECT statement in ORACLE: SELECT COUNT(DISTINCT ds1.endpoint_msisdn) multiple30, dss1.service, dss1.endpoint_provisioning_id, dss1.company_scope, Nvl(x.subscription_status, dss1.subscription_status) subscription_status FROM daily_summary ds1 join daily_summary ds2 ON ds1.endpoint_msisdn = ds2.endpoint_msisdn, daily_summary_static dss1, daily_summary_static dss2, (SELECT NULL subscription_status FROM dual UNION ALL SELECT -2 subscription_status FROM dual) x WHERE ds1.summary_ts >= To_date('10-04-2012', 'dd-mm-yyyy') - 30 AND ds1.summary_ts <= To_date('10-04-2012', 'dd-mm-yyyy') AND dss1.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30 AND dss1.last_active <= To_date('10-04-2012', 'dd-mm-yyyy') AND dss2.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30 AND dss2.last_active <= To_date('10-04-2012', 'dd-mm-yyyy') AND dss1.service <> dss2.service AND ( dss1.company_scope = 2 OR dss1.company_scope = 5 ) AND ( dss2.company_scope = 2 OR dss2.company_scope = 5 ) AND dss1.company_scope = dss2.company_scope AND ds1.endpoint_noc_id = dss1.endpoint_noc_id AND ds1.endpoint_host_id = dss1.endpoint_host_id AND ds1.endpoint_instance_id = dss1.endpoint_instance_id AND ds2.endpoint_noc_id = dss2.endpoint_noc_id AND ds2.endpoint_host_id = dss2.endpoint_host_id AND ds2.endpoint_instance_id = dss2.endpoint_instance_id AND dss1.endpoint_provisioning_id = dss2.endpoint_provisioning_id AND Least(1, ds1.total_actions) = 1 AND Least(1, ds2.total_actions) = 1 GROUP BY dss1.service, dss1.endpoint_provisioning_id, dss1.company_scope, Nvl(x.subscription_status, dss1.subscription_status); This query took about 26 minutes to return in my environment, but if I remove the section: dss1.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30 AND dss1.last_active <= to_date('10-04-2012','dd-mm-yyyy') AND dss2.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30 AND dss2.last_active <= to_date('10-04-2012','dd-mm-yyyy') AND it took only about 20 seconds to run. We have an index on the column last_active; I don't know why this section slows down the performance so much. Any ideas?

    Read the article

  • PHP regex for password validation

    - by Fabio Anselmo
    I am not getting the desired effect from a script. I want the password to contain A-Z, a-z, 0-9, and special chars: at least 2 digits, at least 2 special chars, and a string length of at least 8. So I want to force the user to use at least 2 digits and at least 2 special chars. OK, my script works, but it forces the digits or special chars to be back to back. I don't want that. E.g. the password testABC55$$ is valid - but I don't want to require that form. Instead I want test$ABC5#8 to be valid as well. So basically the digits/special chars can be the same or different, but they should be allowed to be split up in the string. PHP CODE: $uppercase = preg_match('#[A-Z]#', $password); $lowercase = preg_match('#[a-z]#', $password); $number = preg_match('#[0-9]#', $password); $special = preg_match('#[\W]{2,}#', $password); $length = strlen($password) >= 8; if(!$uppercase || !$lowercase || !$number || !$special || !$length) { $errorpw = 'Bad Password';
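
    The [\W]{2,} class repetition is what forces the two specials to sit back to back - it matches two or more special characters in a row, not two specials anywhere in the string. Counting individual matches (preg_match_all in PHP) removes that restriction; here is a Python sketch of the counting version of the same checks, offered as an illustration rather than a drop-in replacement:

        import re

        def password_errors(password):
            errors = []
            if not re.search(r"[A-Z]", password):
                errors.append("needs an uppercase letter")
            if not re.search(r"[a-z]", password):
                errors.append("needs a lowercase letter")
            if len(re.findall(r"[0-9]", password)) < 2:
                errors.append("needs at least 2 digits (anywhere in the string)")
            if len(re.findall(r"[^A-Za-z0-9]", password)) < 2:
                errors.append("needs at least 2 special characters (anywhere)")
            if len(password) < 8:
                errors.append("must be at least 8 characters long")
            return errors

        print(password_errors("testABC55$$"))   # [] - valid, specials happen to be adjacent
        print(password_errors("test$ABC5#8"))   # [] - also valid, digits/specials split up
        print(password_errors("short1!"))       # several errors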

    Read the article

  • Splitting a set of object into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in allotted memory). To overcome this, I split S into several non-intersecting subsets: S = S1 + S2 + ... + Sn and build Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not quite fast. The algorithm I currently have is to iteratively add each next object from S into one of the Si, so that this results in the least possible (at this stage) increase in combined Di size: for x in S: i = such i that size(f(Si + {x})) - size(f(Si)) is min Si = Si + {x} This gives practically useful results, but is certainly pretty far from the optimum (i.e. the minimal possible combined size). Also, this is slow. To speed it up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si. Is there any standard approach to such kinds of problems? I know of the branch and bound algorithm family, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into the Si in reasonable time. But is there some common iterative-improvement algorithm?
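
    For reference, the greedy loop above written out as a runnable Python sketch. size_of_structure stands in for size(f(Si)) and is assumed to be the expensive call, similar_enough is the cheap pre-filter, and opening a new subset when nothing passes the filter is an assumption of this sketch rather than part of the original description:

        def greedy_partition(objects, size_of_structure, similar_enough):
            subsets = []   # the Si
            sizes = []     # cached size(f(Si)) for each subset
            for x in objects:
                best_i, best_growth = None, None
                for i, subset in enumerate(subsets):
                    if not similar_enough(x, subset):
                        continue   # skip the expensive evaluation for dissimilar subsets
                    growth = size_of_structure(subset + [x]) - sizes[i]
                    if best_growth is None or growth < best_growth:
                        best_i, best_growth = i, growth
                if best_i is None:
                    # Nothing similar enough (or no subsets yet): start a new one.
                    subsets.append([x])
                    sizes.append(size_of_structure([x]))
                else:
                    subsets[best_i].append(x)
                    sizes[best_i] += best_growth
            return subsets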

    Read the article

  • How can I use SQL to select duplicate records, along with counts of related items?

    - by mipadi
    I know the title of this question is a bit confusing, so bear with me. :) I have a (MySQL) database with a Person record. A Person also has a slug field. Unfortunately, slug fields are not unique. There are a number of duplicate records, i.e., the records have different IDs but the same first name, last name, and slug. A Person may also have 0 or more associated articles, blog entries, and podcast episodes. If that's confusing, here's a diagram of the structure: I would like to produce a list of records that match these criteria: duplicate records (i.e., same slug field) for people who also have at least 1 article, blog entry, or podcast episode. I have a SQL query that will list all records with the same slug fields: SELECT id, first_name, last_name, slug, COUNT(slug) AS person_records FROM people_person GROUP BY slug HAVING (COUNT(slug) > 1) ORDER BY last_name, first_name, id; But this includes records for people that may not have at least 1 article, blog entry, or podcast episode. Can I tweak this to fit the second criterion?

    Read the article

  • How do the major C# DI/IoC frameworks compare?

    - by Slomojo
    At the risk of stepping into holy war territory, what are the strengths and weaknesses of these popular DI/IoC frameworks, and could one easily be considered the best? Ninject, Unity, Castle.Windsor, Autofac, StructureMap. Are there any other DI/IoC frameworks for C# that I haven't listed here? In the context of my use case, I'm building a client WPF app and a WCF/SQL services infrastructure; ease of use (especially in terms of clear and concise syntax), consistent documentation, good community support and performance are all important factors in my choice. Update: The resources and duplicate questions cited appear to be out of date; can someone with knowledge of all these frameworks come forward and provide some real insight? I realise that most opinion on this subject is likely to be biased, but I am hoping that someone has taken the time to study all these frameworks and can offer at least a generally objective comparison. I am quite willing to make my own investigations if this hasn't been done before, but I assumed this was something at least a few people had done already. Second Update: If you do have experience with more than one DI/IoC container, please rank and summarise the pros and cons of those, thank you. This isn't an exercise in discovering all the obscure little containers that people have made; I'm looking for comparisons between the popular (and active) frameworks.

    Read the article

  • Automating Excel 2010 using F#

    - by Clive Norman
    I have been searching for a FAQ to tell me how to open an Excel Workbook/Worksheet and also how to save the file once I have finished. I notice that in most FAQs and all the books I have purchased on F#, one is shown how to create a new Workbook/Worksheet but never how to either open or save it. Being a newbie to F#, I would very much appreciate it if anyone could kindly provide me with either an answer or perhaps a few pointers? Update: As for why F# and not C# or VB? I am pleased to say that in spite of being a newbie (with the exception of Forth, VBA & Excel 2003, 2007 & 2010 and Visual Basic), I can do this in both VB, VBA & C#, and since I've been retired on medical grounds, with plenty of time unfortunately on my hands, I like to continually set myself challenges to keep my little grey cells active, and being a sucker for trying new languages... well! F# is now an integral part of Visual Studio 2010, so I thought - why not. Consider this - if we are not willing to use or at least try a new language, I would always wonder whether I might have preferred it to VBA, VB, C#... and if you look at it from another point of view, if no one is going to use it, why create it in the first place? I suppose you can say: if cave men hadn't experimented and made fire by rubbing two sticks together, where would we be now, and would matches have been invented? Although a complete answer would be good, I prefer a few pointers, to keep my challenge going. And last but not least - thank you for taking the trouble to respond!

    Read the article

  • Java programming

    - by Baiba
    OK, I have a version of the code. Please tell me what I need to do to get out of the program the INDEX NUMBER OF THE COLUMN WITH THE FEWEST ZEROS. class Uzd{ public static void main(String args[]){ int mas[][]= {{3,4,7,5,0}, {4,5,3,0,1}, {8,2,4,0,3}, {7,0,2,0,1}, {0,0,1,3,0}}; int nul_mas[] = new int[5]; int nul=0; for(int j=0;j<5;j++){ nul=0; for(int i=0;i<5;i++){ if(mas[i][j]==0){ nul++; } } nul_mas[j]=nul; } for(int i=0;i<5;i++){ for(int j=0;j<5;j++){ System.out.print(mas[i][j]); } System.out.println(); } System.out.println(); /* blank line */ System.out.println("///zeros in each column///"); for(int i=0;i<5;i++){System.out.print(nul_mas[i]);} System.out.println(); }} and after running it shows: 34750 45301 82403 70201 00130 ///zeros in each column/// 12032 But I don't need the zero count of each column; I need to get out the index of the column in which there are the fewest zeros! In this situation it is column number 2!
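
    The missing step is just an argmin over nul_mas. A language-neutral Python sketch of that last pass, to be translated back into one more for-loop over nul_mas in the Java program:

        # nul_mas holds the number of zeros found in each column, e.g. [1, 2, 0, 3, 2]
        nul_mas = [1, 2, 0, 3, 2]

        best_index = 0
        for j in range(1, len(nul_mas)):
            # Keep the index whose zero count is smaller than the best seen so far.
            if nul_mas[j] < nul_mas[best_index]:
                best_index = j

        print("column with the fewest zeros:", best_index)   # prints 2 for this matrix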

    Read the article

  • To implement a remote desktop sharing solution

    - by Cameigons
    Hi, I'm in the planning/modeling phase of developing a remote desktop sharing solution, which must be web-browser based. In other words: a user will be able to see and interact with someone's remote desktop using his web browser. All the user who wants to share his desktop will need, besides his browser, is to install an add-in, which he's going to be prompted about when necessary. The add-in is required since (AFAIK) no browser technology allows desktop control from an app running within the browser alone. The add-in installation process must be as simple and transparent as possible to the user (similar to AdobeConnectNow, in case anyone's acquainted with it). The user can share his desktop with lots of people at the same time, but concede desktop control to only one of them at a time (it makes no sense otherwise). Project requirements: All technology employed must be open-source license compatible. Both front ends are going to be in Flash (browser). Must work on Linux, Windows XP (and later) and Mac OS X. Must work at least with IE7 (and later) and Firefox 3.0 (and later). At the very least, once the sharer's stream hits the server from where it'll be broadcast, from there on it must be broadcast in FLV (so I'm deciding whether to do the encoding at the client's machine (the one sharing the desktop) or send it in some other format to the server and encode it there). Performance and scalability are important: it must be able to handle hundreds of dozens of users (one desktop sharer, the rest viewers). We'll definitely be using Red5. My doubts mostly concern implementing the desktop publisher side (add-in and streamer): 1) Are you aware of other projects that I could look into for ideas? (I'm aware of bigbluebutton.org and code.google.com/p/openmeetings) 2) Should I base myself on VNC? 3) Bearing in mind the need to have it working cross-platform, what language should I go with? (My team is very used to Java and I have some knowledge of C/C++, but anything goes really). 4) Any other advice is appreciated.

    Read the article

  • jQuery $.ajax calls success handler when request fails because of browser reloading

    - by Martin
    I have the following code: $.ajax({ type: "POST", url: url, data: sendable, dataType: "json", success: function(data) { if(customprocessfunc) customprocessfunc(data); }, error: function(XMLHttpRequest, textStatus, errorThrown){ // error handler here } }); I have a timer which makes AJAX requests often. If I do not receive anything in 'data', I show an error message to the user - it means something went wrong on the server. The problem is when the user reloads the page while the AJAX call is in progress. I can see in Firebug that the AJAX call fails (the URL is colored red and no HTTP status is displayed), so I expect that jQuery will stop the request or at least go to the error handler. But it goes to the success handler and passes null in the 'data' variable. As a result, when the user reloads the page, sometimes he can see my big red message about an unknown error (because data is null). Is there any way to make jQuery abort the request on page reload, or at least not call my success function? I have no way to know in the success handler why the data is null - did it come back empty from the server, or was the call aborted because of the reload?

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired - data for at least a few days. The hardware interface is a UART to USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks. What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached. Finally my question - would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
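
    A sketch of that cache-aside idea, using Python and the built-in sqlite3 module purely as a stand-in for C# and SqlServerCe; read_logs_from_device is a placeholder for the slow UART transfer, and minutes are modeled as simple integer indices. The flow is: query the cache first, fetch only the minutes that are missing, and store whatever comes back.

        import sqlite3

        def get_logs(conn, device, start_minute, end_minute):
            """Return all logs in [start_minute, end_minute], hitting the device only for gaps."""
            cur = conn.execute(
                "SELECT minute, payload FROM logs WHERE minute BETWEEN ? AND ?",
                (start_minute, end_minute),
            )
            cached = dict(cur.fetchall())

            missing = [m for m in range(start_minute, end_minute + 1) if m not in cached]
            if missing:
                # Slow path: ~30 logs/second over the UART/USB link.
                fetched = read_logs_from_device(device, missing)  # placeholder for the real transfer
                conn.executemany("INSERT OR REPLACE INTO logs VALUES (?, ?)", fetched.items())
                conn.commit()
                cached.update(fetched)

            return [cached[m] for m in sorted(cached)]

        conn = sqlite3.connect("logger_cache.db")
        conn.execute("CREATE TABLE IF NOT EXISTS logs (minute INTEGER PRIMARY KEY, payload TEXT)")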

    Read the article

  • Overriding form submit based on counting elements with jquery.each

    - by MrGrigg
    I am probably going about this all wrong, but here's what I'm trying to do: I have a form that has approximately 50 select boxes that are generated dynamically based on some database info. I do not have control over the IDs of the text boxes, but I can add a class to each of them. Before the form is submitted, the user needs to select at least one item from the select box, but no more than four. I'm a little bit sleepy, and I'm unfamiliar with jQuery overall, but I'm trying to override $("form").submit, and here's what I'm doing. Any advice or suggestions are greatly appreciated. $("form").submit(function() { $('.sportsCoachedValidation').each(function() { if ($('.sportsCoachedValidation :selected').text() != 'N/A') { sportsSelected++ } }); if (sportsSelected >= 1 && sportsSelected <= 4) { return true; } else if (sportsSelected > 4) { alert('You can only coach up to four sports.'); sportsSelected = 0; return false; } else { alert('Please select at least one coached sport.'); sportsSelected = 0; return false; } });

    Read the article

  • How to programmatically create a node in Drupal 8?

    - by chapka
    I'm designing a new module in Drupal 8. It's a long-term project that won't be going public for a few months at least, so I'm using it as a way to figure out what's new. In this module, I want to be able to programmatically create nodes. In Drupal 7, I would do this by creating the object, then calling "node_submit" and "node_save". These functions no longer exist in Drupal 8. Instead, according to the documentation, "Modules and scripts may programmatically submit nodes using the usual form API pattern." I'm at a loss. What does this mean? I've used Form API to create forms in Drupal 7, but I don't get what the docs are saying here. What I'm looking to do is programmatically create at least one and possibly multiple new nodes, based on information not taken directly from a user-presented form. I need to be able to: 1) Specify the content type 2) Specify the URL path 3) Set any other necessary variables that would previously have been handled by the now-obsolete node_object_prepare() 4) Commit the new node object I would prefer to be able to do this in an independent, highly abstracted function not tied to a specific block or form. So what am I missing?

    Read the article

  • Do the 'up to date' guarantees provided by final fields in Java's memory model extend to indirect references?

    - by mattbh
    The Java language spec defines semantics of final fields in section 17.5: The usage model for final fields is a simple one. Set the final fields for an object in that object's constructor. Do not write a reference to the object being constructed in a place where another thread can see it before the object's constructor is finished. If this is followed, then when the object is seen by another thread, that thread will always see the correctly constructed version of that object's final fields. It will also see versions of any object or array referenced by those final fields that are at least as up-to-date as the final fields are. My question is - does the 'up-to-date' guarantee extend to the contents of nested arrays, and nested objects? An example scenario: Thread A constructs a HashMap of ArrayLists, then assigns the HashMap to final field 'myFinal' in an instance of class 'MyClass' Thread B sees a (non-synchronized) reference to the MyClass instance and reads 'myFinal', and accesses and reads the contents of one of the ArrayLists In this scenario, are the members of the ArrayList as seen by Thread B guaranteed to be at least as up to date as they were when MyClass's constructor completed?

    Read the article

  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s, and the file download rate at least 2000/s, with an average file size of 15 KB. The network bandwidth is 1 Gbps. My approach so far has been: creating 600 socket threads, each having 60 sockets and using WSAEventSelect to wait for data to read. As soon as a file download is complete, add that memory address (of the downloaded file) to a pipeline (a simple vector) and fire another request. When the total downloaded data is more than 50 MB among all socket threads, write all the downloaded files to disk and free the memory. So far, this approach has not been very successful, with the connection rate not shooting beyond 2900/s and the downloaded data rate even less. Can somebody suggest an alternative approach which could give me better stats? Also, I am working on a Windows Server 2008 machine with 8 GB of memory. And do we need to hack the kernel so that we can use more threads and memory? Currently I can create a max of 1500 threads, and memory usage does not go beyond 2 GB [which technically should be much more, as this is a 64-bit machine]. And IOCP is out of the question, as I have no experience with it so far and have to fix this application today. Thanks guys!

    Read the article

  • Reading and writing in parallel

    - by Malfist
    I want to be able to read and write a large file in parallel, or if not in parallel, at least in blocks so that I don't use up so much memory. This is my current code: // Define memory stream which will be used to hold encrypted data. MemoryStream memoryStream = new MemoryStream(); // Define cryptographic stream (always use Write mode for encryption). CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write); //start encrypting using (BinaryReader reader = new BinaryReader(File.Open(fileIn, FileMode.Open))) { byte[] buffer = new byte[1024 * 1024]; int read = 0; do { read = reader.Read(buffer, 0, buffer.Length); cryptoStream.Write(buffer, 0, read); } while (read == buffer.Length); } // Finish encrypting. cryptoStream.FlushFinalBlock(); // Convert our encrypted data from a memory stream into a byte array. //byte[] cipherTextBytes = memoryStream.ToArray(); //write our memory stream to a file memoryStream.Position = 0; using (BinaryWriter writer = new BinaryWriter(File.Open(fileOut, FileMode.Create))) { byte[] buffer = new byte[1024 * 1024]; int read = 0; do { read = memoryStream.Read(buffer, 0, buffer.Length); writer.Write(buffer, 0, read); } while (read == buffer.Length); } // Close both streams. memoryStream.Close(); cryptoStream.Close(); As you can see, it reads the entire file into memory, encrypts it, then writes it out. If I happen to be encrypting files that are very large (2GB+) it tends not to work, or at the very least, consumes ~97% of my memory. How could I do it in a more effective manner?
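
    The MemoryStream is what ends up holding the whole ciphertext; the usual cure is to chain the streams so each block flows straight from the input file, through the transform, into the output file, and never accumulates. A language-neutral Python sketch of that chunked pipeline - transform_block is a placeholder, not a real cipher:

        CHUNK = 1024 * 1024  # 1 MB blocks, as in the original loop

        def transform_block(block):
            # Placeholder for the real encryption transform (the CryptoStream's
            # ICryptoTransform in the C# code) - here it just passes data through.
            return block

        def encrypt_file(file_in, file_out):
            with open(file_in, "rb") as src, open(file_out, "wb") as dst:
                while True:
                    block = src.read(CHUNK)
                    if not block:
                        break
                    # Each block is transformed and written immediately, so memory
                    # use stays at one block instead of the whole file.
                    dst.write(transform_block(block))

        encrypt_file("big_input.bin", "big_output.bin")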

    Read the article

  • Filtering string in Python

    - by Ecce_Homo
    I am making an algorithm for checking a string (an e-mail address) - printing something like "E-mail address is valid" - but there are rules. The first part of the e-mail has to be a string of 1-8 characters (it can contain letters, numbers, underscore [ _ ] ... all the parts that an e-mail may contain), and after the @ the second part of the e-mail has to be a string of 1-12 characters (also containing only legal characters), and it has to end with the top-level domain .com. EDIT: email = raw_input ("Enter the e-mail address:") length = len (email) if length > 20: print "Address is too long" elif length < 5: print "Address is too short" if not email.endswith (".com"): print "Address doesn't contain correct domain ending" first_part = len (splitting[0]) second_part = len(splitting[1]) account = splitting[0] domain = splitting[1] for c in account: if c not in "abcdefghijklmopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.": print "Invalid char", "->", c,"<-", "in account name of e-mail" for c in domain: if c not in "abcdefghijklmopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.": print "Invalid char", "->", c,"<-", "in domain of e-mail" if first_part == 0: print "You need at least 1 character before the @" elif first_part> 8: print "The first part is too long" if second_part == 4: print "You need at least 1 character after the @" elif second_part> 16: print "The second part is too long" else: # if everything is fine return this print "E-mail address is valid" EDIT: After reporting what is wrong with the input, I now need to make Python recognize a valid address and print "E-mail address is valid". This is the best I can do with my knowledge... and we can't use regular expressions; the teacher said we are going to learn them later.
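
    Given the no-regex constraint, one way to finish this is to split once on "@" and then validate each half with plain string and length checks. A Python sketch under the rules as stated (1-8 characters before the @, 1-12 after, ending in .com); it also fills in the splitting = email.split("@") step the code above relies on, and the error messages are only illustrative:

        ALLOWED = set("abcdefghijklmnopqrstuvwxyz"
                      "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                      "0123456789_.")

        def check_email(email):
            if email.count("@") != 1:
                return "Address must contain exactly one @"

            account, domain = email.split("@")   # the missing 'splitting' step

            if not 1 <= len(account) <= 8:
                return "Account part must be 1-8 characters"
            if not 1 <= len(domain) <= 12:
                return "Domain part must be 1-12 characters"
            if not domain.endswith(".com"):
                return "Address doesn't end with .com"
            if len(domain) == len(".com"):
                return "You need at least 1 character before .com"

            bad = [c for c in account + domain if c not in ALLOWED]
            if bad:
                return "Invalid characters: " + ", ".join(sorted(set(bad)))

            return "E-mail address is valid"

        print(check_email("ecce@python.com"))       # E-mail address is valid
        print(check_email("toolongaccount@x.com"))  # Account part must be 1-8 characters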

    Read the article

  • WCF - Return object without serializing?

    - by Mayo
    One of my WCF functions returns an object that has a member variable of a type from another library that is beyond my control. I cannot decorate that library's classes. In fact, I cannot even use DataContractSurrogate because the library's classes have private member variables that are essential to operation (i.e. if I return the object without those private member variables, the public properties throw exceptions). If I say that interoperability for this particular method is not needed (at least until the owners of this library can revise to make their objects serializable), is it possible for me to use WCF to return this object such that it can at least be consumed by a .NET client? How do I go about doing that? Update: I am adding pseudo code below... // My code, I have control [DataContract] public class MyObject { private TheirObject theirObject; [DataMember] public int SomeNumber { get { return theirObject.SomeNumber; } // public property exposed private set { } } } // Their code, I have no control public class TheirObject { private TheirOtherObject theirOtherObject; public int SomeNumber { get { return theirOtherObject.SomeOtherProperty; } set { // ... } } } I've tried adding DataMember to my instance of their object, making it public, using a DataContractSurrogate, and even manually streaming the object. In all cases, I get some error that eventually leads back to their object not being explicitly serializable.

    Read the article

  • Jasper Reports and iReport issue

    - by William
    I am having an issue with JasperReports that I cannot solve. I am using Eclipse, OpenReports 3.2 and iReport 3.7. The issue I am having is that the report does nothing. When I preview the report in iReport I can at least get a "Document has no pages" message, but when I try to open it using OpenReports it doesn't do anything. I get the OpenReports header and the copyright message but nothing between them. I was able to track it down to line 150 in ReportRunAction.java in OpenReports. That line is: jasperPrint = jasperEngine.fillReport(reportInput); At least that is the line the page dies on. It trips the catch block that the line is inside of, but the error is empty. When I try to print the description it is null. I can't swear that the issue isn't that parameter. From looking around, all I have been able to find is something about how the report needs to be compiled with the same version of the jasperreports.jar that OpenReports uses. I have no idea how to tell if/what version of JasperReports is being bundled into the .jasper file though. Is that my problem? If so, how do I tell/set the version of the jar that gets bundled? If not; help!

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem, which is apparently not easy to solve with SQL, at least for me: Assume the following table structure (should work in SQLite, PostgreSQL, MySQL): CREATE TABLE a ( a_id INT PRIMARY KEY ); INSERT INTO a (a_id) VALUES (1), (2), (3), (4); CREATE TABLE b ( b_id INT PRIMARY KEY, a_id INT, name VARCHAR(255) NOT NULL ); INSERT INTO b (b_id, a_id, name) VALUES (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'), (4, 2, 'foo'), (5, 2, 'bar'), (6, 3, 'foo'); Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something similar to: a_id = 3 is missing name "bar"; a_id = 4 is missing names "foo" and "bar". Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be: SELECT a.a_id, CONCAT('|', GROUP_CONCAT(name ORDER BY NAME ASC SEPARATOR '|'), '|') as names FROM a LEFT JOIN b USING( a_id ) GROUP BY a.a_id HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%'; I have yet to measure the performance tomorrow, but I severely doubt it will be fast for tens of thousands of entries in a and three times as many in b. Furthermore, we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.

    Read the article

  • How can I "reset" styles for a given HTML element?

    - by Sambo
    I am working on an embeddable JavaScript which inserts HTML elements onto unknown pages. I have no control over the stylesheets of the pages I'll be inserting HTML into. The problem is that the HTML I insert will be mistakenly styled by the page, and I want to prevent that. What's the least verbose and/or resource-intensive way to go about ensuring the elements I insert are exactly as I'd like them to be? Is there an easy way to clear all styling for a given HTML element and its children? In Firebug, for instance, you can remove all styling. I feel like there must - at the very least should - be a native way to exempt certain HTML elements from stylesheet rules. Example: var myHTML = $("<div>my html in here</div>"); myHTML.resetAllStyles(); //<--- what would this function do? myHTML.appendTo("body"); I really want to avoid explicitly stating the attributes I want for each element I insert... PS: I have a lot of experience with JS and CSS, so you can assume that I'll understand pretty much anything you're going to tell me.

    Read the article

< Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >