Search Results

Search found 9451 results on 379 pages for 'big johnson'.


  • Sending a big file via a web service causes an OOM exception

    - by phenevo
    Hi, I have a web service with this method:

        [WebMethod]
        public byte[] GetFile(string FName)
        {
            System.IO.FileStream fs1 = System.IO.File.Open(FName, FileMode.Open, FileAccess.Read);
            byte[] b1 = new byte[fs1.Length];
            fs1.Read(b1, 0, (int)fs1.Length);
            fs1.Close();
            return b1;
        }

    It works with a small file of about 1 MB, but with a Photoshop file (about 1.5 GB) I get a System.OutOfMemoryException on this line: Byte[] img = new Byte[fs.Length]; The idea is that I have a WinForms application which gets this file and saves it to the local disk.
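    One common cure is to stream the file in fixed-size chunks instead of materialising it as a single array, so neither side ever allocates 1.5 GB. A minimal Java sketch of that chunked-copy idea (illustrative only, not the poster's .NET service):

        import java.io.*;

        public class ChunkedCopy {
            // Copies a file to any OutputStream without ever holding more
            // than one 64 KB buffer in memory.
            public static void copy(File source, OutputStream out) throws IOException {
                byte[] buffer = new byte[64 * 1024];
                try (InputStream in = new BufferedInputStream(new FileInputStream(source))) {
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }
                }
            }
        }

    In a web service the same idea usually shows up as a "give me chunk N" method or a streamed response that the WinForms client writes to disk as it arrives.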

    Read the article

  • Testing a (big) collection retrieved from a db

    - by Bas
    I'm currently doing integration testing against a live database and I have the following statement:

        var date = DateTime.Parse("01-01-2010 20:30:00");
        var result = datacontext.Repository<IObject>().Where(r => r.DateTime > date).First();
        Assert.IsFalse(result.Finished);

    I need to test that the results retrieved by the query, where the given date is less than the object's date, all have Finished set to false. I don't know how many results I will get back; currently I take the first object in the list and check whether that object has Finished set to false. I know testing only the first item of the list is not valid testing. As a solution I could iterate through the list and check Finished on every item, but putting logic in a test kind of goes against the idea of writing 'good' tests. So my question is: does anyone have a good solution for properly testing the results of this list?
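    One declarative assertion over the whole collection keeps the loop out of the test body. A small JUnit 4 sketch of that idea (a generic helper, not the poster's C#/NUnit code):

        import static org.junit.Assert.assertTrue;
        import java.util.List;
        import java.util.function.Predicate;

        class CollectionAssert {
            // Asserts that the query returned something and that every element
            // satisfies the given condition, e.g. r -> !r.isFinished().
            static <T> void assertAllMatch(String message, List<T> items, Predicate<T> condition) {
                assertTrue("expected at least one row", !items.isEmpty());
                assertTrue(message, items.stream().allMatch(condition));
            }
        }

    The test then reads as a single statement about the whole result set, with no hand-written iteration logic inside it.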

    Read the article

  • Big problems with an iPhone ad hoc build

    - by phil swenson
    No matter what I do, I can't get my ad hoc provisioning profile to work. In Organizer, I always get "A valid signing identity matching this profile cannot be found in your keychain" for my ad hoc profile. I have my distribution cert installed in my login keychain. I dragged the ad hoc .mobileprovision file into Xcode... that's pretty much all there is to it, right? I searched around and found suggestions like re-creating the cert/profile. Did that; same thing. Also, make sure your login keychain is the default. It is. I even tried a different computer. Same result. In the Xcode AdHoc target I don't have a distribution target to pick. This all used to work, but obviously I messed something up... Perhaps my process is just wrong (it's been months since I did this). Does someone have a step-by-step list of how to set up for ad hoc distribution?

    Read the article

  • Why is a .NET generic dictionary so big when serialized

    - by thefroatgt
    I am serializing a generic dictionary in VB.NET and I am very surprised that it comes out at about 1.3 KB with a single item. Am I doing something wrong, or is there something else I should be doing? I have a large number of dictionaries and it is killing me to send them all across the wire. The code I use for serialization is:

        Dim dictionary As New Dictionary(Of Integer, Integer)
        Dim stream As New MemoryStream
        Dim bformatter As New BinaryFormatter()
        dictionary.Add(1, 1)
        bformatter.Serialize(stream, dictionary)
        Dim len As Long = stream.Length
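    BinaryFormatter writes type and assembly metadata along with the data, which dwarfs a one-entry map; writing the entries by hand keeps only the payload. A rough Java sketch of such a hand-rolled format (an assumed wire layout for illustration, not the poster's VB.NET code):

        import java.io.*;
        import java.util.Map;

        public class CompactMap {
            // Writes the entry count followed by raw key/value ints:
            // 4 bytes plus 8 bytes per entry, with no type metadata at all.
            public static byte[] write(Map<Integer, Integer> map) throws IOException {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                DataOutputStream out = new DataOutputStream(bytes);
                out.writeInt(map.size());
                for (Map.Entry<Integer, Integer> e : map.entrySet()) {
                    out.writeInt(e.getKey());
                    out.writeInt(e.getValue());
                }
                out.flush();
                return bytes.toByteArray();   // 12 bytes for a single-entry map
            }
        }

    The trade-off is that both ends must agree on the layout, since the stream no longer describes itself.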

    Read the article

  • Big, Thick Reference Books (PHP / MySQL / Unix) [closed]

    - by Josh K
    I'm looking for in-depth reference books / guides for PHP, MySQL, and Unix. I'm aware there are other questions pertaining to good books for short references of function names, or detailed beginner guides to these systems. I'm looking for something different: I want a book that I can use as a quick but in-depth reference (a decent write-up, not a pocket reference guide) to common features (joins, string manipulation, regular expressions, etc.) while also providing a detailed explanation of the inner workings of the system itself.

    Read the article

  • What is the best method to start understanding a BIG project's source code? [closed]

    - by Mr.32
    Possible Duplicate: How do you dive into large code bases? Sometimes, before developing a new product, we need to understand an existing product or existing source code. Sometimes, to write another small module of a big project, we need to understand that big code base. In our case we need to study and understand a project with lots of files and folders. What is the easiest and most comfortable way to do this? (Especially for C and C++, under Linux.)

    Read the article

  • What is the best way to manage all images in a big project: inline images, background images, CSS sprite images?

    - by metal-gear-solid
    How do you manage all the images in a big project: inline images, background images, CSS sprite images? Do you follow a naming convention? Do you create sub-folders to organize images? In a big project, how do you make it easy for new people on the development team to tell whether an image they want to use (because it appears in a new PSD they received from the designer) is already available in the project's images folder, and how can they find it easily?

    Read the article

  • Arbitrary-precision arithmetic explanation

    - by TT
    I'm trying to learn C and have come across the inability to work with REALLY big numbers (e.g., 100 digits, 1000 digits, etc.). I am aware that libraries exist to do this, but I want to attempt to implement it myself. I just want to know if anyone has, or can provide, a very detailed, dumbed-down explanation of arbitrary-precision arithmetic. Thanks!
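    The core trick is to keep the number as a sequence of small digits and propagate carries yourself, exactly like pencil-and-paper arithmetic. A toy base-10 addition in Java, just to show the carry loop (a real implementation would use a much larger base, and the poster would write it in C):

        // Adds two non-negative decimal strings, e.g. add("999", "1") -> "1000".
        public class BigAdd {
            public static String add(String a, String b) {
                StringBuilder result = new StringBuilder();
                int i = a.length() - 1, j = b.length() - 1, carry = 0;
                while (i >= 0 || j >= 0 || carry != 0) {
                    int da = i >= 0 ? a.charAt(i--) - '0' : 0;   // 0 once a is exhausted
                    int db = j >= 0 ? b.charAt(j--) - '0' : 0;
                    int sum = da + db + carry;
                    result.append((char) ('0' + sum % 10));      // current digit
                    carry = sum / 10;                            // carry into the next column
                }
                return result.reverse().toString();
            }
        }

    Subtraction, multiplication and division follow the same pattern, just with borrow handling and nested digit loops.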

    Read the article

  • [PHP] Lowering script memory usage when creating a "big" file

    - by Riccardo
    Hi there people, it looks like I'm facing a typical out-of-memory problem with a PHP script. The script, originally developed by another person, serves as an XML sitemap creator, and on large websites it uses quite a lot of memory. I thought the problem was an algorithm holding data in memory until the job was done, but digging into the code I discovered that the script works this way:

        open the output file (it will contain the XML sitemap entries)
        in the loop:
            for each entry to be added to the sitemap, do fwrite
        close the file
        end

    Although no huge arrays or variables are kept in memory, this technique still uses a lot of memory. I thought that maybe PHP was buffering the fwrites under the hood and "flushing" the data at the end of the script, so I modified the code to close and re-open the file every Nth record, but memory usage is still the same. I'm debugging the script on my computer and watching memory usage: while the script runs, the allocated memory keeps growing. Is there a particular technique to instruct PHP to free unused memory, or to force flushing of buffers, if any? Thanks

    Read the article

  • More efficient left join on a big table

    - by Zeus
    Hello, I have the following (simplified) query:

        select P.peopleID, P.peopleName, ED.DataNumber
        from peopleTable P
        left outer join (
            select PE.peopleID, PE.DataNumber
            from formElements FE
            inner join peopleExtra PE on PE.ElementID = FE.FormElementID
            where FE.FormComponentID = 42
        ) ED on ED.peopleID = P.peopleID

    Without the sub-query this procedure takes ~7 seconds, but with it, it takes about 3 minutes. Given that the peopleExtra table is rather large, is there a more efficient way to do that join (short of restructuring the DB)?

    Read the article

  • Is a Subversion version 'difference' a big deal?

    - by CmdrTallen
    Greetings. I'm using VS2008 and VisualSVN, and it seems the VisualSVN folks are religious about updating the client (and their VisualSVN server) to the latest Subversion release. My problem is that my Subversion server is a hosted server and always seems to lag several versions behind the client I use. Should I be concerned about this version "mismatch"? Is there a general rule of thumb about when to be concerned (like being an entire major release behind)? Is there any sort of mechanism built into the client, the server, or the protocol that prevents something horrible from happening between badly 'paired' clients and servers?

    Read the article

  • Database maintenance, big binary table optimization

    - by jgemedina
    Hi, I have a huge database, around 1 TB in size. Most of the space is consumed by a table which stores images; that table currently has almost 800k rows. Server response time has increased, and I would like to know which techniques I should use or you would recommend. Partitioning? Or how should I reorganize the table? Every row is accessed by the image id column, which is also the table's clustered index. Every two days I reorganize the index and every 7 days I rebuild it, but that doesn't seem to be working. Any suggestions?

    Read the article

  • "Zoom" text to be as big as possible within constraints/box

    - by stolsvik
    First problem: you have 400 pixels of width to go on, and you need to fit some text within that constraint as large as possible (that is, the text should use all of that space). Throw in a new constraint: if the text is just "A", it should not be zoomed above 100 pixels (or some specific font size). Then, a final situation: line breaks. Fit some text in the largest possible way within e.g. 400 x 150 pixels. An obvious way is to simply start at point size 1 and increase it until the text no longer fits, but that would be very crude. This would work for all three problems. Fitting a single line within bounds could be done by writing it at some fixed point size, checking the resulting pixel bounds of the text, and then simply scaling it with a transform (the text then scales properly too; check out TransformUI). Any ideas for other ways to attack this would be greatly appreciated!
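    A binary search over the point size converges far faster than stepping by one. A rough Java/AWT sketch of the single-line case (the 400-pixel budget and the measuring API are assumptions; the poster's toolkit may measure text differently):

        import java.awt.*;
        import java.awt.image.BufferedImage;

        public class FitText {
            // Returns the largest font size (up to maxSize) at which `text`
            // is no wider than `maxWidth` pixels.
            public static float largestFittingSize(String text, Font base, int maxWidth, float maxSize) {
                Graphics2D g = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB).createGraphics();
                float lo = 1f, hi = maxSize;
                while (hi - lo > 0.5f) {
                    float mid = (lo + hi) / 2;
                    FontMetrics fm = g.getFontMetrics(base.deriveFont(mid));
                    if (fm.stringWidth(text) <= maxWidth) lo = mid; else hi = mid;
                }
                g.dispose();
                return lo;   // capping at maxSize covers the "don't blow up 'A'" rule
            }
        }

    The multi-line case can reuse the same search, measuring the wrapped text's height against the 150-pixel bound instead of only its width.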

    Read the article

  • Center big image in smaller div

    - by larin555
    I'm trying to align images in the center of a slider div (I'm adjusting the FlexSlider CSS, by the way). Here's my CSS:

        .flexslider {margin: 0; padding: 0; width: 600px; height: 480px; overflow: hidden; margin-left: auto; margin-right: auto;}
        .flexslider .slides > li {display: none; -webkit-backface-visibility: hidden;} /* Hide the slides before the JS is loaded. Avoids image jumping */
        .flexslider .slides img {width: auto; height: 100%; display: inline-block; text-align: center;}

    Everything works the way I want, except that I want wider images to be centered in the div; right now they are left-aligned. I cannot use a background-image, by the way. Any ideas? On .flexslider .slides img I also tried: margin-left: -50% (not working), margin-left: auto with margin-right: auto (not working), and left: 50% with right: 50% (not working either).

    Read the article

  • An O(1) Sort ~~~

    - by FlySwat
    Before you stone me for being a heretic: there is a sort that proclaims to be O(1), called "Bead Sort" (http://en.wikipedia.org/wiki/Bead_sort). However, that is pure theory; when actually applied I found that it was really O(N * M), which is pretty pathetic. That said, let's list out some of the fastest sorts, and their best-case and worst-case speeds in Big O notation. ~~ FlySwat ~~

    Read the article

  • I know the big picture but can't put it in place

    - by Simbilim
    Hi, I'm interested in web development, and by that I mean the bigger projects like Facebook or Twitter. I know the basics of Java, CSS, PHP and MySQL, and I know there is a lot more out there; I've read about it, but I don't know what its purpose is or how to put it in place. Things like: Scribe, Thrift, Cassandra, Unix/Linux, shell/Perl/Python scripting, PostgreSQL, MongoDB, non-relational NoSQL datastores, the JVM, nginx. I want to know why they need these, how they use them and what the purpose is. What I need is a book like "the technical background of Facebook for dummies" or so. Are there any books or websites that explain this from scratch? Thank you!

    Read the article

  • How do big teams work with a database

    - by Michael Riva
    Let's say I have a team of 20 developers and we are building a project on .NET. In the team, everyone can easily create the tables for the module they are working on. We are thinking of using an ORM; can you tell me how, and which ORM tools work well for a team? Is there any proven way? I'm asking because I have never worked on a team, so I don't know the best practices. What kind of patterns do you use? I really wonder. The team members can write their unit tests and apply the necessary design patterns. What kind of approach do I need in order to manage the team? What kind of ORM tools do we have to use?

    Read the article

  • Search in big text log files

    - by 0xFF
    Hi, let's say you have a game server which creates text log files of gamers' actions, and from time to time you need to look something up in those log files (like investigating a scam or a lost item). Just as an example, you have 100 files and each file is between 20 MB and 50 MB in size - how would you search them quickly? What I have already tried is creating several threads, where each individual thread maps its own file into memory (let's say memory should not be a problem as long as it doesn't exceed 500 MB of RAM) and performs the search there. The result was around 1 second per file: File:a26.log - read in: 0.891, lines: 625282, matches: 78848. Is there a better way to do that? It seems kind of slow to me. Thanks. (Java was used for this case.)
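    One straightforward baseline is a fixed thread pool with one task per file, each task streaming its file line by line; whether this beats memory-mapping depends mostly on the disk, so treat the sketch below as a starting point rather than a benchmark claim:

        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;
        import java.util.List;
        import java.util.concurrent.*;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        public class LogSearch {
            // Counts lines containing `needle` across all files, scanning files in parallel.
            public static long countMatches(List<Path> files, String needle) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
                try {
                    List<Callable<Long>> tasks = files.stream()
                            .map(f -> (Callable<Long>) () -> {
                                try (Stream<String> lines = Files.lines(f, StandardCharsets.ISO_8859_1)) {
                                    return lines.filter(l -> l.contains(needle)).count();
                                }
                            })
                            .collect(Collectors.toList());
                    long total = 0;
                    for (Future<Long> result : pool.invokeAll(tasks)) {
                        total += result.get();
                    }
                    return total;
                } finally {
                    pool.shutdown();
                }
            }
        }

    If the logs are searched repeatedly, building an index (or shipping them into a search engine) will beat any linear scan, since 100 x 50 MB has to come off the disk every time otherwise.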

    Read the article

  • C#: Creating a thumbnail (low quality and big size problem)

    - by ile
        public void CreateThumbnail(Image img1, Photo photo, string targetDirectoryThumbs)
        {
            int newWidth = 700;
            int newHeight = 700;
            double ratio = 0;
            if (img1.Width > img1.Height)
            {
                ratio = img1.Width / (double)img1.Height;
                newHeight = (int)(newHeight / ratio);
            }
            else
            {
                ratio = img1.Height / (double)img1.Width;
                newWidth = (int)(newWidth / ratio);
            }
            Image bmp1 = img1.GetThumbnailImage(newWidth, newHeight, null, IntPtr.Zero);
            bmp1.Save(targetDirectoryThumbs + photo.PhotoID + ".jpg");
            img1.Dispose();
            bmp1.Dispose();
        }

    I've used 700 px so that you can get a better sense of the problem. Here are the original image and the resized one. Any good recommendations? Thanks, Ile
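    GetThumbnailImage can fall back on the tiny embedded EXIF thumbnail and then scale it up, which is a common cause of blurry results; redrawing the full-size image yourself with an explicit high-quality interpolation setting usually looks much better (in .NET that would be Graphics.DrawImage with a bicubic InterpolationMode). The same idea sketched in Java:

        import java.awt.*;
        import java.awt.image.BufferedImage;

        public class HighQualityScale {
            // Scales `src` to the given size using bicubic interpolation
            // instead of a fast low-quality thumbnail helper.
            public static BufferedImage scale(BufferedImage src, int width, int height) {
                BufferedImage dst = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = dst.createGraphics();
                g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                   RenderingHints.VALUE_INTERPOLATION_BICUBIC);
                g.setRenderingHint(RenderingHints.KEY_RENDERING,
                                   RenderingHints.VALUE_RENDER_QUALITY);
                g.drawImage(src, 0, 0, width, height, null);
                g.dispose();
                return dst;
            }
        }

    JPEG encoder quality settings are a separate knob and also worth checking when the output looks worse than the source.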

    Read the article

  • power and modulo on the fly for big numbers

    - by user unknown
    I raise some basis b to the power p and take that modulo m. Let's assume b = 55170 or 55172 and m = 3043839241 (which happens to be the square of 55171). The Linux calculator bc gives these results (we need them as a control):

        echo "p=5606;b=55171;m=b*b;((b-1)^p)%m;((b+1)^p)%m" | bc
        2734550616
        309288627

    Now, calculating 55170^5606 gives a rather large number, but since I have to do a modulo operation anyway, I thought I could avoid using BigInt, because of:

        (a*b) % c == ((a%c) * (b%c)) % c
        e.g. (9*7) % 5 == ((9%5) * (7%5)) % 5  =>  63 % 5 == (4*2) % 5  =>  3 == 8 % 5

    and a^d = a^(b+c) = a^b * a^c, so I can split the exponent d in half, which for even or odd d gives d/2 and d-(d/2); for 8^5 I can calculate 8^2 * 8^3. So my (defective) method, which always reduces by the divisor on the fly, looks like this:

        def powMod (b: Long, pot: Int, mod: Long) : Long = {
          if (pot == 1) b % mod
          else {
            val pot2 = pot/2
            val pm1 = powMod (b, pot, mod)
            val pm2 = powMod (b, pot-pot2, mod)
            (pm1 * pm2) % mod
          }
        }

    Fed with some values:

        powMod (55170, 5606, 3043839241L)
        res2: Long = 1885539617
        powMod (55172, 5606, 3043839241L)
        res4: Long = 309288627

    As we can see, the second result is exactly the same as the one above, but the first one looks quite different. I'm doing a lot of such calculations, and they seem to be accurate as long as they stay in the range of Int, but I can't see the error. Using a BigInt works as well, but is way too slow:

        def calc2 (n: Int, pri: Long) = {
          val p: BigInt = pri
          val p3 = p * p
          val p1 = (p-1).pow (n) % (p3)
          val p2 = (p+1).pow (n) % (p3)
          print ("p1: " + p1 + " p2: " + p2)
        }

        calc2 (5606, 55171)
        p1: 2734550616 p2: 309288627

    (The same result as with bc.) Can somebody see the error in powMod?
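    Two things worth checking, offered as suspicions rather than a verdict: as posted, the first recursive call passes pot instead of pot2, so the exponent never shrinks; and pm1 * pm2 can overflow a 64-bit long, because with m ≈ 3.04e9 the product of two residues can reach about 9.27e18, which is above Long.MAX_VALUE. A Java sketch of square-and-multiply that does only the multiply-and-reduce step in BigInteger, so intermediate values stay safe:

        import java.math.BigInteger;

        public class PowMod {
            // Square-and-multiply, reducing after every step; the intermediate
            // product is computed in BigInteger so (m-1)^2 > Long.MAX_VALUE is harmless.
            public static long powMod(long base, int exp, long mod) {
                BigInteger m = BigInteger.valueOf(mod);
                BigInteger result = BigInteger.ONE;
                BigInteger b = BigInteger.valueOf(base).mod(m);
                while (exp > 0) {
                    if ((exp & 1) == 1) result = result.multiply(b).mod(m);  // odd bit: multiply in
                    b = b.multiply(b).mod(m);                                // square the base
                    exp >>= 1;
                }
                return result.longValueExact();
            }
        }

    BigInteger.modPow does the same thing in one call; the loop is shown only to make the exponent-halving idea from the question explicit.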

    Read the article

  • How do I write a Java text file viewer for big log files

    - by Hannes de Jager
    I am working on a software product with an integrated log file viewer. The problem is, it's slow and unstable for really large files because it reads the whole file into memory when you view a log file. I want to write a new log file viewer that addresses this problem. What are the best practices for writing viewers for large text files? How do editors like Notepad++ and Vim accomplish this? I was thinking of using a buffered bi-directional text stream reader together with Java's TableModel. Am I thinking along the right lines, and are such stream implementations available for Java?
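    A common pattern is to scan the file once, record the byte offset of each line, and let the table model seek and read only the lines that are currently visible. A hedged sketch of that offset index (no caching or charset handling, which a real viewer would need):

        import java.io.*;
        import java.util.ArrayList;
        import java.util.List;

        public class LineIndex {
            private final RandomAccessFile file;
            private final List<Long> offsets = new ArrayList<>();   // byte offset of each line start

            public LineIndex(File f) throws IOException {
                file = new RandomAccessFile(f, "r");
                offsets.add(0L);
                long pos = 0;
                try (InputStream in = new BufferedInputStream(new FileInputStream(f))) {
                    int c;
                    while ((c = in.read()) != -1) {
                        pos++;
                        if (c == '\n') offsets.add(pos);   // next line starts after the newline
                    }
                }
            }

            // May report one extra (empty) line if the file ends with a newline.
            public int lineCount() { return offsets.size(); }

            // Reads a single line on demand; only this line is ever held in memory.
            public String line(int i) throws IOException {
                file.seek(offsets.get(i));
                return file.readLine();
            }
        }

    A TableModel backed by lineCount() and line(i) then lets the UI scroll through gigabyte files while only the visible rows are ever read.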

    Read the article

  • How to find the CRC32 of big files?

    - by Arsheep
    PHP's crc32 takes a string as input, and for a file the code below will work, of course:

        crc32(file_get_contents("myfile.CSV"));

    But if the file gets huge (2 GB) it might raise an out-of-memory fatal error. So is there any way around this to find the checksum of huge files?
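    The trick is to feed the checksum incrementally instead of building one huge string; in PHP, hash_file('crc32b', $path) reads the file in chunks for you. The same streaming idea in Java, as a rough sketch:

        import java.io.*;
        import java.util.zip.CRC32;
        import java.util.zip.CheckedInputStream;

        public class FileCrc {
            // Streams the file through a CRC32 in fixed-size chunks,
            // so memory use stays constant regardless of file size.
            public static long crc32(File f) throws IOException {
                CRC32 crc = new CRC32();
                byte[] buffer = new byte[64 * 1024];
                try (CheckedInputStream in =
                         new CheckedInputStream(new BufferedInputStream(new FileInputStream(f)), crc)) {
                    while (in.read(buffer) != -1) {
                        // reading is enough; CheckedInputStream updates the checksum as bytes pass through
                    }
                }
                return crc.getValue();
            }
        }

    Any checksum or hash can be computed this way as long as the library exposes an incremental "update" style API rather than a single whole-string call.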

    Read the article

  • Refactoring: your way to reduce the code complexity of a big class with big methods

    - by Andrew Florko
    I have a legacy class that is rather complex to maintain:

        class OldClass {
            method1(arg1, arg2) { ... 200 lines of code ... }
            method2(arg1) { ... 200 lines of code ... }
            ...
            method20(arg1, arg2, arg3) { ... 200 lines of code ... }
        }

    The methods are huge, unstructured and repetitive (the developer loved the copy/paste approach). I want to split each method into 3-5 small functions, with one public method and several helpers. What would you suggest? Several ideas come to mind: add several private helper methods per method and group them in a #region (straightforward refactoring); use the Command pattern (one command class per OldClass method, in a separate file); or create a helper static class per method, with one public method and several private helper methods, so the OldClass methods delegate their implementation to the appropriate static class (very similar to commands). Something else? Thank you in advance!
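    Whichever packaging wins (regions, commands, or per-method helper classes), the mechanical step underneath is the same extract-method move: the public method becomes a short narrative that delegates to small named steps. A tiny Java sketch of that target shape (hypothetical names, since the real methods aren't shown):

        class OrderProcessor {
            // The public method reads like an outline; each step is small enough to test on its own.
            public void process(Order order) {
                validate(order);
                applyDiscounts(order);
                persist(order);
            }

            private void validate(Order order) { /* a few lines */ }
            private void applyDiscounts(Order order) { /* a few lines */ }
            private void persist(Order order) { /* a few lines */ }
        }

        // Hypothetical domain type, used only to make the sketch compile.
        class Order { }

    Characterization tests around the existing 200-line methods before extracting anything make the copy/paste duplication much safer to collapse afterwards.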

    Read the article

  • Big-time leaking in an Objective-C category

    - by Daniel Amitay
    I created a custom NSString category which lets me find all the strings between two other strings. I'm now running into the problem that a lot of kilobytes are leaking from my code. Please see the code below:

        #import "MyStringBetween.h"

        @implementation NSString (MyStringBetween)

        -(NSArray *)mystringBetween:(NSString *)aString and:(NSString *)bString;
        {
            NSAutoreleasePool *autoreleasepool = [[NSAutoreleasePool alloc] init];

            NSArray *firstlist = [self componentsSeparatedByString:bString];
            NSMutableArray *finalArray = [[NSMutableArray alloc] init];

            for (int y = 0; y < firstlist.count - 1 ; y++)
            {
                NSString *firstObject = [firstlist objectAtIndex:y];
                NSMutableArray *secondlist = [firstObject componentsSeparatedByString:aString];
                if (secondlist.count > 1) {
                    [finalArray addObject:[secondlist objectAtIndex:secondlist.count - 1]];
                }
            }

            [autoreleasepool release];
            return finalArray;
        }

        @end

    I admit that I'm not super good at releasing objects, but I believed that the NSAutoreleasePool handled things for me. The line that is leaking is: NSMutableArray *secondlist = [firstObject componentsSeparatedByString:aString]; Manually releasing secondlist raises an exception. Thanks in advance!

    Read the article

  • Performance improvement to a big IF clause in a SQL Server function

    - by Miles D
    I am maintaining a function in SQL Server 2005 that, based on an integer input parameter, needs to call different functions, e.g.:

        IF @rule_id = 1
            -- execute function 1
        ELSE IF @rule_id = 2
            -- execute function 2
        ELSE IF @rule_id = 3
        ... etc

    The problem is that there are a fair few rules (about 100), and although the above is fairly readable, its performance isn't great. At the moment it's implemented as a series of IFs that do a binary chop, which is much faster but becomes fairly unpleasant to read and maintain. Any alternative ideas for something that performs well and is fairly maintainable?

    Read the article
