Search Results

Search found 21 results on 1 page for 'logarithm'.


  • How to find a binary logarithm very fast? (O(1) at best)

    - by psihodelia
    Is there any very fast method to find the binary logarithm of an integer? For example, given a number x=52656145834278593348959013841835216159447547700274555627155488768, such an algorithm must find y=log(x,2), which is 215. x is always a power of 2. The problem seems really simple: all that is required is to find the position of the most significant 1 bit. There is a well-known method, FloorLog, but it is not very fast, especially for very long multi-word integers. What is the fastest method?
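
    Since x is guaranteed to be a power of two, exactly one bit is set, so a trailing-zero count over the machine words answers the question without scanning individual bits. A minimal C++20 sketch, assuming (purely for illustration) that the big integer is stored as little-endian 64-bit words:

        #include <bit>
        #include <cstdint>
        #include <vector>

        // Binary log of a power of two stored as little-endian 64-bit words.
        // Exactly one bit is set: find the one non-zero word, then its bit.
        int log2_pow2(const std::vector<uint64_t>& words) {
            for (size_t i = 0; i < words.size(); ++i)
                if (words[i] != 0)
                    return (int)(i * 64) + std::countr_zero(words[i]);
            return -1; // no bit set: input was zero, not a power of two
        }

    On x86 the per-word countr_zero typically compiles to a single TZCNT/BSF instruction, so the cost is dominated by locating the non-zero word.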

    Read the article

  • Inaccurate Logarithm in Python

    - by Avihu Turzion
    I work daily with Python 2.4 at my company. I used the versatile logarithm function 'log' from the standard math library, and when I entered log(2**31, 2) it returned 31.000000000000004, which struck me as a bit odd. I did the same thing with other powers of 2, and it worked perfectly. I ran 'log10(2**31) / log10(2)' and I got a round 31.0. I tried running the same original function in Python 3.0.1, assuming that it was fixed in a more advanced version. Why does this happen? Is it possible that there are some inaccuracies in the mathematical functions in Python?
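
    The likely culprit is not Python but double-precision arithmetic: two-argument log(x, b) is evaluated as log(x)/log(b), and the two independently rounded intermediate results can divide to a value infinitesimally off an integer. A small C++ sketch of the same effect (Python's math.log defers to the platform's C library, so the behaviour carries over):

        #include <cmath>
        #include <cstdio>

        int main() {
            double x = 2147483648.0;                        // 2**31, exactly representable
            double via_ratio = std::log(x) / std::log(2.0); // two roundings; may drift
            double via_log2  = std::log2(x);                // one rounding; typically exact here
            std::printf("%.17g\n%.17g\n", via_ratio, via_log2);
        }

    Where an exact answer matters for integer powers of two, avoiding floating point altogether and counting bits is the safe route.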

    Read the article

  • Efficient implementation of natural logarithm (ln) and exponentiation

    - by Donotalo
    Basically, I'm looking for implementations of the log() and exp() functions provided in the C library <math.h>. I'm working with 8-bit microcontrollers (OKI 411 and 431). I need to calculate Mean Kinetic Temperature (MKT). The requirement is that we should be able to calculate MKT as fast as possible and with as little code memory as possible. The compiler comes with log() and exp() functions in <math.h>, but calling either function and linking with the library causes the code size to increase by 5 kilobytes, which will not fit in one of the micros we work with (OKI 411), because our code has already consumed ~12K of the available ~15K code memory. The implementation I'm looking for should not use any other C library functions (like pow(), sqrt(), etc.). This is because all library functions are packed in one library, and even if one function is called, the linker will bring the whole 5K library into code memory.
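
    For this class of constraint, the usual approach is range reduction plus a short polynomial, with no library calls at all. A hedged float-precision sketch, assuming IEEE-754 single precision and normalized positive inputs (the series length would need tuning against MKT's actual accuracy requirement):

        #include <stdint.h>
        #include <string.h>

        /* ln(x) for x > 0: split x = m * 2^k with m in [1,2), so
           ln x = k*ln2 + ln m, and ln m = 2*atanh(t) with t = (m-1)/(m+1). */
        float fast_ln(float x) {
            uint32_t bits;
            memcpy(&bits, &x, sizeof bits);
            int k = (int)((bits >> 23) & 0xFF) - 127;   /* unbiased exponent */
            bits = (bits & 0x007FFFFFu) | 0x3F800000u;  /* force exponent: m in [1,2) */
            float m;
            memcpy(&m, &bits, sizeof m);
            float t  = (m - 1.0f) / (m + 1.0f);
            float t2 = t * t;
            /* 4-term atanh series; good to a few decimal digits on [1,2) */
            float lnm = 2.0f * t * (1.0f + t2 * (1.0f/3 + t2 * (1.0f/5 + t2 * (1.0f/7))));
            return (float)k * 0.6931472f + lnm;
        }

    exp() can be built the same way in reverse: split off an integer multiple of ln 2 into the exponent bits and evaluate a short polynomial on the small remainder.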

    Read the article

  • Implementing Operator Overloading with Logarithms in C++

    - by Jacob Relkin
    Hello my friends, I'm having some issues implementing a logarithm class with operator overloading in C++. My first goal is how to implement the changeBase method; I've been having a tough time wrapping my head around it. My second goal is to be able to perform an operation where the left operand is a double and the right operand is a logarithm object. Here's a snippet of my log class:

        // coefficient: double
        // base: unsigned int
        // number: double
        class _log {
            double coefficient, number;
            unsigned int base;
        public:
            _log() {
                base = rand();
                coefficient = rand();
                number = rand();
            }
            _log operator+ ( const double b ) const;
            _log operator* ( const double b ) const;
            _log operator- ( const double b ) const;
            _log operator/ ( const double b ) const;
            _log operator<< ( const _log &b );
            double getValue() const;
            bool changeBase( unsigned int base );
        };

    You guys are awesome, thank you for your time.
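
    A hedged sketch of both pieces, assuming the class represents the value coefficient * log_base(number): the change-of-base identity log_b(n) = ln(n)/ln(b) means switching from base b to base b' only rescales the coefficient by ln(b')/ln(b), and a double on the left-hand side needs a free (non-member) operator, since a member operator only matches when the _log object is on the left:

        #include <cmath>

        bool _log::changeBase(unsigned int newBase) {
            if (newBase < 2) return false;   // a log base must exceed 1
            // c * log_b(n) == (c * ln(b')/ln(b)) * log_b'(n)
            coefficient *= std::log((double)newBase) / std::log((double)base);
            base = newBase;
            return true;
        }

        // Free function so `2.0 * myLog` compiles; the member operator*
        // only handles `myLog * 2.0`.
        _log operator*(double lhs, const _log& rhs) {
            return rhs * lhs;   // multiplication commutes, so reuse the member
        }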

    Read the article

  • What is O(n log n) or O(n log(log n))

    - by Mark Tomlin
    What does O, if indeed it is an O (as in the letter O) and not the number zero (0), mean? I think the n would be a number, but I'm not sure, as I'm not a 'real' computer programmer, just a hobbyist. And log would be the logarithmic function, but I only know that because smarter people than I have told me this, while never really explaining what a logarithm is. So please, in plain English, explain what this is, and the differences between the two (such as their applications).
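
    For a rough sense of scale (a hedged back-of-the-envelope, not a benchmark): log here is effectively "how many times can n be halved before reaching 1", so log2(1,000,000) is about 20 and log2(log2(1,000,000)) is about 4.3. On a million items, an O(n log n) algorithm therefore does on the order of 20 million basic steps, while an O(n log(log n)) one does roughly 4-5 million; both grow only slightly faster than plain O(n).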

    Read the article

  • What does O(log n) mean exactly?

    - by Andreas Grech
    I am currently learning about Big O Notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally... and the same goes for, for example, quadratic time O(n^2), etc. Even algorithms, such as permutation generators, with O(n!) times, grow by factorials. For example, the following function is O(n) because the algorithm grows in proportion to its input n:

        f(int n) {
            int i;
            for (i = 0; i < n; ++i)
                printf("%d", i);
        }

    Similarly, if there were a nested loop, the time would be O(n^2). But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)? I do know (maybe not in great detail) what a logarithm is, in the sense that log10 100 = 2, but I cannot understand how to identify a function with a logarithmic time.
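
    The signature of O(log n) is a loop that shrinks the remaining work by a constant factor per iteration instead of by one item. A small sketch:

        #include <cstdio>

        // Runs about log2(n) times: each iteration halves what is left.
        void g(int n) {
            int steps = 0;
            for (int i = n; i > 1; i /= 2)
                ++steps;
            std::printf("n = %d -> %d halvings\n", n, steps);
        }

        int main() {
            g(16);       // 4 halvings
            g(1000000);  // 19 halvings (log2(1e6) is about 19.9)
        }

    Binary search does exactly this to its search range, and a complete binary tree of n nodes is only ~log2(n) levels deep for the same reason: each level doubles the node count.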

    Read the article

  • Pruning data for better viewing on loglog graph - Matlab

    - by Geodesic
    Hi guys, just wondering if anyone has any ideas about an issue I'm having. I have a fair amount of data that needs to be displayed on one graph. Two theoretical lines that are bold and solid are displayed on top, then 10 experimental data sets that converge to these lines are graphed, each using a different identifier (e.g. +, o, a square, etc.). These graphs are on a log scale that goes up to 1e6. The first few decades of the graph (< 1e3) look fine, but as all the datasets converge (> 1e3) it's really difficult to see which data is which. There are over 1000 data points per decade, which I can prune linearly to an extent, but if I do this too much the lower end of the graph will suffer in resolution. What I'd like to do is prune logarithmically, strongest at the high end, working back to 0. My question is: how can I get a logarithmically scaled index vector rather than a linear one? My initial assumption was that, as my data is linear, I could just use a linear index to prune, which led to something like this (but for all decades):

        % grab indices per decade
        ind12 = find(y >= 1e1 & y <= 1e2);
        indlow = find(y < 1e2);
        indhigh = find(y > 1e4);
        ind23 = find(y >= 1e2 & y <= 1e3);
        ind34 = find(y >= 1e3 & y <= 1e4);
        % we want ind12 indexes in this decade, find spacing
        tot23 = round(length(ind23)/length(ind12));
        tot34 = round(length(ind34)/length(ind12));
        % grab ones to keep
        ind23keep = ind23(1):tot23:ind23(end);
        ind34keep = ind34(1):tot34:ind34(end);
        indnew = [indlow' ind23keep ind34keep indhigh'];
        loglog(x(indnew), y(indnew));

    But this causes the prune to behave in a jumpy fashion, obviously. Each decade has the number of points that I'd like, but as it's a linear distribution, the points tend to be clumped at the high end of the decade on the log scale. Any ideas on how I can do this?
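
    In Matlab itself, something like unique(round(logspace(0, log10(length(y)), count))) should give a monotone, log-spaced index vector directly. For the arithmetic behind it, a hedged sketch in C++ (names are illustrative; count must be at least 2):

        #include <cmath>
        #include <cstddef>
        #include <vector>

        // ~count strictly increasing 0-based indices into an array of length n,
        // evenly spaced on a log10 axis: dense at the low end, sparse at the
        // high end, which keeps low-decade resolution while thinning the tail.
        std::vector<std::size_t> logspaced(std::size_t n, std::size_t count) {
            std::vector<std::size_t> idx;
            double hi = std::log10((double)n);
            for (std::size_t i = 0; i < count; ++i) {
                double e = hi * (double)i / (double)(count - 1);
                std::size_t j = (std::size_t)std::llround(std::pow(10.0, e)) - 1;
                if (idx.empty() || j > idx.back())  // drop duplicates after rounding
                    idx.push_back(j);
            }
            return idx;
        }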

    Read the article

  • Difference between Logarithmic and Uniform cost criteria

    - by Marthin
    I've got some problem understanding the difference between the logarithmic (LCC) and uniform (UCC) cost criteria, and also how to use them in calculations. Could someone please explain the difference between the two and perhaps show how to calculate the complexity for a problem like A+B*C? (Yes, this is part of an assignment =) ) Thx for any help! /Marthin
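
    A hedged worked example, using the common textbook definitions: under the uniform cost criterion (UCC) every machine operation costs 1, so evaluating A+B*C costs 2 (one multiplication, one addition) no matter how large the numbers are. Under the logarithmic cost criterion (LCC) an operation costs roughly the number of bits in its operands, i.e. the sum of their logarithms. If A, B and C are all about 1000 (≈ 10 bits each), B*C costs about log B + log C ≈ 20 and produces a ≈ 20-bit result, and the addition then costs about log A + log(B*C) ≈ 30, for a total near 50 against UCC's 2. LCC matters exactly when operand sizes grow during the computation, as in big-integer arithmetic.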

    Read the article

  • How can I compare the performance of log() and fp division in C++?

    - by Ventzi Zhechev
    Hi, I'm using a log-based class in C++ to store very small floating-point values (as the values otherwise go beyond the range of double). As I'm performing a large number of multiplications, this has the added benefit of converting the multiplications to sums. However, at a certain point in my algorithm, I need to divide a standard double value by an integer value and then do a *= to a log-based value. I have overloaded the *= operator for my log-based class; the right-hand-side value is first converted to a log-based value by running log() and then added to the left-hand-side value. Thus the operations actually performed are floating-point division, log(), and floating-point summation. My question is whether it would be faster to first convert the denominator to a log-based value, which would replace the floating-point division with floating-point subtraction, yielding the following chain of operations: twice log(), floating-point subtraction, floating-point summation. In the end, this boils down to whether floating-point division is faster or slower than log(). I suspect that a common answer would be that this is compiler- and architecture-dependent, so I'll say that I use gcc 4.2 from Apple on darwin 10.3.0. Still, I hope to get an answer with a general remark on the speed of these two operations and/or an idea of how to measure the difference myself, as there might be more going on here, e.g. executing the constructors that do the type conversion, etc. Cheers!
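
    A minimal measurement sketch (deliberately crude: it includes loop overhead, ignores warm-up and the conversion constructors, and keeps a running sum only so the optimizer cannot delete the loops); clock() is used so it builds even on older compilers like gcc 4.2:

        #include <cmath>
        #include <cstdio>
        #include <ctime>

        int main() {
            const int n = 10000000;
            double d = 3.14159, acc = 0.0;

            std::clock_t t0 = std::clock();
            for (int i = 1; i <= n; ++i) acc += (double)i / d;
            std::clock_t t1 = std::clock();
            for (int i = 1; i <= n; ++i) acc += std::log((double)i);
            std::clock_t t2 = std::clock();

            std::printf("acc=%g (forces the work to happen)\n", acc);
            std::printf("div: %.1f ns/op\n", 1e9 * (t1 - t0) / CLOCKS_PER_SEC / n);
            std::printf("log: %.1f ns/op\n", 1e9 * (t2 - t1) / CLOCKS_PER_SEC / n);
        }

    On most hardware a double divide is one slowish instruction while log() is a many-instruction library call, so division usually wins; but as the poster suspects, the surrounding conversions can dominate either, so measuring the real code path is the only reliable answer.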

    Read the article

  • Extreme Optimization – Mathematical Constants and Basic Functions

    - by JoshReuben
    Machine constants: The MachineConstants class contains constants for floating-point arithmetic, because the CLS System.Single and Double floating-point types do not follow the standard conventions and are useless. Machine constants for the Double type:

    - machine precision: Epsilon, SqrtEpsilon, CubeRootEpsilon
    - largest possible value: MaxDouble, SqrtMaxDouble, LogMaxDouble
    - smallest double-precision floating-point number greater than zero: MinDouble, SqrtMinDouble, LogMinDouble

    A similar set of constants is available for the Single datatype.

    Mathematical constants: The Constants class contains static fields for many mathematical constants and common expressions involving small integers - if you are doing thousands of iterations, you wouldn't want to calculate OneOverSqrtTwoPi, Sqrt17 or Log17!

    - Fundamental constants: E - the base of the natural logarithm (2.718...); EulersConstant (0.577...); GoldenRatio (1.618...); Pi - the ratio between the circumference and the diameter of a circle (3.1415...)
    - Expressions involving fundamental constants: TwoPi, PiOverTwo, PiOverFour, LogTwoPi, PiSquared, SqrPi, SqrtTwoPi, OneOverSqrtPi, OneOverSqrtTwoPi
    - Square roots of small integers: Sqrt2, Sqrt3, Sqrt5, Sqrt7, Sqrt17
    - Logarithms of small integers: Log2, Log3, Log10, Log17, InvLog10

    Elementary functions: The IterativeAlgorithm<T> class in the Extreme.Mathematics namespace defines many elementary functions that are missing from System.Math.

    - Hyperbolic trig functions: Cosh, Coth, Csch, Sinh, Sech, Tanh
    - Inverse hyperbolic trig functions: Acosh, Acoth, Acsch, Asinh, Asech, Atanh
    - Exponential, logarithmic and miscellaneous functions: ExpMinus1 - the exponential function minus one, e^x - 1; Hypot - the hypotenuse of a right-angled triangle with specified sides; LambertW - Lambert's W function, the (real) solution W of x = W e^W; Log1PlusX - the natural logarithm of 1+x; Pow - a number raised to an integer power.

    Read the article

  • Encode complex number as RGB pixel

    - by Vi
    How is it best to encode a complex number into an RGB pixel and vice versa? Probably (the logarithm of) the absolute value goes to brightness and the argument goes to hue. Desaturated pixels should receive a randomized argument in the reverse transformation. Something like:

        0    -> (0,0,0)
        1    -> (255,0,0)
        -1   -> (0,255,255)
        0.5  -> (128,0,0)
        i    -> (255,255,0)
        -i   -> (255,0,255)

    and in reverse:

        (0,0,0)       -> 0
        (255,255,255) -> e^(i * random)
        (128,128,128) -> 0.5 * e^(i * random)
        (0,128,128)   -> -0.5

    Are there ready-made formulas for that?
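
    One concrete convention, sketched below with hedged choices (the log-compression cap of 1000 and the hue orientation are arbitrary): treat the pixel as HSV with full saturation, send arg(z) to hue and log(1+|z|) to value, then apply the standard HSV-to-RGB sector formula:

        #include <cmath>
        #include <complex>
        #include <cstdint>

        struct RGB { uint8_t r, g, b; };

        // hue <- arg(z), value <- log-compressed |z|, saturation fixed at 1.
        // |z| = 1000 (the cap) maps to full brightness.
        RGB encode(std::complex<double> z) {
            const double kPi = 3.141592653589793;
            const double cap = 1000.0;
            double v = std::log1p(std::abs(z)) / std::log1p(cap);
            if (v > 1.0) v = 1.0;
            double h = std::arg(z) / (2.0 * kPi);   // 1 -> red, -1 -> cyan
            if (h < 0.0) h += 1.0;
            h *= 6.0;                               // hue sector in [0,6)
            double c = v;                           // chroma = sat * value = v
            double x = c * (1.0 - std::fabs(std::fmod(h, 2.0) - 1.0));
            double r = 0, g = 0, b = 0;
            switch ((int)h) {
                case 0:  r = c; g = x; break;       // red    -> yellow
                case 1:  r = x; g = c; break;       // yellow -> green
                case 2:  g = c; b = x; break;       // green  -> cyan
                case 3:  g = x; b = c; break;       // cyan   -> blue
                case 4:  r = x; b = c; break;       // blue   -> magenta
                default: r = c; b = x; break;       // magenta-> red
            }
            return { (uint8_t)std::lround(r * 255), (uint8_t)std::lround(g * 255),
                     (uint8_t)std::lround(b * 255) };
        }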

    Read the article

  • Is it OK to put a standard, pure C header #include directive inside a namespace?

    - by mic_e
    I've got a project with a class log in the global namespace (::log). So, naturally, after #include <cmath>, the compiler gives an error message each time I try to instantiate an object of my log class, because <cmath> pollutes the global namespace with lots of three-letter functions, one of them being the logarithm function log(). So there are three possible solutions, each having its own ugly side-effects.

    1. Move the log class to its own namespace and always access it with its fully qualified name. I really want to avoid this because the logger should be as convenient as possible to use.
    2. Write a mathwrapper.cpp file which is the only file in the project that includes <cmath>, and which makes all the required <cmath> functions available through wrappers in a namespace math. I don't want to use this approach because I would have to write a wrapper for every single required math function, and it would add an additional call penalty (cancelled out partially by the -flto compiler flag).
    3. The solution I'm currently considering: replace #include <cmath> with

            namespace math {
            #include "math.h"
            }

       and then call the logarithm function via math::log(). I have tried it out and it does, indeed, compile, link and run as expected. It does, however, have multiple downsides: it's (obviously) impossible to use <cmath>, because the <cmath> code accesses the functions by their fully qualified names, and using math.h is deprecated in C++. Also, I've got a really, really bad feeling about it, like I'm gonna get attacked and eaten alive by raptors.

    So my question is: Is there any recommendation/convention/etc. that forbids putting include directives in namespaces? Could anything go wrong with different C standard library implementations (I use glibc), different compilers (I use g++ 4.7, -std=c++11), or linking? Have you ever tried doing this? Are there any alternative ways to banish the math functions from the global namespace? I've found several similar questions on Stack Overflow, but most were about including other C++ headers, which obviously is a bad idea, and those that weren't made contradictory statements about linking behaviour for C libraries. Also, would it be beneficial to additionally put the #include <math.h> inside extern "C" {}?
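
    For what it's worth, option 2 can be smaller than it sounds if only the functions actually used get wrappers; the header never includes <cmath>, so ::log stays clean in every other translation unit. A minimal sketch (file names as placeholders):

        // mathwrapper.h - no <cmath> here, so it never pollutes ::
        namespace math {
            double log(double x);
            double exp(double x);
            // ...one declaration per function the project actually uses
        }

        // mathwrapper.cpp - the single file that sees <cmath>
        #include <cmath>
        #include "mathwrapper.h"
        double math::log(double x) { return std::log(x); }
        double math::exp(double x) { return std::exp(x); }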

    Read the article

  • USB transfer speed "logarithmically" decreases. Why and can it be improved?

    - by starship
    I have an external hard drive. Just today I tried to copy a bigger file (a film of ~230 MB), and at first it rushed up until ~70%, then started decreasing:

    - At first it ran at around 56 MB/s.
    - Then it rapidly dropped to 23 MB/s (the transfer was 70% complete).
    - Then it slowly decreased to around 2 MB/s (the file was ~90% complete).
    - When the transfer finished it was slightly above 1.5 MB/s.

    To describe it graphically: if you drew a curve of the decrease, it would probably resemble the graph of a logarithm function. So, what I'm really asking is: "Why does this happen?" and "Is there a way around it?" Thank you!

    Read the article

  • Encode complex number as RGB pixel and back

    - by Vi
    How is it best to encode a complex number into an RGB pixel and vice versa? Probably (the logarithm of) the absolute value goes to brightness and the argument goes to hue. Desaturated pixels should receive a randomized argument in the reverse transformation. Something like:

        0    -> (0,0,0)
        1    -> (255,0,0)
        -1   -> (0,255,255)
        0.5  -> (128,0,0)
        i    -> (255,255,0)
        -i   -> (255,0,255)

    and in reverse:

        (0,0,0)       -> 0
        (255,255,255) -> e^(i * random)
        (128,128,128) -> 0.5 * e^(i * random)
        (0,128,128)   -> -0.5

    Are there ready-made formulas for that?

    Edit: Looks like I just need to convert RGB to HSB and back.

    Edit 2: An existing RGB -> HSV converter fragment:

        if (hsv.sat == 0) {
            hsv.hue = 0; // !
            return hsv;
        }

    I don't want 0, I want random. And not just when hsv.sat == 0, but whenever it is lower than it should be ("should be" meaning maximum saturation - the saturation produced by the transformation from a complex number).
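
    Complementing the encoder sketched under the earlier entry, a hedged sketch of the decode direction, including the randomized argument the poster asks for: a fully desaturated pixel carries no phase information, so it gets a uniformly random argument (partially grey pixels could likewise blend in noise proportional to the missing saturation):

        #include <algorithm>
        #include <cmath>
        #include <complex>
        #include <cstdint>
        #include <random>

        struct RGB { uint8_t r, g, b; };

        // Inverse of the encoder: value -> magnitude (undoing the log
        // compression, cap = 1000), hue -> argument.
        std::complex<double> decode(RGB p) {
            const double kTwoPi = 2.0 * 3.141592653589793;
            static std::mt19937 rng{ std::random_device{}() };
            std::uniform_real_distribution<double> uni(0.0, kTwoPi);
            double r = p.r / 255.0, g = p.g / 255.0, b = p.b / 255.0;
            double mx = std::max({ r, g, b });
            double mn = std::min({ r, g, b });
            double mag = std::expm1(mx * std::log1p(1000.0)); // undo log1p scaling
            if (mx == mn)                  // desaturated: the phase is gone
                return std::polar(mag, uni(rng));
            double d = mx - mn, h;         // standard RGB -> HSV hue, in [0,1)
            if (mx == r)      h = std::fmod((g - b) / d + 6.0, 6.0) / 6.0;
            else if (mx == g) h = ((b - r) / d + 2.0) / 6.0;
            else              h = ((r - g) / d + 4.0) / 6.0;
            return std::polar(mag, h * kTwoPi);
        }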

    Read the article

  • SQL SERVER – Fix : Error 3623 – An invalid floating point operation occurred

    - by pinaldave
    Going back in time, I always had a problem with mathematics. It was a great subject and I loved it a lot, but I only mastered it after a lot of practice. I learned that mathematics problems should be addressed systematically and that there is no trick to it; with that approach I learned to solve any problem. Recently one of my readers sent me an email with the title "Mathematics problem - please help!" and I was a bit scared. I was good at mathematics but not the best. When I opened the email I was relieved, as it was a mathematics problem with SQL Server. My friend received the following error while working with SQL Server:

        Msg 3623, Level 16, State 1, Line 1
        An invalid floating point operation occurred.

    The reason for the error is simply that an invalid usage of a mathematical function was attempted. Let me give you a few examples:

        SELECT SQRT(-5);
        SELECT ACOS(-3);
        SELECT LOG(-9);

    If you run any of the above functions, they will give you an error about an invalid floating point operation. Honestly, there is no workaround except passing the function appropriate values. SQRT of a negative number gives a result that is not a real number, which is not supported at this point in time, and LOG of a negative number is not possible (because the logarithm is the inverse of an exponential function, and an exponential function is never negative). When I sent the above reply to my friend, he understood that he was passing an incorrect value to the function. As mentioned earlier, the only way to fix this issue is to find the incorrect value and avoid passing it to the function. Every mathematics function is different, and there is not a single solution for identifying erroneous values passed. If you are facing this error and are not able to figure out the solution, post a comment and I will do my best to figure it out. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Finding text orientation in image (angle for rotation)

    - by maximus
    There is an image captured by a camera, and I need to find the angle of the text in order to rotate it and make the image better for OCR. I know that the Fourier transform can be used for this purpose. My question is: does it really give good results, or might something different be better? Can you tell me if there is a good method for this purpose? I am afraid that not every image containing text will give a good result with the Fourier transform method. Actually, if I do it as written here: link text (see the part related to an example of a text image) - calculating the logarithm of the magnitude of the Fourier transform of the text image and then thresholding it - I get those points and can calculate a line approximately passing through them, and after getting the line calculate the angle, and then apply an affine transform. But what if I do not get a good result every time with this method and apply a false transform? Any ideas, please, on how to judge whether the result is correct or not, or whether another method is better? The binary image can contain noise, and even if there is not much of it, the angle found as a result can be inaccurate.

    Read the article

  • Using JavaCC to infer semantics from a Composite tree

    - by Skice
    Hi all, I am programming (in Java) a very limited symbolic calculus library that manages polynomials, exponentials and expolinomials (sums of elements like "x^n * e^(c x)"). I want the library to be extensible with new analytic forms (trigonometric, etc.) or new kinds of operations (logarithm, domain transformations, etc.), so a Composite pattern that represents the syntactic structure of an expression, together with a bunch of Visitors for the operations, does the job quite well. My problem arises when I try to implement operations that depend on the semantics more than on the syntax of the expression (like integrals, for instance: there are a lot of resolution methods for specific classes of functions, but these same classes can be represented with more than a single syntax). So I thought I needed something to "parse" the Composite tree to infer its semantics in order to invoke the right integration method (if any). Someone pointed me to JavaCC, but all the examples I've seen deal only with string parsing, so I don't know if I'm digging in the right direction. Any suggestions? (I hope I've been clear enough!)

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #035

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane.

    2007

    Row Overflow Data Explanation - In SQL Server 2005 one table row can contain more than one varchar(8000) field. One more thing: the exclusions have exclusions too - the 8000-byte limit on an individual column's max width does not apply to varchar(max), nvarchar(max), varbinary(max), text, image or xml data type columns.

    Comparison: Index Fragmentation, Index De-Fragmentation, Index Rebuild - SQL SERVER 2000 and SQL SERVER 2005 - An old but gold article. It talks about lots of concepts related to indexes and the differences between the earlier version and the newer version. I strongly suggest that everyone read this article just to understand how SQL Server has moved forward with the technology.

    Improvements in TempDB - SQL Server 2005 came with quite a lot of improvements, and this blog post describes and explains them. If you ask me what my most favorite article from my early career is, I must point to this one, as when I wrote it I personally learned a lot of new things.

    Recompile All The Stored Procedures on a Specific Table - I prefer to recompile all the stored procedures on a table which has faced mass inserts or updates. sp_recompile marks stored procedures to recompile the next time they execute. This blog post explains the same with the help of a script.

    2008

    SQLAuthority Download - SQL Server Cheatsheet - You can download and print this cheat sheet and use it for your personal reference. If you have any suggestions, please let me know and I will see if I can update this SQL Server cheat sheet.

    Difference Between DBMS and RDBMS - What is the difference between DBMS and RDBMS? DBMS - Data Base Management System; RDBMS - Relational Data Base Management System, or Relational DBMS.

    High Availability - Hot Add Memory - Hot Add CPU and Hot Add Memory are extremely interesting features of SQL Server; however, personally I have not witnessed them heavily used. These features also have a few restrictions. I blogged about them in detail.

    2009

    Delete Duplicate Rows - I have demonstrated in this blog post how one can identify and delete duplicate rows.

    Interesting Observation of Logon Trigger On All Servers - Solution - The question I put forth in my previous article was: in a single login, why does the trigger fire multiple times when it should fire only once? I received numerous answers in the thread as well as in my MVP private newsgroup. Now, let us discuss the answer: it happens because multiple SQL Server services are running and intellisense is turned on. The blog post demonstrates the same with the help of SQL scripts.

    Management Studio New Features - I selected my favorite 5 features and blogged about them: IntelliSense for query editing, multi-server queries, query editor regions, Object Explorer enhancements, and Activity Monitor.

    Maximum Number of Indexes per Table - One of the questions I asked in my user group was: what is the maximum number of indexes per table? I received lots of answers to this question, but only two were correct. Let us now take a look at them in this blog post.

    2010

    Default Statistics on Column - Automatic Statistics on Column - The truth is, statistics can exist on a table even though there is no index on it. If you have the auto-create and/or auto-update statistics features turned on for a SQL Server database, statistics will be automatically created on a column based on a few conditions. Please read my previously posted article, SQL SERVER - When are Statistics Updated - What triggers Statistics to Update, for the specific conditions under which statistics are updated.

    2011

    T-SQL Scripts to Find Maximum between Two Numbers - In this blog post two different scripts are listed which demonstrate ways to find the maximum of two numbers. I need your help: which one of the scripts do you think is the most accurate way to find the maximum?

    Find Details for Statistics of Whole Database - DMV - T-SQL Script - I was recently asked whether there is a single script which can provide all the necessary details about statistics for any database. This question made me write the following script. I was initially planning to use the sp_helpstats command, but I remembered that it is marked to be deprecated in the future.

    2012

    Introduction to Function SIGN - SIGN is a very fundamental function. It returns the value 1, -1 or 0: if your value is negative it returns -1, and if it is positive it returns +1. Let us start with a simple small example.

    Template Browser - A Very Important and Useful Feature of SSMS - Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are also available for Analysis Services. The template scripts contain parameters to help you customize the code; you can use the Replace Template Parameters dialog box to insert values into the script.

    An invalid floating point operation occurred - If you run any of the functions discussed there, they will give you an error about an invalid floating point operation. Honestly, there is no workaround except passing the function appropriate values: SQRT of a negative number gives a result that is not a real number, which is not supported, and LOG of a negative number is not possible (because the logarithm is the inverse of an exponential function, and an exponential function is never negative).

    Validating Spatial Objects with the IsValidDetailed Function - SQL Server 2012 introduced the new function IsValidDetailed(). This function has made my life very easy. In simple words, it checks whether the spatial object passed is valid or not. If it is valid, it says so; if not, it returns the answer that it is not valid along with the reason, which makes it very easy to debug the issue and make the necessary correction.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • The best cross platform (portable) arbitrary precision math library

    - by Siu Ching Pong - Asuka Kenji
    Dear ninjas / hackers / wizards, I'm looking for a good arbitrary-precision math library in C or C++. Could you please give me some advice / suggestions? The primary requirements:

    1. It MUST handle arbitrarily big integers (my primary interest is in integers). In case you don't know what arbitrarily big means, imagine something like 100000! (the factorial of 100000).
    2. The precision MUST NOT NEED to be specified during library initialization / object creation. The precision should ONLY be constrained by the available resources of the system.
    3. It SHOULD utilize the full power of the platform and handle "small" numbers natively. That means on a 64-bit platform, calculating 2^33 + 2^32 should use the available 64-bit CPU instructions. The library SHOULD NOT calculate this the same way as it does 2^66 + 2^65 on the same platform.
    4. It MUST handle addition (+), subtraction (-), multiplication (*), integer division (/), remainder (%), power (**), increment (++), decrement (--), gcd(), factorial(), and other common integer arithmetic calculations efficiently. The ability to handle functions like sqrt() (square root) and log() (logarithm) that do not produce integer results is a plus. The ability to handle symbolic computations is even better.

    Here is what I have found so far:

    - Java's BigInteger and BigDecimal classes: I have been using these so far. I have read the source code, but I don't understand the math underneath. It may be based on theories / algorithms that I have never learnt.
    - The built-in integer types, or the core libraries, of bc / Python / Ruby / Haskell / Lisp / Erlang / OCaml / PHP / some other languages: I have used some of these, but I have no idea which library they are using or which kind of implementation they use.

    What I already know:

    - Using a char as a decimal digit and a char* as a decimal string, and doing calculations on the digits using a for-loop.
    - Using an int (or a long int, or a long long) as a basic "unit" and an array of them as an arbitrarily long integer, and doing calculations on the elements using a for-loop.
    - Booth's multiplication algorithm.

    What I don't know:

    - Printing the binary array mentioned above in decimal without using naive methods. Example of a naive method: (1) add the bits from the lowest to the highest: 1, 2, 4, 8, 16, 32, ...; (2) use a char* string as mentioned above to store the intermediate decimal results.

    What I would appreciate:

    - Good comparisons of GMP, MPFR, decNumber (or other libraries that are good in your opinion).
    - Good suggestions on books / articles that I should read. For example, an illustration with figures of how a non-naive arbitrarily-long binary-to-decimal conversion algorithm works would be good.
    - Any help.

    Please DO NOT answer this question if you think using a double (or a long double, or a long long double) can solve this problem easily - if you do think so, you don't understand the issue under discussion - or if you have no experience with arbitrary-precision mathematics. Thank you in advance! Asuka Kenji
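
    As a concrete picture of the "array of machine units" representation described above, here is a hedged minimal sketch of schoolbook addition over little-endian 32-bit limbs (production libraries such as GMP implement the same loop with carefully tuned assembly):

        #include <cstdint>
        #include <vector>

        // a + b, where each vector is one big integer: a[0] is the least
        // significant 32-bit limb. The carry rides in a 64-bit accumulator.
        std::vector<uint32_t> add(const std::vector<uint32_t>& a,
                                  const std::vector<uint32_t>& b) {
            std::vector<uint32_t> out;
            uint64_t carry = 0;
            for (size_t i = 0; i < a.size() || i < b.size() || carry; ++i) {
                uint64_t sum = carry;
                if (i < a.size()) sum += a[i];
                if (i < b.size()) sum += b[i];
                out.push_back((uint32_t)sum);  // low 32 bits become this limb
                carry = sum >> 32;             // high bits carry into the next
            }
            return out;
        }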

    Read the article

  • How do I maximize code coverage?

    - by naivedeveloper
    Hey all, the following is a snippet of code taken from the Unix ptx utility. I'm attempting to maximize code coverage on this utility, but I am unable to reach the indicated portion of code. Admittedly, I'm not as strong in my C skills as I used to be. The portion of code is indicated with comments, but it is towards the bottom of the block.

        if (used_length == allocated_length)
          {
            allocated_length += (1 << SWALLOW_REALLOC_LOG);
            block->start = (char *) xrealloc (block->start, allocated_length);
          }

    Any help interpreting the indicated portion in order to cover that block would be greatly appreciated.

        /* Reallocation step when swallowing non regular files.  The value is not
           the actual reallocation step, but its base two logarithm.  */
        #define SWALLOW_REALLOC_LOG 12

        static void
        swallow_file_in_memory (const char *file_name, BLOCK *block)
        {
          int file_handle;          /* file descriptor number */
          struct stat stat_block;   /* stat block for file */
          size_t allocated_length;  /* allocated length of memory buffer */
          size_t used_length;       /* used length in memory buffer */
          int read_length;          /* number of character gotten on last read */

          /* As special cases, a file name which is NULL or "-" indicates standard
             input, which is already opened.  In all other cases, open the file
             from its name.  */

          bool using_stdin = !file_name || !*file_name || strcmp (file_name, "-") == 0;
          if (using_stdin)
            file_handle = STDIN_FILENO;
          else if ((file_handle = open (file_name, O_RDONLY)) < 0)
            error (EXIT_FAILURE, errno, "%s", file_name);

          /* If the file is a plain, regular file, allocate the memory buffer all
             at once and swallow the file in one blow.  In other cases, read the
             file repeatedly in smaller chunks until we have it all, reallocating
             memory once in a while, as we go.  */

          if (fstat (file_handle, &stat_block) < 0)
            error (EXIT_FAILURE, errno, "%s", file_name);

          if (S_ISREG (stat_block.st_mode))
            {
              size_t in_memory_size;

              block->start = (char *) xmalloc ((size_t) stat_block.st_size);

              if ((in_memory_size = read (file_handle, block->start,
                                          (size_t) stat_block.st_size))
                  != stat_block.st_size)
                {
                  error (EXIT_FAILURE, errno, "%s", file_name);
                }
              block->end = block->start + in_memory_size;
            }
          else
            {
              block->start = (char *) xmalloc ((size_t) 1 << SWALLOW_REALLOC_LOG);
              used_length = 0;
              allocated_length = (1 << SWALLOW_REALLOC_LOG);

              while (read_length = read (file_handle,
                                         block->start + used_length,
                                         allocated_length - used_length),
                     read_length > 0)
                {
                  used_length += read_length;

                  /* Cannot cover from this point...  */
                  if (used_length == allocated_length)
                    {
                      allocated_length += (1 << SWALLOW_REALLOC_LOG);
                      block->start = (char *) xrealloc (block->start,
                                                        allocated_length);
                    }
                  /* ...to this point.  */
                }

              if (read_length < 0)
                error (EXIT_FAILURE, errno, "%s", file_name);

              block->end = block->start + used_length;
            }

          /* Close the file, but only if it was not the standard input.  */
          if (! using_stdin && close (file_handle) != 0)
            error (EXIT_FAILURE, errno, "%s", file_name);
        }
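
    A hedged reading of why that branch is hard to reach: the else branch executes only for inputs that are not regular files (stdin, pipes, ttys), and the inner if fires only when such an input completely fills the current buffer, whose initial size is 1 << SWALLOW_REALLOC_LOG = 4096 bytes. Regular files never get there, because the S_ISREG path slurps them with a single read after fstat. So covering the block should only require piping more than 4 KiB into the program instead of naming a file on the command line, e.g. something like `yes | head -c 8192 | ptx -` (the exact ptx invocation is a guess; any non-regular input over 4096 bytes should do).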

    Read the article
