Search Results

Search found 13749 results on 550 pages for 'reason'.

  • Performance Testing Versus Unit Testing

    - by Mystagogue
    I'm reading Osherove's "The Art of Unit Testing," and though I've not yet seen him say anything about performance testing, two thoughts still cross my mind:

      1. Performance tests generally can't be unit tests, because performance tests generally need to run for long periods of time.
      2. Performance tests generally can't be unit tests, because performance issues too often manifest at an integration or system level (or at least the logic of a single unit test needed to re-create the performance of the integration environment would be too involved to be a unit test).

    Particularly for the first reason stated above, I doubt it makes sense for performance tests to be handled by a unit testing framework (such as NUnit). My question is: do my findings/leanings correspond with the thoughts of the community?
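    For context on why the two sit so differently in one framework, here is a hedged C# sketch (PriceCalculator is a hypothetical class under test, and the attributes are NUnit-style): the first test is a conventional unit test, the second is the shape a performance check takes when forced into the same harness:

        using System.Diagnostics;
        using NUnit.Framework;

        [TestFixture]
        public class PriceCalculatorTests
        {
            [Test]  // A real unit test: one unit, milliseconds, no environment.
            public void Total_AddsLineItems()
            {
                var calc = new PriceCalculator();
                Assert.AreEqual(30, calc.Total(new[] { 10, 20 }));
            }

            [Test]  // A "performance test" shoehorned in: long-running, and the
            public void Total_IsFastEnough()  // threshold depends on the host machine.
            {
                var calc = new PriceCalculator();
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < 1000000; i++)
                    calc.Total(new[] { 10, 20 });
                sw.Stop();
                Assert.Less(sw.ElapsedMilliseconds, 2000);  // Brittle: not a property of the code alone.
            }
        }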

  • Audio looping in Objective-C/iPhone

    - by Neurofluxation
    So, I'm finishing up an iPhone app. I have the following code in place to play the file:

        while (![player isPlaying]) {
            totalSoundDuration = soundDuration + 0.5; // Gives a half-second break between sounds
            sleep(totalSoundDuration);  // Don't play the next sound until the previous one has finished
            [player play];              // Play sound
            NSLog(@" \n Sound Finished Playing \n"); // Output to console
        }

    For some reason, the sound plays once, then the code loops and outputs the following:

        Sound Finished Playing
        Sound Finished Playing
        Sound Finished Playing
        etc...

    This just repeats forever. I don't suppose any of you lovely people can fathom what the boggle could be? Cheers!

  • Is there an offline version of Smush.it available?

    - by Jonathon Watney
    Sometimes I use Smush.it via the YSlow Firefox plugin to non-destructively reduce the file size of JPG images. Is there an offline version available that runs on Windows? And if not is there an alternative? The reason I'd like an offline version is that I'd like to optimize images before I deploy them. Currently Smush.it accepts only public facing URLs for images or a web page (via YSlow) and can't access my internal network. That means I have to deploy, optimize, replace images and deploy again. I'd really like to deploy the optimized images on the first deploy. Update: Here's a very similar question.

  • Send custom headers with UIWebView loadRequest

    - by Thomas Clayson
    I want to be able to send some extra headers with my UIWebView loadRequest method. I have tried:

        NSMutableURLRequest *req = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"http://www.reliply.org/tools/requestheaders.php"]];
        [req addValue:@"hello" forHTTPHeaderField:@"aHeader"];
        [self.theWebView loadRequest:req];

    I have also tried subclassing UIWebView and intercepting the - (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType method. In that method I had a block of code which looked like this:

        NSMutableURLRequest *newRequest = [request mutableCopy];
        for (NSString *key in [customHeaders allKeys]) {
            [newRequest setValue:[customHeaders valueForKey:key] forHTTPHeaderField:key];
        }
        [self loadRequest:newRequest];

    But for some unknown reason it causes the web view to not load anything (a blank frame), and the error NSURLErrorCancelled (-999) comes up (all the known fixes don't work for me). So I am at a loss as to what to do. How can I send a custom header along with a UIWebView request? Many thanks!

  • Saving Interface Builder Changes when building in Xcode

    - by Tony Eichelberger
    I know many of you have experienced the scenario where you are banging your head against the wall wondering what is wrong with your app, only to find that you have forgotten to save your Interface Builder changes. Well, this never happens to me, because for some reason Xcode will prompt me to save any changes in Interface Builder whenever I build. A coworker and I are trying to figure out how to change this on his machine, with no success. I must have done something in the very early stages of my iPhone development life to configure this. Does anyone know how to link IB with Xcode so that it will prompt to save changes to IB files during a build?

  • What makes you come back to stackoverflow every day? [closed]

    - by rmarimon
    I know this is not a programming question. Let's try to label it a programming community question so that it doesn't get closed. I've been wondering what makes the programming community so prone to helping others on stackoverflow. Is this something particular to programmers? Do you think lawyers and accountants would help other lawyers and accountants the way we do? What makes you come back to stackoverflow every day? It would be great to have one answer per reason so that we can get a list of reasons. In my case, I come to stackoverflow to ask questions that I can't solve quickly, and to test how good I am at answering questions. So far I've failed miserably at trying to answer questions, but it has helped me understand how little I know.

  • Visual Studio DataSet Designer keep queries

    - by LnDCobra
    In the Visual Studio DataSource Designer (the screen where you have all the UML diagrams, including relations), is there any way to refresh a table and its relations/foreign key constraints without removing and re-adding the whole table? The way I am doing it at the moment is removing the table and adding it again, which adds all the relations and refreshes all fields. Also, if I change a field's data type, is there a way to automatically refresh all the fields in the datasource, again without deleting the table and adding it again? The reason for this is that some of my TableAdapters have quite a number of complex queries attached to them, and when I remove the table, the adapter gets removed as well, including all its queries. I am using Visual Studio 2008 and connecting to a MySQL database.

  • What's the significance of Oct 12 1999?

    - by Portman
    In the SignOut method of System.Web.Security.FormsAuthentication, the ASP.NET team chose to expire the FormsAuth cookie by setting the expiration date to "Oct 12 1999":

        HttpCookie cookie = new HttpCookie(FormsCookieName, str);
        cookie.HttpOnly = true;
        cookie.Path = _FormsCookiePath;
        cookie.Expires = new DateTime(0x7cf, 10, 12); // 0x7cf == 1999 in decimal

    What's the significance of October 12th, 1999? Is it an inside joke, or is there some valid reason to set your cookie expiration to that particular date?

    Edit: The theories below are interesting, but they are just guesses. Since Phil, Scott, and other members of the ASP.NET team are on StackOverflow, I thought it would be fun to offer a bounty. Hopefully someone can track down the original developer and get an authoritative answer.

    Awarded: To Scott Hanselman, for escalating this one all the way to ScottGu. I was really hoping for some sort of super-secret, Illuminati-esque meaning, but it looks like it was just the old "one year ago" trick.

  • Is gwt-graphics 0.9.3 compatible with GWT 2.0.3?

    - by sprasad12
    Hi, I am using gwt-graphics (Vaadin) for one of my projects. Until yesterday I had GWT 1.7.1, and all the drawing objects were working fine. For some reason I had to install Eclipse again, so now I have GWT 2.0.3. I am observing a few problems with graphics now: the text is not getting positioned properly, and if I make any changes to the code concerning drawing objects, they don't show up. Therefore I wanted to know whether gwt-graphics 0.9.3 is compatible with GWT 2.0. Thank you.

  • iPhone 4.0 SDK UIWebView crashes with DOMHTMLElement error

    - by hytgbn
    My app has a UIWebView in which I open a Twitter OAuth page. When I open the OAuth page, it works well. After I sign in, it redirects to another page which has a PIN code, and the app crashes with the log below. Is it a bug in the 4.0 SDK?

        2010-06-14 22:55:11.159 AllFx[1435:2003] -[DOMHTMLElement setHref:]: unrecognized selector sent to instance 0x74e4040
        2010-06-14 22:55:11.162 AllFx[1435:2003] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[DOMHTMLElement setHref:]: unrecognized selector sent to instance 0x74e4040'
        *** Call stack at first throw:
        (
            0   CoreFoundation      0x02b6c919 __exceptionPreprocess + 185
            1   libobjc.A.dylib     0x02cba5de objc_exception_throw + 47
            2   CoreFoundation      0x02b6e42b -[NSObject(NSObject) doesNotRecognizeSelector:] + 187
            3   CoreFoundation      0x02ade116 ___forwarding___ + 966
            4   CoreFoundation      0x02addcd2 _CF_forwarding_prep_0 + 50
            5   DataDetectorsUI     0x0bde8ac4 -[WebTextIterator(DDExtensions) dd_doUrlificationForQuery:forResults:document:DOMWasModified:URLificationBlock:] + 1731
            6   DataDetectorsUI     0x0bde2f09 -[DDOperation _doURLificationOnDocument] + 341
            7   DataDetectorsUI     0x0bddff9c -[DDDetectionController _doURLificationOnWebThreadAndRelease:] + 563
            8   CoreFoundation      0x02add42d __invoking___ + 29
            9   CoreFoundation      0x02add301 -[NSInvocation invoke] + 145
            10  WebCore             0x039fa2b3 _ZL15HandleAPISourcePv + 147
            11  CoreFoundation      0x02b4dd7f __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 15
            12  CoreFoundation      0x02aac2cb __CFRunLoopDoSources0 + 571
            13  CoreFoundation      0x02aab7c6 __CFRunLoopRun + 470
            14  CoreFoundation      0x02aab280 CFRunLoopRunSpecific + 208
            15  CoreFoundation      0x02aab1a1 CFRunLoopRunInMode + 97
            16  WebCore             0x039943c3 _ZL12RunWebThreadPv + 483
            17  libSystem.B.dylib   0x98552a19 _pthread_start + 345
            18  libSystem.B.dylib   0x9855289e thread_start + 34
        )
        terminate called after throwing an instance of 'NSException'

  • Is using ReaderWriterLockSlim a bad idea for long lived objects?

    - by uriDium
    I am trying to track down the reason that an application has periods of bad performance. I think I have linked the bad performance to the points where garbage collection runs for Gen 2. I got a profiling tool (CLR Profiler) and was quite surprised by the results. In my test I was spawning and processing millions of objects. However, the biggest hog of the Gen 2 space comes from something called Threading.ReaderWriterCount, which comes from System.Threading.ReaderWriterLockSlim::InitializeThreadCounts. I know nothing about the inner workings of ReaderWriterLockSlim, but what I take from the reports is that it is okay to have one or two locks for longer-lived objects, but to reach for other locking if you are going to have many smaller objects. Does anyone have any comments or experience with ReaderWriterLockSlim, and/or know what to look for if it seems that GC is killing application performance?
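    To illustrate the pattern the profile points at, here is a minimal C# sketch (SharedCache and Entry are made-up names): a single long-lived ReaderWriterLockSlim guarding a shared structure allocates its per-thread bookkeeping once, whereas one lock per small object multiplies those Threading.ReaderWriterCount records by the object count:

        using System.Collections.Generic;
        using System.Threading;

        public static class SharedCache
        {
            // One long-lived lock for the whole structure: the per-thread
            // tracking exists once, no matter how many entries there are.
            private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();
            private static readonly Dictionary<string, string> Entries = new Dictionary<string, string>();

            public static string Get(string key)
            {
                Lock.EnterReadLock();
                try
                {
                    string value;
                    return Entries.TryGetValue(key, out value) ? value : null;
                }
                finally { Lock.ExitReadLock(); }
            }

            public static void Put(string key, string value)
            {
                Lock.EnterWriteLock();
                try { Entries[key] = value; }
                finally { Lock.ExitWriteLock(); }
            }
        }

        // The shape the profile suggests avoiding: millions of short-lived
        // objects, each carrying its own lock and its own thread counts.
        public class Entry
        {
            private readonly ReaderWriterLockSlim padlock = new ReaderWriterLockSlim();
            public string Value;
        }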

  • Funniest code names for software projects

    - by furtelwart
    Developers are creative. Not because they create wonderful GUIs or prove their sense of art with good color combinations, but because of code names. Every project has a code name, sometimes official, sometimes private (with good reason!). Here are my favourites:

        Android:
            1.6 = Donut
            2.0 = Eclair

        grml (a live distribution based on Debian GNU/Linux; it comes from Austria, hence the German names):
            Hustenstopper (cough stopper)
            Eierspass (egg fun)
            Meilenschwein (mile pig, a pun on "milestone")
            Lackdose-Allergie (lacquer-can allergy, a pun on lactose allergy)
            Hello-Wien (a pun on Halloween, Wien being German for Vienna)

    I would really like to see the funniest code names you have ever heard of. Aren't there any more funny project names?

  • RegEx: Split String at Capitalized Letters and Non-capitalized letters to Create Small Cap Fonts

    - by Otaku
    So I've purposefully stayed away from RegEx, as just looking at it kills me... ugh. But now I need it and could really use some help to do this in .NET (C# or VB.NET). I need to split a string based on capitalization, or the lack thereof. For example:

        I'm not upPercase
            "I"  "'m not up"  "P"  "ercase"

        FBI Agent Winters
            "FBI A"  "gent "  "W"  "inters"

    The reason I'm doing this is to manually create small caps, in which non-capitalized strings will be converted to uppercase and their font size made 80% of the original font size. I'd appreciate any help that could be provided here.
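    One possible approach (a sketch, and the pattern is my suggestion rather than a tested solution): rather than splitting, match alternating runs with Regex.Matches, letting whitespace bind to a surrounding run of capitals so that "FBI A" stays together:

        using System;
        using System.Text.RegularExpressions;

        class SmallCapsRuns
        {
            static void Main()
            {
                // [A-Z]+(?:\s+[A-Z]+)*  - a run of capitals, possibly spanning spaces ("FBI A")
                // [^A-Z]+               - everything else ("'m not up", "gent ")
                Regex runs = new Regex(@"[A-Z]+(?:\s+[A-Z]+)*|[^A-Z]+");

                foreach (Match m in runs.Matches("FBI Agent Winters"))
                    Console.WriteLine("\"" + m.Value + "\"");
                // Prints: "FBI A", "gent ", "W", "inters" (each on its own line)
            }
        }

    Each uppercase run would then keep its font size, while the lowercase runs are uppercased and scaled to 80%.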

  • Use absolute path for easier modify include path in future?

    - by i need help
    config.php is placed at the root level, and this file is included in every page. In config.php:

        <?php
        define('ROOT_DIR', dirname(__FILE__));
        ?>

    Then, in all the other pages in different subdirectories (sub/a.php, sub/sub/b.php), when I want to include a specific file in a specific location, I just need to write:

        include(ROOT_DIR . '/include/functions.php');

    On a Windows server, ROOT_DIR gets the value C:/inetpub/vhosts/domain.com. Is this a good/secure way? It seems that with this approach, when I move b.php to another folder level, I don't need to make any changes to the include paths, which is good for maintenance. Any cons? SEO-wise, or for any other reason? What do you guys think?

  • asp.net mvc json 2 times post to the controller

    - by mazhar kaunain baig
        function onTestComplete(content) {
            var url = '<%= Url.Action("JsonTest", "Organization") %>';
            $.post(url, null, function(data) {
                alert(data["name"]);
                alert(data["ee"]);
            });
        }

        <% using (Ajax.BeginForm("JsonTest", new AjaxOptions() { HttpMethod = "POST", OnComplete = "onTestComplete" })) { %>
            <%= Html.TextBox("name") %><br />
            <input type="submit" />
        <% } %>

    Controller:

        [HttpPost]
        public ActionResult JsonTest()
        {
            var data = new { name = "TestName", ee = "aaa" };
            return Json(data);
        }

    For some reason, when I click the button (my breakpoint is in the controller's JsonTest method), JsonTest is called twice; that's the real problem. I want it to be called once, as usual. Using Ajax.BeginForm("", new AjaxOptions { HttpMethod = "POST", OnComplete = "onTestComplete" }) I am able to call it once, but then it doesn't post the values to the controller.

  • Web site aggregation with twitter widget SSL issue

    - by AB
    Hello! I'm looking for a solution for how to isolate a widget, included via a partial, from the main site. The issue appears when a user accesses the site over https: IE 6 and 7 show a security confirmation dialog (part of the website's resources are not in the secure zone). First of all, I downloaded the Twitter widget to our side, and I also downloaded all the CSS and pictures. Then I patched the widget's JS to point to the downloaded resources. But I still have no luck with the security warning :( I guess the cause of this issue is the AJAX request to Twitter, but I have no idea how to solve it (except perhaps to create some kind of proxy on our side). Thank you for your attention.

  • PHP text output in clauses

    - by arik-so
    Hello. I am working on a PHP project. There, I often use the following syntax to output text inside a clause:

        if ($boolean) { ?>
            output text
        <? } else { ?>
            alternative
        <? }

    On my computer, this works perfectly well. I use XAMPP for Mac OS X. But when I send the files to my coworker, these outputs often do not work, and the parser complains about having reached an unexpected $end of the file. This occurs especially often when there is a tag in the output. We have to replace this means of output with echo. What's the reason for this strange behavior? Is the above-mentioned syntax for outputting text wrong?

  • Converting time to Military

    - by Chris
    Maybe I'm seeing things... In an attempt to turn a date with a format of "mm/dd/yyyy hh:mm:ss PM" into military (24-hour) time, the following replacement of a row value does not seem to take, even though I am sure I have done this before (with column values other than dates). Is there some reason that row["adate"] would not accept a value assigned to it in this case?

        DateTime oos = DateTime.Parse(row["adate"].ToString());
        row["adate"] = oos.Month.ToString() + "/" + oos.Day.ToString() + "/" + oos.Year.ToString()
                     + " " + oos.Hour.ToString() + ":" + oos.Minute.ToString();
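    As an aside on the conversion itself: a .NET custom format string produces the 24-hour rendering in one step, avoiding the manual concatenation. A minimal sketch, assuming a US-style current culture:

        using System;

        class MilitaryTime
        {
            static void Main()
            {
                // "HH" is the 24-hour specifier ("H" drops the leading zero);
                // "hh" is the 12-hour form the source string uses.
                DateTime oos = DateTime.Parse("6/14/2010 10:55:11 PM");
                Console.WriteLine(oos.ToString("M/d/yyyy HH:mm")); // 6/14/2010 22:55
            }
        }

    Whether the assignment "takes" may also depend on the column's DataType: if the column is typed as DateTime, an assigned string is converted back to a DateTime and any display formatting is lost.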

  • struts2 StringLengthFieldValidator annotation not working for empty string

    - by dcp
    Let's say I have this annotation for a struts2 validation: @StringLengthFieldValidator(key = "key14", fieldName = "poNumber", minLength = "1", maxLength = "255", message = "poNumber must be between 1 and 255 characters.") public void setPoNumber(String poNumber) { this.poNumber = poNumber; } The behavior I'm seeing is that if I pass a string that is empty to this setter, (ex. setPoNumber("")) the validator doesn't catch the error. Strings that are over 255 are caught fine. Equally strange is if I change minLength to 2 and pass a string of length 1, it will catch the error as well. But empty string does not seem to be caught when minLength = "1". For this reason, I cannot use this validator. I just wondered if I'm doing something wrong. I'm using struts 2.1.8.1. Thanks for any advice.
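    A commonly suggested companion (an assumption on my part; I have not verified it against Struts 2.1.8.1) is to pair the length check with @RequiredStringValidator, since field validators such as StringLengthFieldValidator typically skip empty values by design. A sketch:

        import com.opensymphony.xwork2.validator.annotations.RequiredStringValidator;
        import com.opensymphony.xwork2.validator.annotations.StringLengthFieldValidator;

        public class PurchaseOrderAction {
            private String poNumber;

            // RequiredStringValidator rejects null/empty input, while the
            // length validator covers the 1..255 range for non-empty values.
            @RequiredStringValidator(message = "poNumber is required.")
            @StringLengthFieldValidator(key = "key14", fieldName = "poNumber",
                    minLength = "1", maxLength = "255",
                    message = "poNumber must be between 1 and 255 characters.")
            public void setPoNumber(String poNumber) {
                this.poNumber = poNumber;
            }

            public String getPoNumber() {
                return poNumber;
            }
        }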

  • UITableView titleForHeaderInSection prints headers to console twice then crashes

    - by joec
    In the method below, titleForHeaderInSection, the NSLog for some reason prints out the headers twice, and then the app crashes in objc_msgSend. I can't understand why this would cause the app to crash. It would seem from research that crashes in objc_msgSend are caused by sending messages to already-freed objects, but is that the case here? My sectionNames array is populated in viewDidLoad.

        - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
            NSString *title = nil;
            title = [sectionNames objectAtIndex:section];
            NSLog(title);
            return title;
        }

    Thanks

  • How do I install both D2007 and D2010?

    - by DoctorBean
    From what I have gathered, one can have both editions of Delphi installed. My concern is that default paths, etc, may get confused especially when installing 3rd party components. The reason why I want to do this is I have some 3rd party components which have not been updated. Although I have the source files, I'm not knowledgeable enough to update them. I have tried compiling it for D2010 and received so many errors that it would be easier to install and use it in D2007. I'm running Windows 7. Thanks in advance.

  • Why does Java read its default settings from the system?

    - by Bozho
    Java reads the locale, timezone, and encoding information (and perhaps more) from the system it is installed on. This often brings bad surprises (it brought me one just yesterday). Say your development and staging servers are set to timezone GMT+2. Then you deploy on a production server set to GMT; a 2-hour shift may not be easy to observe immediately. And although you can pass a TimeZone to your calendars, APIs might be instantiating calendars (or dates) using the default timezone. Now, I know one should be careful with these settings, but they are easy to miss, which makes programs more error-prone. So, why doesn't Java have its own defaults (UTF-8, GMT, en_US; yes, I'm on a non-en_US locale, but having it as the default is fine)? Applications could read the system settings via some API if needed. That way programs would be more predictable. So, what is the reason behind this decision?
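    For what it's worth, the defaults in question can be inspected, and pinned, from code or from the launcher; a minimal sketch of both:

        import java.nio.charset.Charset;
        import java.util.Locale;
        import java.util.TimeZone;

        public class JvmDefaults {
            public static void main(String[] args) {
                // All three are read from the host OS at JVM startup.
                System.out.println(TimeZone.getDefault().getID());
                System.out.println(Locale.getDefault());
                System.out.println(Charset.defaultCharset());

                // They can be pinned early in main() so later code that
                // forgets to pass an explicit zone/locale stays predictable...
                TimeZone.setDefault(TimeZone.getTimeZone("GMT"));
                Locale.setDefault(Locale.US);

                // ...or from the command line, e.g.:
                //   java -Duser.timezone=GMT -Duser.language=en -Duser.country=US JvmDefaults
            }
        }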

  • TortoiseGit - representing branches in a tree - visual issue

    - by richard
    This is a little hard to explain with text, but I'll do my best, and you try to keep up; if something isn't clear at first, don't hesitate to ask, and I'll try to clarify. When TortoiseGit has one branch, it looks approximately like this:

        o
        |
        o   <-- a commit sign
        |
        x

    When I split my work into a new branch, it looks like this:

        o
        |
        o--o
        |
        x

    When I split from the master to another new branch, it looks like this:

        o--o
        |
        o--o
        |
        x

    Is there a way for every new branch that I make and work on to have its own "line"? What I mean is:

        o-----o
        |
        o--o
        |
        x

    That way the branches don't "vertically overlap", and every branch has its own vertical line I can follow (for some reason the current layout looks rather confusing to me). Do any other Git clients for Windows do this differently?

  • Drawbacks of WebFormsMVP?

    - by Tony_Henrich
    While ASP.NET MVC seems to be a viable technology praised by a lot of developers, I can't seem to find enough reasons to devote my energy and time to it. The main reason is that I don't find enough .NET jobs asking for it. Companies still use WebForms, and it works just fine for them. I am not self-employed, so I don't get to choose the technology I like. I would rather use my time to improve my skills in Silverlight, jQuery, JavaScript, SQL, LINQ, etc. Even Photoshop! So I got interested in webformsmvp.com: I get to keep using WebForms while adopting better testing methods. Can anyone who has experience with it tell me what they didn't like about it?

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting-edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris:

      - Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n".
      - The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member.
      - Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2-byte alignment is a poor choice for ELF object archive members: 32-bit objects require 4-byte alignment, and 64-bit objects require 8-byte alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2-byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8-byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course; nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
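    For reference, here is the layout being described, paraphrased from the declarations in /usr/include/ar.h (a sketch of the SVR4 definitions; every field is ASCII text, not binary):

        #define ARMAG   "!<arch>\n"   /* archive magic number */
        #define SARMAG  8             /* its length */
        #define ARFMAG  "`\n"         /* header trailer */

        struct ar_hdr                 /* one per member, followed by the member's data */
        {
            char ar_name[16];         /* name, '/' terminated; 15 usable characters */
            char ar_date[12];         /* modification time, ASCII decimal */
            char ar_uid[6];           /* user id, ASCII decimal */
            char ar_gid[6];           /* group id, ASCII decimal */
            char ar_mode[8];          /* file mode, ASCII octal */
            char ar_size[10];         /* member size in bytes, ASCII decimal */
            char ar_fmag[2];          /* trailer: ARFMAG */
        };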
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides them, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the objects to the archive member that provides them. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little-endian host such as x86.

    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back, examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.

    1. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs.
    As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.

    2. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. Getting around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.

    3. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system, unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64 bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various, often incompatible, ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another.

    In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross-platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

    I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is an SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:

      - These formats use a different magic number than the standard one used by Solaris and other Unix variants.
      - They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.
      - They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file.
      - Their symbol table members are quite similar to those from other systems, though.
      - Their member headers are doubly linked, containing offsets to both the previous and next members.

    Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64 bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
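    To illustrate what "widening" means at the byte level, here is a sketch in C. It assumes the common conventions, which the text above does not spell out in full: the symbol table member stores a count followed by that many member offsets, all big-endian, with 4-byte fields in the 32-bit form and 8-byte fields in the 64-bit form. Tagging the 64-bit variant with the member name /SYM64/ (versus the unnamed "/" member) is the convention used by Solaris 11 and GNU ar, and is an assumption here:

        #include <stddef.h>
        #include <stdint.h>

        /* Decode a big-endian 32-bit integer, regardless of host byte order. */
        static uint32_t get_be32(const unsigned char *p)
        {
            return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                   ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
        }

        /* The 64-bit variant has the identical layout, just wider fields. */
        static uint64_t get_be64(const unsigned char *p)
        {
            return ((uint64_t)get_be32(p) << 32) | get_be32(p + 4);
        }

        /* Given the raw bytes of a symbol table member, return the archive
         * offset of the member providing symbol index i. */
        uint64_t symtab_member_offset(const unsigned char *symtab,
                                      int is64, size_t i)
        {
            return is64 ? get_be64(symtab + 8 + 8 * i)   /* count, then offsets */
                        : get_be32(symtab + 4 + 4 * i);
        }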
    This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used.

    It is not too soon, however, to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
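    To make the proposed extension concrete, here is a sketch (hypothetical helper code, not the actual Solaris implementation) of how a reader could treat the name and size fields uniformly once both use the "/ddd" string table escape:

        #include <stdlib.h>
        #include <string.h>

        /* If an ar_hdr field begins with '/' followed by digits, it is an
         * offset into the archive string table; otherwise the value is
         * stored in line, padded with blanks (names also end with '/').
         * Note: real string table entries are terminated with "/\n",
         * which a full reader would trim; the static buf is a sketch-only
         * shortcut and is not thread-safe. */
        static const char *resolve_field(const char *field, size_t len,
                                         const char *strtab)
        {
            static char buf[64];
            size_t n = len;

            if (field[0] == '/' && field[1] >= '0' && field[1] <= '9')
                return strtab + strtoul(field + 1, NULL, 10);  /* overflowed */

            while (n > 0 && (field[n - 1] == ' ' || field[n - 1] == '/'))
                n--;                                   /* trim padding and '/' */
            memcpy(buf, field, n);
            buf[n] = '\0';
            return buf;
        }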
