Search Results

Search found 46487 results on 1860 pages for 'reading files'.


  • How to focus on one topic? [closed]

    - by Brian
    I have a huge problem while reading computer books. Every couple of pages I'll end up googling something I want to learn more about, but then I'll find something on that page that I'll want to learn more about and google that (sometimes programming related, sometimes hardware related). Normally, after wasting around 3 hours going into different subjects, I'll return to the original text only to repeat the process a few pages later. Any advice for sticking to one subject and learning it in depth? I have tons of programming books I've read half-way through, since I'll become interested in other languages/topics (not that I'm not interested in the books I've started). Also, what would be worth focusing on in depth? I've gone into Python in the most depth, but for classes I'm learning Java and assembly (ARM and Motorola 68000). Also, I've taken a class on C++. Lately I've been spending most of my time learning about Linux instead of programming, though. I'm not sure what would be worth focusing on the most to get a job. In other words, how can you focus on one topic and not let curiosity about everything else get in the way? Thanks in advance, Brian

    Read the article

  • Using inotify-tools and ruby to push uploads to Cloud Files

    - by Christian
    Hi guys, I wrote a few scripts to monitor an uploads directory for changes, then capture the file uploaded/changed and push it to Cloud Files using a Ruby script. This all works well 95% of the time; the only exception is that occasionally Ruby fails with a 'file does not exist' exception. I am assuming that the Ruby 'push' script is being called before the file is 100% in its new location, so the script is being called a little prematurely. I tried adding a little function to my script to check if the file exists and, if it doesn't, sleep 5 seconds and try again, but this seems to snowball and eventually dies. I then just added a sleep 2 to all calls, but it hasn't helped, as I now get the 'file does not exist' error again.

        #!/bin/sh
        function checkExists {
          if [ ! -e "$1" ]
          then
            sleep 5
            checkExists $1
          fi
        }

        inotifywait -mr --timefmt '%d/%m/%y-%H:%M' --format '%T %w %f' -e modify,moved_to,create,delete /home/skylines/html/forums/uploads | while read date dir file; do
          cloudpath=${dir:20}${file}
          localpath=${dir}${file}
          #checkExists $localpath
          sleep 2
          ruby /home/cbiggins/bin/pushToCloud.rb skylinesaustralia.com $cloudpath $localpath
          echo "${date} ruby /home/cbiggins/bin/pushToCloud.rb skylinesaustralia.com $cloudpath $localpath" >> /var/log/pushToCloud.log
        done

    I am looking for any suggestions to help me make this 100% stable (eventually I'll serve the uploaded files from Cloud Files, so I need to make sure it's perfect). Thanks in advance!

    Read the article

  • Is there an "embedded DBMS" to support multiple writer applications (processes) on the same db files

    - by Amir Moghimi
    I need to know if there is any embedded DBMS (preferably in Java and not necessarily relational) which supports multiple writer applications (processes) on the same set of db files. BerkeleyDB supports multiple readers but just one writer; I need multiple writers and multiple readers. UPDATE: It is not a multiple-connection issue. I mean that I do not need multiple connections to a running DBMS application (process) to write data; I need multiple DBMS applications (processes) to commit to the same storage files. HSQLDB, H2, JavaDB (Derby) and MongoDB do not support this feature. I think that there may be some file-system limitations that prohibit this. If so, is there a file system that allows multiple writers on a single file? Use case: a high-throughput clustered system that intends to store its high-volume business log entries on SAN storage. Storing business logs in separate files for each server does not fit, because query and indexing capabilities are needed across the whole set of business logs. Because "a SAN typically is its own network of storage devices that are generally not accessible through the regular network by regular devices", I want to use SAN network bandwidth for logging while cluster LAN bandwidth is being used for other server-to-server and client-to-server communications.

    Read the article

  • Parsing large delimited files with dynamic number of columns

    - by annelie
    Hi, what would be the best approach to parse a delimited file when the columns are unknown before parsing the file? The file format is Rightmove v3 (.blm), and the structure looks like this:

        #HEADER#
        Version : 3
        EOF : '^'
        EOR : '~'

        #DEFINITION#
        AGENT_REF^ADDRESS_1^POSTCODE1^MEDIA_IMAGE_00~    // can be any number of columns

        #DATA#
        agent1^the address^the postcode^an image~
        agent2^the address^the postcode^^~    // the records have to have the same number of columns as specified in the definition, however the fields can be empty
        #END#

    The files can potentially be very large; the example file I have is 40 MB, but they could be several hundred megabytes. Below is the code I had started on before I realised the columns were dynamic. I'm opening a FileStream, as I read that was the best way to handle large files. I'm not sure my idea of putting every record in a list and then processing is any good though; I don't know if that will work with such large files.

        List<string> recordList = new List<string>();
        try
        {
            using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
            {
                StreamReader file = new StreamReader(fs);
                string line;
                while ((line = file.ReadLine()) != null)
                {
                    string[] records = line.Split('~');
                    foreach (string item in records)
                    {
                        if (item != String.Empty)
                        {
                            recordList.Add(item);
                        }
                    }
                }
            }
        }
        catch (FileNotFoundException ex)
        {
            Console.WriteLine(ex.Message);
        }

        foreach (string r in recordList)
        {
            Property property = new Property();
            string[] fields = r.Split('^');
            // can't do this as I don't know which field is the post code
            property.PostCode = fields[2];
            // etc
            propertyList.Add(property);
        }

    Any ideas of how to do this better? It's C# 3.0 and .NET 3.5, if that helps. Thanks, Annelie
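
    One way around the unknown columns (a minimal sketch, not the asker's code): read the #DEFINITION# row into an array of column names, then stream the #DATA# rows one at a time and look fields up by name instead of by position. BlmReader is a hypothetical helper; it assumes one record per line and that '^' and '~' never occur inside field values.

        // Sketch for C# 3.0 / .NET 3.5: stream a .blm file, build the column list
        // from the #DEFINITION# row, and yield one name->value dictionary per record.
        using System.Collections.Generic;
        using System.IO;

        public static class BlmReader
        {
            private enum Section { Header, Definition, Data }

            public static IEnumerable<Dictionary<string, string>> ReadRecords(string path)
            {
                using (StreamReader reader = new StreamReader(path))
                {
                    Section section = Section.Header;
                    string[] columns = null;
                    string line;

                    while ((line = reader.ReadLine()) != null)
                    {
                        line = line.Trim();
                        if (line.Length == 0) continue;

                        if (line == "#HEADER#")     { section = Section.Header;     continue; }
                        if (line == "#DEFINITION#") { section = Section.Definition; continue; }
                        if (line == "#DATA#")       { section = Section.Data;       continue; }
                        if (line == "#END#")        yield break;

                        if (section == Section.Definition)
                        {
                            // e.g. AGENT_REF, ADDRESS_1, POSTCODE1, MEDIA_IMAGE_00
                            columns = line.TrimEnd('~').Split('^');
                        }
                        else if (section == Section.Data && columns != null)
                        {
                            string[] fields = line.TrimEnd('~').Split('^');
                            Dictionary<string, string> record = new Dictionary<string, string>();
                            for (int i = 0; i < columns.Length && i < fields.Length; i++)
                                record[columns[i]] = fields[i];
                            yield return record;
                        }
                    }
                }
            }
        }

    With that shape, the property mapping becomes record["POSTCODE1"] rather than fields[2], and nothing is held in memory beyond the current record.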

    Read the article

  • Split user.config into different files for faster saving (at runtime)

    - by HorstWalter
    In my C# Windows Forms application (.NET 3.5 / VS 2008) I have 3 settings files resulting in one user.config file. One settings file consists of larger data but is rarely changed. The frequently changed data are very few. However, since saving the settings always rewrites the whole (XML) file, it is always "slow".

        SettingsSmall.Default.Save(); // slow, even if SettingsSmall consists of little data

    Could I configure the settings somehow to result in two files, so that:

        SettingsSmall.Default.Save(); // should be fast
        SettingsBig.Default.Save();   // could be slow, is seldom saved

    I have seen that I can use the SectionInformation class for further customizing, but what would be the easiest approach for me? Is this possible by just changing the app.config (configSections)? --- Added information about App.config: the reason why I get one file might be the configSections in the App.config. This is how it looks:

        <configSections>
          <sectionGroup name="userSettings" type="System.Configuration.UserSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
            <section name="XY.A.Properties.Settings2Class" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowExeDefinition="MachineToLocalUser" requirePermission="false" />
            <section name="XY.A.Properties.Settings3Class" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowExeDefinition="MachineToLocalUser" requirePermission="false" />
          </sectionGroup>
        </configSections>

    I got the sections when I added the 2nd and 3rd settings files. I had not paid any attention to this, so it was somehow the default of VS 2008. The single user.config has these 3 sections; it is absolutely transparent. I just do not know how to tell the App.config to create three independent files instead of one. I have "played around" with the app.config above, but e.g. when I remove the config sections my application terminates with an exception.

    Read the article

  • How to copy a directory structure but only include certain files (using windows batch files)

    - by Martin
    As the title says, how can I recursively copy a directory structure but only include some files. E.g given the following directory structure: folder1 folder2 folder3 data.zip info.txt abc.xyz folder4 folder5 data.zip somefile.exe someotherfile.dll The files data.zip and info.txt can appear everywhere in the directory structure. How can I copy the full directory structure, but only include files named data.zip and info.txt (all other files should be ignored)? The resulting directory structure should look like this: copy_of_folder1 folder2 folder3 data.zip info.txt folder4 folder5 data.zip
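
    On Windows, robocopy can usually do this directly (something like robocopy folder1 copy_of_folder1 data.zip info.txt /S, where /S copies subdirectories but skips empty ones), so it is worth trying before writing code. As an alternative, a rough C# sketch of the same idea is below: walk the source tree, and for each matching file recreate its relative directory under the destination and copy it. The source/target paths and file-name list are placeholders.

        // Sketch: copy only data.zip and info.txt, preserving the directory layout.
        using System.IO;

        class SelectiveCopy
        {
            static readonly string[] Wanted = { "data.zip", "info.txt" };

            static void Main()
            {
                string source = @"C:\folder1";           // placeholder source root
                string target = @"C:\copy_of_folder1";   // placeholder destination root

                foreach (string name in Wanted)
                {
                    foreach (string file in Directory.GetFiles(source, name, SearchOption.AllDirectories))
                    {
                        // Rebuild the file's path relative to the source root under the target root.
                        string relative = file.Substring(source.Length).TrimStart(Path.DirectorySeparatorChar);
                        string destination = Path.Combine(target, relative);
                        Directory.CreateDirectory(Path.GetDirectoryName(destination));
                        File.Copy(file, destination, true);   // overwrite if already present
                    }
                }
            }
        }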

    Read the article

  • Enjoy Seamless Reading at Twitter in Chrome

    - by Asian Angel
    Twitter can be a lot of fun but having to constantly use the More Button to view a large number of tweets is frustrating. All that you need to be rid of that frustration is the More Tweets! extension for Google Chrome. Before Here it is…the classic “More Button”. If you are only interested in viewing a few tweets on occasion then it is not a problem. But if you are looking at a large number of tweets on a daily basis then it can be very frustrating. Notice the last tweet from TinyHacker shown here… After After installing the extension the only thing that you will need to do is refresh your Twitter page if you had it open before-hand. Now there will be a seamless connection from page to page when you are reading through tweets. You can see the TinyHacker tweet from above followed oh so nicely by tweets from the second page…this is definitely an improvement. For those who may be curious if you are quick enough with your mouse you can see what the “automated connection process” looks like. Conclusion If you are tired of constantly clicking the “More Button” and just want to read tweets without interruption then you will be very satisfied after adding this extension to your browser. Links Download the More Tweets! extension (Google Chrome Extensions)

    Read the article

  • Slow dvd burning/reading speeds: how to solve

    - by wouter205
    I have a problem with which I've been struggling since I started using Linux on my desktop a year ago, but I still haven't found a solution for it. When reading or burning a DVD, the speeds are very slow (mostly under 1x), even though I selected the fastest speed in k3b. As a result, it takes up to 40-50 minutes to burn one DVD! I read about enabling DMA in this post, but it didn't help. This is the output of dmesg | grep -i dma:

        [ 0.000000] DMA 0x00000010 -> 0x00001000
        [ 0.000000] DMA32 0x00001000 -> 0x00100000
        [ 0.000000] DMA zone: 56 pages used for memmap
        [ 0.000000] DMA zone: 5 pages reserved
        [ 0.000000] DMA zone: 3921 pages, LIFO batch:0
        [ 0.000000] DMA32 zone: 3527 pages used for memmap
        [ 0.000000] DMA32 zone: 254441 pages, LIFO batch:31
        [ 0.000000] Policy zone: DMA32
        [ 0.120356] pnp 00:01: [dma 4]
        [ 0.120968] pnp 00:05: [dma 2]
        [ 0.121421] pnp 00:06: [dma 3]
        [ 0.122617] pnp 00:0b: [dma 0 disabled]
        [ 0.852321] ata1: SATA max UDMA/133 cmd 0xec00 ctl 0xe480 bmdma 0xe000 irq 19
        [ 0.852325] ata2: SATA max UDMA/133 cmd 0xe400 ctl 0xe080 bmdma 0xe008 irq 19
        [ 0.861633] ata3: PATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xff00 irq 14
        [ 0.861636] ata4: PATA max UDMA/133 cmd 0x170 ctl 0x376 bmdma 0xff08 irq 15
        [ 1.329411] ata1.00: ATA-7: Maxtor 6V250F0, VA111630, max UDMA/133
        [ 1.345418] ata1.00: configured for UDMA/133
        [ 1.820606] ata4.00: ATAPI: PHILIPS DVDR1660P1, P1.3, max UDMA/33
        [ 1.820610] ata4.00: WARNING: ATAPI DMA disabled for reliability issues. It can be enabled
        [ 1.820613] ata4.00: WARNING: via pata_ali.atapi_dma modparam or corresponding sysfs node.
        [ 1.836681] ata4.00: configured for UDMA/33
        [ 12.296600] parport0: PC-style at 0x378 (0x778), irq 7, dma 3 [PCSPP,TRISTATE,COMPAT,EPP,ECP,DMA]

    Reading the third- and fourth-last lines, I assume there is indeed a problem with DMA? Edit: this question is still not solved. Could anyone come up with another solution, please? Thanks

    Read the article

  • Is Reading the Spec Enough?

    - by jozefg
    This question is centered around Scheme but could really be applied to any Lisp or programming language in general. Background: I recently picked up Scheme again, having toyed with it once or twice before. In order to solidify my understanding of the language, I found the Revised^5 Report on the Algorithmic Language Scheme and have been reading through that, along with my compiler/interpreter's (Chicken Scheme) listed extensions/implementations. Additionally, in order to see this applied, I have been actively seeking out Scheme code in open source projects and trying to read and understand it. This has been sufficient so far for understanding the syntax of Scheme, and I've completed almost all of the Ninety-Nine Scheme Problems (see here) as well as a decent number of Project Euler problems. Question: while so far this hasn't been an issue and my solutions closely match those provided, am I missing out on a great part of Scheme? Or, to phrase my question more generally, is reading the specification of a language, along with well-written code in that language, sufficient to learn from? Or are other resources (books, lectures, videos, blogs, etc.) necessary for the learning process as well?

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new object ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC arc case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems. (32-bits use less memory and bus bandwidth, and on sparc runs just as fast as 64-bit code otherwise). Complex finely tuned code, where the original authors may no longer be available. Critical production code, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk adverse and/or high scale categories have good reasons to push 32-bits objects to the limit before moving on. Ancillary objects offer these users a longer runway. Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable objects, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SUNW_SHF_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature. 
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine grained debugging of highly optimized code requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ... 
-z ancillary[=outfile] ...By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object. .shstrtab The section name string table. .symtab The full non-dynamic symbol table. .symtab_shndx The symbol table extended index section associated with .symtab. .strtab The non-dynamic string table associated with .symtab. .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined. The primary object and all ancillary objects contain the same array of sections headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc. $ cat hello.c #include <stdio.h> int main(int argc, char **argv) { (void) printf("hello, world\n"); return (0); } $ cc -g -zancillary hello.c $ file a.out a.out.anc a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out $ ./a.out hello worldThe resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example. 
Index Section Name Type Primary Flags Ancillary Flags Primary Size Ancillary Size 13 .text PROGBITS ALLOC EXECINSTR ALLOC EXECINSTR SUNW_ABSENT 0x131 0 20 .data PROGBITS WRITE ALLOC WRITE ALLOC SUNW_ABSENT 0x4c 0 21 .symtab SYMTAB 0 0 0x450 0x450 22 .strtab STRTAB STRINGS STRINGS 0x1ad 0x1ad 24 .debug_info PROGBITS SUNW_ABSENT 0 0 0x1a7 28 .shstrtab STRTAB STRINGS STRINGS 0x118 0x118 29 .SUNW_ancillary SUNW_ancillary 0 0 0x30 0x30 The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table.symtab, and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbol contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together. $ elfdump -T SUNW_ancillary a.out a.out.anc a.out: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0x8724 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 a.out.anc: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0xfbe2 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows. Debugger Access and Use of Ancillary Objects Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files. 
Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined. Create a section header array in memory, using the section header array from the object being examined as an initial template. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set. The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program. Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects. ELF Implementation Details (From the Solaris Linker and Libraries Guide) To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual. Part IV ELF Application Binary Interface Chapter 13: Object File Format Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type. e_type Identifies the object file type, as listed in the following table. NameValueMeaning ET_NONE0No file type ET_REL1Relocatable file ET_EXEC2Executable file ET_DYN3Shared object file ET_CORE4Core file ET_LOSUNW0xfefeStart operating system specific range ET_SUNW_ANCILLARY0xfefeAncillary object file ET_HISUNW0xfefdEnd operating system specific range ET_LOPROC0xff00Start processor-specific range ET_HIPROC0xffffEnd processor-specific range Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section. ... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ... Table 13-5 ELF Section Types, sh_type NameValue . . . SHT_LOSUNW0x6fffffee SHT_SUNW_ancillary0x6fffffee . . . ... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ... Table 13-8 ELF Section Attribute Flags NameValue . . . 
SHF_MASKOS0x0ff00000 SHF_SUNW_NODISCARD0x00100000 SHF_SUNW_ABSENT0x00200000 SHF_SUNW_PRIMARY0x00400000 SHF_MASKPROC0xf0000000 . . . ... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects, will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files. SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one more input section with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ... Two members in the section header, sh_link, and sh_info, hold special information, depending on section type. Table 13-9 ELF sh_link and sh_info Interpretation sh_typesh_linksh_info . . . SHT_SUNW_ANCILLARY The section header index of the associated string table. 0 . . . Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section. Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes. Table 13-10 ELF Special Sections NameTypeAttribute . . . .SUNW_ancillarySHT_SUNW_ancillaryNone . . . ... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ... Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF. In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h. 
typedef struct { Elf32_Word a_tag; union { Elf32_Word a_val; Elf32_Addr a_ptr; } a_un; } Elf32_Ancillary; typedef struct { Elf64_Xword a_tag; union { Elf64_Xword a_val; Elf64_Addr a_ptr; } a_un; } Elf64_Ancillary; For each object with this type, a_tag controls the interpretation of a_un. a_val These objects represent integer values with various interpretations. a_ptr These objects represent file offsets or addresses. The following ancillary tags exist. Table 13-NEW1 ELF Ancillary Array Tags NameValuea_un ANC_SUNW_NULL0Ignored ANC_SUNW_CHECKSUM1a_val ANC_SUNW_MEMBER2a_ptr ANC_SUNW_NULL Marks the end of the ancillary section. ANC_SUNW_CHECKSUM Provides the checksum for a file in the c_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member. ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string, that provides the file name. An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as: TagMeaning ANC_SUNW_CHECKSUMChecksum of this object ANC_SUNW_MEMBERName of object #1 ANC_SUNW_CHECKSUMChecksum for object #1 . . . ANC_SUNW_MEMBERName of object N ANC_SUNW_CHECKSUMChecksum for object N ANC_SUNW_NULL An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match. Related Other Work The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post processing step, Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections, when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. 
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.

    Read the article

  • vbscript help needed [migrated]

    - by Romeo
    I am trying to move a group of files from within a group of folders named recup_dir.1 through recup_dir.535 into a single folder, so that all the files end up out of those folders and in the one folder. I know I will need to use a loop to move the files, and probably concatenation to go from recup_dir.1 to recup_dir.535, but I am just not that skilled in programming. Please help! I just want to automate the copying and moving of the files rather than do it manually.
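
    The asker wants VBScript, but the loop-and-move logic is the same in any language; the sketch below is C# and should be read purely as an illustration of the approach. The root and destination paths are placeholders; recup_dir.1 through recup_dir.535 come from the question.

        // Illustration only: iterate recup_dir.1 .. recup_dir.535 and move every
        // file into a single destination folder, renaming on name collisions.
        using System.IO;

        class FlattenRecupDirs
        {
            static void Main()
            {
                string rootPath = @"C:\recovered";        // parent of the recup_dir.N folders (placeholder)
                string targetPath = @"C:\recovered\all";  // single destination folder (placeholder)
                Directory.CreateDirectory(targetPath);

                for (int i = 1; i <= 535; i++)
                {
                    string folder = Path.Combine(rootPath, "recup_dir." + i);
                    if (!Directory.Exists(folder)) continue;

                    foreach (string file in Directory.GetFiles(folder))
                    {
                        string destination = Path.Combine(targetPath, Path.GetFileName(file));
                        // If two folders contain the same file name, keep both copies.
                        if (File.Exists(destination))
                            destination = Path.Combine(targetPath,
                                Path.GetFileNameWithoutExtension(file) + "_" + i + Path.GetExtension(file));
                        File.Move(file, destination);
                    }
                }
            }
        }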

    Read the article

  • C#, Open Folder and Select multiple files

    - by Vytas999
    Hello, in C# I want to open Explorer, and in this Explorer window some files must be selected. I do it like this:

        string fPath = newShabonFilePath;
        string arg = @"/select, ";
        int cnt = filePathes.Count;
        foreach (string s in filePathes)
        {
            if (cnt == 1)
                arg = arg + s;
            else
                arg = arg + s + ",";
            cnt--;
        }
        System.Diagnostics.Process.Start("explorer.exe", arg);

    But only the last file of arg is selected. How can I make all the files in arg be selected when the Explorer window is opened?
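
    As far as I know, explorer.exe /select, honours only a single path, which is why building a comma-separated list ends up selecting just one file. The usual workaround for selecting several files is the shell API SHOpenFolderAndSelectItems; a hedged P/Invoke sketch follows (error handling omitted, and the files are assumed to live in the same folder):

        // Sketch: select multiple files in one Explorer window via shell32's
        // SHOpenFolderAndSelectItems instead of "explorer.exe /select,".
        using System;
        using System.Collections.Generic;
        using System.Runtime.InteropServices;

        static class ExplorerSelect
        {
            [DllImport("shell32.dll", CharSet = CharSet.Unicode)]
            static extern IntPtr ILCreateFromPath(string pszPath);

            [DllImport("shell32.dll")]
            static extern void ILFree(IntPtr pidl);

            [DllImport("shell32.dll")]
            static extern int SHOpenFolderAndSelectItems(IntPtr pidlFolder, uint cidl, IntPtr[] apidl, uint dwFlags);

            public static void SelectFiles(string folder, IList<string> files)
            {
                IntPtr folderPidl = ILCreateFromPath(folder);
                IntPtr[] filePidls = new IntPtr[files.Count];
                try
                {
                    for (int i = 0; i < files.Count; i++)
                        filePidls[i] = ILCreateFromPath(files[i]);

                    SHOpenFolderAndSelectItems(folderPidl, (uint)filePidls.Length, filePidls, 0);
                }
                finally
                {
                    foreach (IntPtr pidl in filePidls)
                        if (pidl != IntPtr.Zero) ILFree(pidl);
                    ILFree(folderPidl);
                }
            }
        }

    A call shaped like ExplorerSelect.SelectFiles(System.IO.Path.GetDirectoryName(filePathes[0]), filePathes) would then replace the Process.Start line.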

    Read the article

  • RIA Services Localization, where to place Resource Files

    - by kmacmahon
    I have the following solution:

        SomeProject.Ria (non-Silverlight code)
        SomeProject.Ria.Silverlight (Silverlight code, namespace is still SomeProject.Ria)
        SomeProject.Ria.MyServices (RIA Services domain service)
        SomeProject.Ria.MyServices.Proxies (RIA Services Silverlight generated code)
        SomeProject.Shell (Silverlight application)
        SomeProject.Web (web application)

    I would like to use resource files for my annotations on the metadata class in SomeProject.Ria.MyServices. The format for that appears to be:

        [Required(AllowEmptyStrings = false, ErrorMessageResourceName = "ThisFieldIsRequired", ErrorMessageResourceType = typeof(MyResource))]

    Which project does MyResource belong in? (Assuming that someday I need to support other culture files.) Also, the use of the string name here really seems to leave room for error. Is it possible to do something like the following and still achieve localization, or does this just get compiled into the metadata? If not, how can I get round the resource name being a string?

        [Required(AllowEmptyStrings = false, ErrorMessage = MyResources.RequiredMessage)]
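
    Two things worth noting (not from the question itself): attribute arguments must be compile-time constants, so ErrorMessage = MyResources.RequiredMessage will not compile unless RequiredMessage happens to be a const string; the ResourceName/ResourceType pair exists precisely so the lookup can happen at runtime via a public static string property on the resource class. A hedged sketch of the usual metadata-class shape, where ValidationResources stands in for whichever project's .resx-backed designer class ends up owning the strings (it has to be visible, and public, to the code that actually runs the validation):

        using System.ComponentModel.DataAnnotations;

        // Placeholder for a .resx designer class; the real one is generated.
        public static class ValidationResources
        {
            public static string ThisFieldIsRequired
            {
                get { return "This field is required."; }
            }
        }

        [MetadataType(typeof(Customer.CustomerMetadata))]
        public partial class Customer
        {
            public string Name { get; set; }

            internal sealed class CustomerMetadata
            {
                [Required(AllowEmptyStrings = false,
                          ErrorMessageResourceName = "ThisFieldIsRequired",
                          ErrorMessageResourceType = typeof(ValidationResources))]
                public string Name { get; set; }
            }
        }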

    Read the article

  • Intellisense in header files

    - by David
    I just now "migrated" from C# to C++/CLR. At first I was annoyed that I had to write every class declaration twice (in the .h and the .cpp). Then I figured out that I could also place the code in the .h files - it compiles, at least. Well, I deleted all the .cpp files of my classes, and now I've realized that VS won't give me any IntelliSense when I work on my .h files. I guess I should not place my code in the .h files (the code won't be reused in other projects for sure), but I find it terrible to adjust all method declarations in two places... Plus I have to switch back and forth to see what modifiers my methods have, etc., and it is not all nicely in one place like in C# (with its pros and cons). I'm sorry this is a newbie question, but I just wanted to make sure that there isn't any possibility to enable IntelliSense for .h files. Or at least to learn that I am completely on the wrong path... Thanks, David

    Read the article

  • Upload large files in .NET

    - by Austin
    I've done a good bit of research to find an upload component for .NET that I can use to upload large files, has a progress bar, and can resume the upload of large files. I've come across some components like AjaxUploader, SlickUpload, and PowUpload, to name a few. Each of these options cost money and only PowUpload does the resumable upload, but it does it with a java applet. I'm willing to pay for a component that does those things well, but if I could write it myself that would be best. I have two questions: Is it possible to resume a file upload on the client without using flash/java/Silverlight? Does anyone have some code or a link to an article that explains how to write a .NET HTTPHandler that will allow streaming upload and an ajax progress bar? Thank you, Austin [Edit] I realized I do need to be able to do resumable file uploads for my project, any suggestions for components that can do that?
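
    On the second question (the handler part only): the basic shape of an upload IHttpHandler is to read the request body in chunks and write it to disk. A skeleton is sketched below; as a caveat, classic ASP.NET still buffers the request and enforces maxRequestLength, so by itself this is not a complete large-file or resumable-upload solution, and the target path is a placeholder.

        // Skeleton upload handler: copies the request body to a file in 64 KB chunks.
        using System.IO;
        using System.Web;

        public class UploadHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                string targetPath = context.Server.MapPath("~/App_Data/upload.bin"); // placeholder

                byte[] buffer = new byte[64 * 1024];
                Stream input = context.Request.InputStream;
                using (FileStream output = File.Create(targetPath))
                {
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, read);
                        // Bytes-written-so-far could be recorded here (e.g. in application
                        // state keyed by an upload id) and polled by an AJAX progress call.
                    }
                }

                context.Response.ContentType = "text/plain";
                context.Response.Write("OK");
            }
        }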

    Read the article

  • files build execution order

    - by Mahesh
    Hi, I have a data structure which is given below:

        class File
        {
            public string Value { get; set; }
            public File[] Dependencies { get; set; }
            public bool Change { get; private set; }

            public File(string value, File[] dependencies)
            {
                Value = value;
                Dependencies = dependencies;
                Change = false;
            }
        }

    Basically, this data structure follows a typical build execution of files. Each File has a value and a list of dependencies, which are again of type File. Every file exposes a property called Change which tells whether the file has changed or not. I have tried to come up with an algorithm which goes through all these files and builds them in order (i.e. a typical build process) but haven't got a good one yet. Can anyone throw some light on this? Thanks a lot. Mahesh
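
    The standard way to turn a dependency graph like this into a build order is a topological sort; the depth-first version below is a minimal sketch against the File class from the question (not the asker's code), and it assumes the dependency graph has no cycles.

        // Depth-first topological sort: each File is emitted only after all of its
        // dependencies, which is the order a build would process them.
        using System.Collections.Generic;

        static class BuildOrder
        {
            public static IList<File> Sort(IEnumerable<File> files)
            {
                List<File> ordered = new List<File>();
                HashSet<File> visited = new HashSet<File>();

                foreach (File f in files)
                    Visit(f, visited, ordered);

                return ordered;
            }

            static void Visit(File file, HashSet<File> visited, List<File> ordered)
            {
                if (file == null || visited.Contains(file))
                    return;
                visited.Add(file);

                if (file.Dependencies != null)
                    foreach (File dep in file.Dependencies)
                        Visit(dep, visited, ordered);    // dependencies first

                ordered.Add(file);                       // then the file itself
            }
        }

    Incremental behaviour can then be layered on top of the ordered list: rebuild a file when its own Change flag is set or when any of its already-processed dependencies was rebuilt. A real implementation should also detect cycles, e.g. with an "in progress" set alongside visited.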

    Read the article

  • Scala script to copy files

    - by kulkarni
    I want to copy file a.txt to newDir/ from within a Scala script. In Java this would be done by creating two file streams for the two files, reading from a.txt into a buffer and writing it to the FileOutputStream of the new file. Is there a better way to achieve this in Scala? Maybe something in scala.tools.nsc.io._? I searched around but could not find much.

    Read the article

  • Configuration Files: how to read them into models?

    - by stacker
    I have a lot of configuration files in this format:

        <?xml version="1.0" encoding="utf-8"?>
        <XConfiguration>
          <XFile Name="file name 1" />
          <XFile Name="name2" />
          <XFile Name="name3" />
          <XFile Name="name4" />
        </XConfiguration>

    I want to use ConfigurationRepository.Get to get this object populated:

        public class XConfiguration
        {
            public XFile[] Files { get; set; }
        }

    I wonder what is the best way to do that. LinqToXml? I don't think ConfigurationManager is a smart option for this.
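
    One low-ceremony option is XmlSerializer, letting the repeated <XFile> elements map straight onto the Files array. A hedged sketch follows; the XFile class with a Name attribute is assumed, since it is not shown in the question, and XConfigurationLoader is just an illustrative wrapper for whatever ConfigurationRepository.Get does internally.

        // Sketch: deserialize <XConfiguration><XFile Name="..."/>...</XConfiguration>
        // into the XConfiguration/XFile model with XmlSerializer.
        using System.Xml;
        using System.Xml.Serialization;

        [XmlRoot("XConfiguration")]
        public class XConfiguration
        {
            [XmlElement("XFile")]
            public XFile[] Files { get; set; }
        }

        public class XFile
        {
            [XmlAttribute("Name")]
            public string Name { get; set; }
        }

        public static class XConfigurationLoader
        {
            public static XConfiguration Load(string path)
            {
                XmlSerializer serializer = new XmlSerializer(typeof(XConfiguration));
                using (XmlReader reader = XmlReader.Create(path))
                {
                    return (XConfiguration)serializer.Deserialize(reader);
                }
            }
        }

    LINQ to XML works just as well (XDocument.Load plus a query over the XFile elements); the serializer version simply keeps the mapping declarative.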

    Read the article

  • using macro defined in header files

    - by Neeraj
    I have a macro definition in a header file like this:

        // header.h
        #define ARRAY_SZ(a) ((int) sizeof(a)/sizeof(a[0]))

    This is defined in some header file, which includes some more header files. Now I need to use this macro in a source file that has no other reason to include header.h or any of the other header files included by header.h. So should I redefine the macro in my source file, or simply include the header file header.h? Will the latter approach affect the code size or compile time (I think yes), or the runtime (I think no)? Your advice on this!

    Read the article

  • Are large include files like iostream efficient? (C++)

    - by Keand64
    iostream, when you add up all of the files it includes, the files that those include, and so on and so forth, comes to about 3000 lines. Consider the hello world program, which needs no more functionality than to print something to the screen:

        #include <iostream> // +3000 lines right there

        int main()
        {
            std::cout << "Hello, World!";
            return 0;
        }

    This should be a very simple piece of code, but iostream adds 3000+ lines to a marginal piece of code. So, are these 3000+ lines of code really needed simply to display a single line on the screen, and if not, do they create a less efficient program than if I simply copied the relevant lines into the code?

    Read the article
