Search Results

Search found 81493 results on 3260 pages for 'file size'.

Page 99/3260 | < Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >

  • jQuery: Finding file size and adding it to the link

    - by Ricardo
    Let me start by saying that I'm not a jQuery guru by any means, and I genuinely know this is over my head; that's why I've come to SO. Is there a way with jQuery to find the file size of a link on a page and then inject the text of the file size next to the link? Here's my problem: on one of my pages I have a link to my resume, which is a PDF file, and to improve usability it's proper to have the file type and file size next to the link so users can decide whether or not they want to click it. So the link would read something like "Download my resume (PDF / 80KB)". The problem is that I'm constantly updating my resume and uploading a new PDF file which, of course, has a different file size, so I'm always going back to the HTML and changing the text to reflect the new size. Is there a way to automate this with jQuery... or plain JavaScript for that matter? I found this script and made a demo in Codepen, but it doesn't seem to work. Any help with this would be greatly appreciated.
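
    The underlying idea - ask the server for the file's size with a HEAD request and read the Content-Length header instead of hard-coding the number in the page - can be sketched outside the browser as well. Below is a minimal Python illustration of that idea only (the resume URL is a placeholder, and some servers do not send Content-Length, so the result may be None); in the page itself the same check would be an AJAX HEAD request whose result is written next to the link.

    import urllib.request

    # Ask the server how big the file is without downloading it.
    # The URL below is a placeholder, not the asker's real resume link.
    def remote_file_size_kb(url):
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request) as response:
            length = response.headers.get("Content-Length")  # may be absent
            return None if length is None else int(length) // 1024

    size_kb = remote_file_size_kb("https://example.com/resume.pdf")
    print("Download my resume (PDF / %s KB)" % size_kb)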

    Read the article

  • Setting the size of a ContentPane (inside of a JFrame)

    - by Jim
    Hello, I want to set the size of a JFrame such that its contentPane ends up at the desired size. JFrame.setSize() doesn't take the window decorations into account, so the contentPane comes out slightly too small. The size of the window decorations is platform- and theme-specific, so it's bad news to try to account for them manually. JFrame.getContentPane().setSize() fails because the content pane is managed. Ideas? Thanks!

    Read the article

  • Set the width of a UITextView based on the contentSize property

    - by Mrwolfy
    TextView.contentSize.width does not work for setting the UITextView's frame.size.width: [TextView setFrame:CGRectMake(TextView.frame.origin.x, TextView.frame.origin.y, TextView.contentSize.width, TextView.contentSize.height)]; Setting the UITextView's frame height to the contentSize.height property works to make the view's frame scale to the proper size for the current vertical extent of the content. For some reason, the width of the view's frame does not respond in the same way; it just remains the same size regardless of the amount of text entered. When I log the contentSize of the UITextView dynamically, as I am typing text into the view, the height property changes while the width does not. It makes me wonder what the width property is doing and what it's for.

    Read the article

  • Dataset size limit as an xml file.

    - by np
    Hi, we are currently using a DataSet for loading and saving our data to an XML file, and there is a good possibility that the XML file could get very large. We are wondering whether there is any limit on the size of an XML file, so that the DataSet would not run into any issues in the future because of its size. Please advise. Thanks, N

    Read the article

  • Boost::asio::endpoint::size() and resize()

    - by p00ya
    Hi. I was reading the Boost endpoint documentation and saw the size() and resize() member functions. The documentation says: "Gets the underlying size of the endpoint in the native type." What does this size represent, and where can it be used/resized? Thanks.

    Read the article

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question of how to skip or bypass a trailer record or a badly formatted/empty row in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out why, and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I found that, even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. When I say efficient I mean the shortest time to solution for the fellow developers. OK, enough talk. Let's face the problem: Typically a flat file (e.g. a comma-delimited/CSV file) needs to be processed (loaded into a database in most cases, really). Oftentimes such an input file is produced by some sort of out-of-control, third-party solution and comes in with garbage characters and/or even malformed rows. One such example could be this imaginary file: As you can see, several rows have no data and there is an occasional garbage character (1, in this example on row #7). Our task is to produce a clean file that captures only the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source. Let's outline our course of action to start off: We will use SSIS 2005 to create a DFT; the DFT will use a Flat File Source for our [bad] input flat file; we will use a Conditional Split to process the bad input file; and finally we will dump the resulting data to a new [clean] file. Well, only four steps; let's see if it is too much work. 1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT): 2 and 3: I had added the data viewer just to see what I am getting; alas, surprisingly, the data issues were not visible in it. What really is the key to the approach is to properly set up the Conditional Split Transformation. Visually it is: and specifically its SSIS expression LEN([After CS Column 0]) > 1 The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I renamed the Output Name "No Empty Rows", but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part will be crucial – consuming the output of our Conditional Split. Last step, #4: Add a Flat File Destination or any other destination you need. Click on the Conditional Split and drag the green arrow onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. At this point your package should look like: As the last step we will run our package to examine the produced output file. F5: and… it looks great!
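
    For readers who want to sanity-check the filter logic outside SSIS, here is a rough Python equivalent of the LEN([Column 0]) > 1 Conditional Split condition described above. It is only an illustration of the row-filtering idea, not part of the SSIS package; the file names "dirty.csv" and "clean.csv" are hypothetical.

    import csv

    # Copy only rows whose first column holds real data, dropping empty rows
    # and stray one-character garbage - the same test the Conditional Split makes.
    with open("dirty.csv", newline="") as src, open("clean.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            if row and len(row[0].strip()) > 1:
                writer.writerow(row)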

    Read the article

  • Download size of Streaming Videos

    - by Excalibur2000
    I would like to know: if a website advertises a streaming download as, say, 100MB, would my download to my computer be 100MB? Would there be streaming control packets that a service provider would charge for over and above the 100MB of content? Assume the latest RealPlayer viewer. The rub for me is that I have downloaded MIT lectures and, according to my file manager, the file sizes have matched the download sizes on YouTube. However, my ISP seems to think that the streams were larger and charged me for more than the file size of the download. I am left wondering where the extra data came from.

    Read the article

  • Sort surnames alphabetically from file to file (Java)

    - by user577939
    Hello all, I need some help. I have written this code and don't know how to sort the surnames alphabetically from my input file into the other file.

    import java.io.*;
    import java.util.*;

    // A person record
    class Asmuo {
        String pavarde;
        String vardas;
        long buvLaikas;
        int atv1;
        int atv2;
        int atv3;
    }

    // Linked-list node holding one Asmuo
    class Irasas {
        Asmuo duom;
        Irasas kitas;
    }

    // Singly linked list of records
    class Sarasas {
        private Irasas p;

        Sarasas() {
            p = null;
        }

        Irasas itrauktiElementa(String pv, String v, long laikas, int d0, int d1, int d2) {
            String pvrd, vrd;
            int data0;
            int data1;
            int data2;
            long lks;
            lks = laikas;
            pvrd = pv;
            vrd = v;
            data0 = d0;
            data1 = d1;
            data2 = d2;
            Irasas r = new Irasas();
            r.duom = new Asmuo();
            uzpildymasDuomenimis(r, pvrd, vrd, lks, d0, d1, d2);
            r.kitas = p;
            p = r;
            return r;
        }

        void uzpildymasDuomenimis(Irasas r, String pv, String v, long laik, int d0, int d1, int d2) {
            r.duom.pavarde = pv;
            r.duom.vardas = v;
            r.duom.atv1 = d0;
            r.duom.buvLaikas = laik;
            r.duom.atv2 = d1;
            r.duom.atv3 = d2;
        }

        // Prints the list to the console and writes it to rez.txt
        void spausdinti() {
            Irasas d = p;
            int i = 0;
            try {
                FileWriter fstream = new FileWriter("rez.txt");
                BufferedWriter rez = new BufferedWriter(fstream);
                while (d != null) {
                    System.out.println(d.duom.pavarde + " " + d.duom.vardas + " " + d.duom.buvLaikas + " " + d.duom.atv1 + " " + d.duom.atv2 + " " + d.duom.atv3);
                    rez.write(d.duom.pavarde + " " + d.duom.vardas + " " + d.duom.buvLaikas + " " + d.duom.atv1 + " " + d.duom.atv2 + " " + d.duom.atv3 + "\n");
                    d = d.kitas;
                    i++;
                }
                rez.close();
            } catch (Exception e) {
                System.err.println("Error: " + e.getMessage());
            }
        }
    }

    // Reads duom.txt, builds the list, then prints it
    public class Gyventojai {
        public static void main(String args[]) {
            Sarasas sar = new Sarasas();
            Calendar atv = Calendar.getInstance();
            Calendar isv = Calendar.getInstance();
            try {
                FileInputStream fstream = new FileInputStream("duom.txt");
                DataInputStream in = new DataInputStream(fstream);
                BufferedReader br = new BufferedReader(new InputStreamReader(in));
                String eil;
                while ((eil = br.readLine()) != null) {
                    String[] cells = eil.split(" ");
                    String pvrd = cells[0];
                    String vrd = cells[1];
                    atv.set(Integer.parseInt(cells[2]), Integer.parseInt(cells[3]), Integer.parseInt(cells[4]));
                    isv.set(Integer.parseInt(cells[5]), Integer.parseInt(cells[6]), Integer.parseInt(cells[7]));
                    long laik = (isv.getTimeInMillis() - atv.getTimeInMillis()) / (24 * 60 * 60 * 1000);
                    int d0 = Integer.parseInt(cells[2]);
                    int d1 = Integer.parseInt(cells[3]);
                    int d2 = Integer.parseInt(cells[4]);
                    sar.itrauktiElementa(pvrd, vrd, laik, d0, d1, d2);
                }
                in.close();
            } catch (Exception e) {
                System.err.println("Error: " + e.getMessage());
            }
            sar.spausdinti();
        }
    }
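
    The missing piece is only the sort itself: collect the records, order them by surname, then write them out (in the Java program above, that would mean gathering the Asmuo records into a java.util.List and sorting it with Collections.sort and a Comparator on pavarde before printing). As a language-neutral illustration of that step, here is a small Python sketch, not a drop-in fix for the Java code; it assumes the duom.txt layout from the post with one person per line and the surname as the first whitespace-separated field, and the output name rez_sorted.txt is hypothetical.

    # Read the records, order them by surname (field 0), write them back out.
    with open("duom.txt", encoding="utf-8") as src:
        records = [line.rstrip("\n") for line in src if line.strip()]

    records.sort(key=lambda line: line.split()[0].lower())  # surname is the first field

    with open("rez_sorted.txt", "w", encoding="utf-8") as dst:
        dst.write("\n".join(records) + "\n")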

    Read the article

  • Segmentation fault when trying to write to a file

    - by user1286390
    I am trying, in C, to use the bisection method to find the roots of an equation; however, when I try to write every step of the process to a file I get a segmentation fault. This might be a silly mistake on my part, but I have been trying to solve it for a long time. I am compiling with gcc, and here is the code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define R 1.0
    #define h 1.0

    double function(double a);
    void attractor(double *a1, double *a2, double *epsilon);

    void attractor(double *a1, double *a2, double *epsilon)
    {
        FILE* bisection;
        double a2_copia, a3, fa1, fa2;
        bisection = fopen("bisection-part1.txt", "w");
        fa1 = function(*a1);
        fa2 = function(*a2);
        if (function(*a1) - function(*a2) > 0.0)
            *epsilon = function(*a1) - function(*a2);
        else
            *epsilon = function(*a2) - function(*a1);
        fprintf(bisection, "a1 a2 fa1 fa2 epsilon\n");
        a2_copia = 0.0;
        if (function(*a1) * function(*a2) < 0.0 && *epsilon >= 0.00001) {
            a3 = *a2 - (*a2 - *a1);
            a2_copia = *a2;
            *a2 = a3;
            if (function(*a1) - function(*a2) > 0.0)
                *epsilon = function(*a1) - function(*a2);
            else
                *epsilon = function(*a2) - function(*a1);
            if (function(*a1) * function(*a2) < 0.0 && *epsilon >= 0.00001) {
                fprintf(bisection, "%.4f %.4f %.4f %.4f %.4f\n", *a1, *a2, function(*a1), function(*a1), *epsilon);
                attractor(a1, a2, epsilon);
            } else {
                *a2 = a2_copia;
                *a1 = a3;
                if (function(*a1) - function(*a2) > 0)
                    *epsilon = function(*a1) - function(*a2);
                else
                    *epsilon = function(*a2) - function(*a1);
                if (function(*a1) * function(*a2) < 0.0 && *epsilon >= 0.00001) {
                    fprintf(bisection, "%.4f %.4f %.4f %.4f %.4f\n", *a1, *a2, function(*a1), function(*a1), *epsilon);
                    attractor(a1, a2, epsilon);
                }
            }
        }
        fa1 = function(*a1);
        fa2 = function(*a2);
        if (function(*a1) - function(*a2) > 0.0)
            *epsilon = function(*a1) - function(*a2);
        else
            *epsilon = function(*a2) - function(*a1);
        fprintf(bisection, "%.4f %.4f %.4f %.4f %.4f\n", a1, a2, fa1, fa2, epsilon);
    }

    double function(double a)
    {
        double fa;
        fa = (a * cosh(h / (2 * a))) - R;
        return fa;
    }

    int main()
    {
        double a1, a2, fa1, fa2, epsilon;
        a1 = 0.1;
        a2 = 0.5;
        fa1 = function(a1);
        fa2 = function(a2);
        if (fa1 - fa2 > 0.0)
            epsilon = fa1 - fa2;
        else
            epsilon = fa2 - fa1;
        if (epsilon >= 0.00001) {
            fa1 = function(a1);
            fa2 = function(a2);
            attractor(&a1, &a2, &epsilon);
            fa1 = function(a1);
            fa2 = function(a2);
            if (fa1 - fa2 > 0.0)
                epsilon = fa1 - fa2;
            else
                epsilon = fa2 - fa1;
        }
        if (epsilon < 0.0001)
            printf("Vanish at %f", a2);
        else
            printf("ERROR");
        return 0;
    }

    Thanks anyway, and sorry if this question is not suitable.
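
    One thing worth checking in the posted code is the final fprintf, which passes the pointers a1, a2 and epsilon (rather than *a1, *a2, *epsilon) to %.4f conversions; in C that is undefined behavior and can crash. For reference, here is a compact Python sketch of the bisection algorithm the code is trying to implement, using the same function a*cosh(h/(2a)) - R with R = h = 1 and the bracket [0.1, 0.5] from the post; it stops on the width of the bracket rather than on the |f(a1) - f(a2)| test used in the C code.

    import math

    R = 1.0
    H = 1.0

    def f(a):
        # Same function as in the C code: a*cosh(h/(2a)) - R
        return a * math.cosh(H / (2.0 * a)) - R

    def bisect(lo, hi, tol=1e-5):
        # Plain bisection: halve the bracket [lo, hi] until it is narrower than tol.
        if f(lo) * f(hi) > 0.0:
            raise ValueError("f(lo) and f(hi) must have opposite signs")
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid    # the sign change is in the lower half
            else:
                lo = mid    # the sign change is in the upper half
        return 0.5 * (lo + hi)

    print("Vanishes near a =", bisect(0.1, 0.5))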

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, they are a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built. 
Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5-element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object.
% cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need: _ET_DYN (shared object), _ET_EXEC (executable object), and _ET_REL (relocatable object). STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table: Section AttributeMeaning BITSSection is not of type SHT_NOBITS NOBITSSection is of type SHT_NOBITS SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE, and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item. 
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • Fixing corrupt files or corrupt file table on a USB drive?

    - by Kelsey
    I was doing a virus scan on an external USB drive while copying data over to it. While AVG was scanning, my system locked up (I think because the USB drive ran out of space) and it required a reboot. Since then, none of the data on the external drive is accessible. I can see all the files in the root and the directories, but I cannot browse into any of them; Windows 7 gives an error stating they are corrupt. I think the file table, or whatever it uses to store the index of what exists on the drive, has been corrupted, since it still shows the drive as being almost full but everything I do a properties check on says it is 0 bytes. Does anyone know how to 'unlock' or recover this data? Is there a way to rebuild the file table somehow? Luckily I can recover this data from other sources as a last resort, but I would like to fix this if possible. Any help would be appreciated. Thanks.

    Read the article

  • Why Does DreamWeaver CS5 Discriminate between File Extensions, Even After Modding Mime Types!?

    - by Sam
    Hi folks, even after I forced DreamWeaver CS5 to allow opening of the .ast extension as a MIME type of php5, which DreamWeaver now opens and colors correctly as described here, I still have trouble figuring out why it discriminates between the two file extensions! Symptoms: External Files & Design View. I have a file foo.php which php-includes other files (e.g. the php-combined css.php and js.php). Now, when opening foo.php, all functions work perfectly: the external (included) php files are all recognised correctly. However, when I rename foo.php to foo.ast and open it again, it no longer recognises the file extensions in the top bar. Also, I lose the Design / Live View functionality. When I change foo.ast back to foo.php, all works again! Anyone have any clues as to why there remains a difference between one extension and the other? Note 1: I have added the .ast extension, next to .php, to these four files: 1 C:\Users\Sam\AppData\Local\VirtualStore\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\MMDocumentTypes.xml 2 C:\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\DocumentTypes\MMDocumentTypes.xml 3 C:\Users\Sam\AppData\Roaming\Adobe\Dreamweaver CS5\en_US\Configuration\Extensions.txt 4 C:\Program Files (x86)\Adobe\Adobe Dreamweaver CS5\configuration\Extensions.txt Note 2: sometimes even .php files do not want to show in Design view or Live view. Could this be caused by a corrupted installation?

    Read the article

  • Is there a POSIX pathname that can't name a file?

    - by Charles Stewart
    Are there any legal paths in POSIX that cannot be associated with a file, regular or irregular? That is, for which test -e "$LEGITIMATEPOSIXPATHNAME" cannot succeed? Clarification #1: pathnames. By "legal paths in POSIX", I mean ones that POSIX says are allowed, not ones that POSIX doesn't explicitly forbid. I've looked this up, and the POSIX specification calls them character strings that: Use only characters from the portable filename character set [a-zA-Z0-9._-] (cf. http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap03.html#tag_03_276); Do not begin with -; and Have length between 1 and NAME_MAX, a number unspecified by POSIX that is not less than 14. POSIX also allows that filesystems will probably be more relaxed than this, but it forbids the characters NUL and / from appearing in filenames. Note that such a paradigmatically UNIX filename as lost+found isn't FPF, according to this definition. There's another constant, PATH_MAX, whose use needs no further explanation. The ideal answer will use FPFs, but I'm interested in any example with filenames that POSIX doesn't expressly forbid. Clarification #2: impossibility. Obviously, pathnames normally could be bound to a file. But UNIX semantics will tell you that there are special places that couldn't normally have arbitrary files created, like the /dev directory. Are any such special places stipulated in POSIX? That is what the question is getting after.
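
    To make the definition quoted above concrete, here is a small Python checker for those "fully portable filename" rules as stated in the question (portable character set, no leading '-', 1 to 14 characters); it is only an illustration of the definition, and real filesystems usually allow much longer names, as the question notes.

    import re

    # First character: portable set minus '-'; up to 13 more from the full portable set.
    _PORTABLE = re.compile(r"^[A-Za-z0-9._][A-Za-z0-9._-]{0,13}$")

    def is_fully_portable_filename(name: str) -> bool:
        return bool(_PORTABLE.match(name))

    print(is_fully_portable_filename("resume.pdf"))   # True
    print(is_fully_portable_filename("lost+found"))   # False: '+' is not in the portable set
    print(is_fully_portable_filename("-rf"))          # False: leading '-'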

    Read the article

  • Everyone can access my Windows 7 Homegroup file shares - Even Windows XP computers.

    - by adriangrigore
    Hi, I have 3 computers in my network, two running Windows 7 and one running Windows XP. I've set up a homegroup on both Windows 7 computers. Also, all computers are in the same Workgroup. The problem is that one of the Windows 7 computers makes all shares accessible to the entire Workgroup instead of just sharing to the Homegroup as it should be. I created the file share in Windows 7 via right-click in the explorer, then click on "Share For" - "Homegroup (Read/Write)" (translated from German, so the actual wording may be different). Also, when I look at the file sharing properties of that folder, Windows Explorer informs me that Users must have a valid account and password for this Computer to access drive shares. Unfortunately this is not true. Being in the same Workgroup is enough to get access. Homegroup restrictions work as expected on my other Windows 7 computer. When trying to browse those shares from the XP computer, I get a dialog asking for a login and password. What might cause homegroup restrictions to fail and how can I fix this?

    Read the article

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are in much higher demand than others. The scenario is like this: I have uploaded my birthday video and shared it with all of my friends; I uploaded it to myproject.com and it was stored on one of the cluster nodes, which has a 100 Mbit connection. The problem is that once all of my friends want to download the file, they can't, since the bottleneck here is the 100 Mbit link, which is 15 MB per second, but I have 1000 friends and they can each only download at 15 KB per second. I am not taking into account that the HDD is serving the same files. My network infrastructure is as follows: a 1 Gbit server (client) connected to 4 storage-server nodes that have 100 Mbit connections. The 1 Gbit server can handle the traffic of 1000 users if one of the storage nodes can stream more than 15 MB per second to my 1 Gbit (client) server, and visitors will then stream directly from the client server instead of the storage nodes. I can do that by replicating the file onto 2 nodes. But I don't want to replicate all the files uploaded to my network, since that would cost much more. So I need a cloud-based system that will push files onto replicated nodes automatically when demand for them is high and, when demand is low, delete them from the other nodes so that they stay on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It is only able to replicate all of the files or none of them, but I need the cluster software to do this automatically. Any solutions? (Instead of recommending me Amazon S3.) S

    Read the article

  • What's a good solution for file-tagging in linux?

    - by julien
    I've been looking for a way to tag my files and search/filter them based on those tags. Here are my (updated) requirements: any file readable by the user can be tagged freely; a user can search for files matching one or several tags; files can be moved around without losing their previously associated tags; the system can be backed up easily; no dependencies on any desktop environment; if any GUI is involved, there must be a CLI fallback. I've been hoping for some basic filesystem & coreutils hackery to handle this, but I haven't thought about it hard enough yet. Meanwhile I'll review Beagle and MetaTracker, which have been mentioned here, and see how they perform. OK, so Beagle has huge GNOME dependencies, and Tracker is OK-ish, but still has some dependencies I don't like... I've been doing some more research, and the way to go could very well be extended file attributes. That's a native solution for most recent filesystems, but they aren't very well supported yet (most coreutils destroy them by default; cp, for example, needs the -a flag to preserve them). I would like to hear some thoughts on using them while I try my hand at some hacks myself, even though this might warrant a new question.
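
    A minimal sketch of the extended-attribute idea mentioned above, using Python's Linux-only os.setxattr/os.getxattr calls (available since Python 3.3). The attribute name "user.tags" and the comma-separated encoding are conventions chosen here for illustration, not a standard; and, as noted in the post, tools must be told to preserve xattrs (e.g. cp -a) or the tags are lost.

    import os

    def add_tags(path, *tags):
        try:
            current = set(os.getxattr(path, "user.tags").decode().split(","))
        except OSError:              # attribute not set yet
            current = set()
        merged = sorted((current | set(tags)) - {""})
        os.setxattr(path, "user.tags", ",".join(merged).encode())

    def files_with_tag(paths, tag):
        for path in paths:
            try:
                tags = os.getxattr(path, "user.tags").decode().split(",")
            except OSError:          # this file carries no tags
                continue
            if tag in tags:
                yield path

    # Example: add_tags("photo.jpg", "holiday", "2010"), then
    # list(files_with_tag(os.listdir("."), "holiday"))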

    Read the article

  • Does anybody know of existing code to read a mork file (Thunderbird Address Book)?

    - by bruceatk
    I have the need to read the Thunderbird address book on the fly. It is stored in a file format called Mork. Not a pleasant file format to read. I found a 1999 article explaining the file format. I would love to know if someone already has gone through this process and could make the code available. I found mork.pl by Jamie Zawinski (he worked on Netscape Navigator), but I was hoping for a .NET solution. I'm hoping StackOverflow will come to the rescue, because this just seems like a waste of my time to write something to read this file format when it should be so simple. I love the comments that Jamie put in his perl script. Here is my favorite part: # Let me make it clear that McCusker is a complete barking lunatic. # This is just about the stupidest file format I've ever seen.

    Read the article

  • replace file with hardlink to another file atomically

    - by Ben Clifford
    I have two directory entries, a and b. Before, a and b point to different inodes. Afterwards, I want b to point to the same inode as a does. I want this to be safe - by which I mean that if I fail somewhere, b either points to its original inode or to the a inode. Most especially, I don't want to end up with b disappearing. mv is atomic when overwriting. ln appears not to work when the destination already exists. So it looks like I can say: ln a tmp; mv tmp b - which in case of failure will leave a 'tmp' file around, which is undesirable but not a disaster. Is there a better way to do this? (What I'm actually trying to do is replace files that have identical content with a single inode containing that content, shared between all directory entries.)
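
    The same ln + mv idea can be expressed from Python, which at least lets you pick a unique temporary name so a crash leaves only one stray link behind; this is a sketch under the assumption that a and b live on the same filesystem (hard links cannot cross filesystems), and the helper name is hypothetical.

    import os
    import uuid

    def replace_with_hardlink(a, b):
        # Create an extra hard link to a's inode in b's directory under a random name,
        # then rename it over b. rename(2) is atomic, so b always refers either to its
        # old inode or to a's inode.
        tmp = os.path.join(os.path.dirname(os.path.abspath(b)), ".%s.tmp" % uuid.uuid4().hex)
        os.link(a, tmp)
        try:
            os.replace(tmp, b)   # atomic overwrite via rename(2)
        except Exception:
            os.unlink(tmp)       # best-effort cleanup if the rename itself failed
            raise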

    Read the article

  • Regex to match the first file in a rar archive file set in Python

    - by mridang
    I need to uncompress all the files in a directory, and for this I need to find the first file in the set. I'm currently doing this using a bunch of if statements and loops. Can I do this using a regex? Here's a list of files that I need to match: yes.rar yes.part1.rar yes.part01.rar yes.part001.rar yes.r01 yes.r001 These should NOT be matched: no.part2.rar no.part02.rar no.part002.rar no.part011.rar no.r002 no.r02 I found a similar regex in this thread, but it seems that Python doesn't support variable-length lookarounds. A single-line regex would be complicated, but I'll document it well and it's not a problem. It's just one of those problems you beat your head against. Thanks in advance guys. :)
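
    Since Python's re module indeed lacks variable-length lookbehind, one workable alternative is a tiny helper that uses two simple patterns instead of a single clever regex; a sketch (the function name is just illustrative), checked against the sample names above:

    import re

    _PART = re.compile(r"\.part(\d+)\.rar$", re.IGNORECASE)
    _RNN = re.compile(r"\.r(\d+)$", re.IGNORECASE)

    def is_first_volume(name: str) -> bool:
        m = _PART.search(name)
        if m:                                 # foo.partNN.rar: first only when NN == 1
            return int(m.group(1)) == 1
        if name.lower().endswith(".rar"):     # plain foo.rar
            return True
        m = _RNN.search(name)
        if m:                                 # foo.rNN: first only when NN == 1
            return int(m.group(1)) == 1
        return False

    # is_first_volume("yes.part001.rar") -> True, is_first_volume("no.r02") -> False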

    Read the article

  • Parse HTML file to grab all ID and Classes for a CSS file

    - by iamfriendly
    Hey all, a short while ago I'm fairly certain I came across an application (or perhaps a plugin for Coda - the IDE I use) which quickly parses an HTML document and then spits out all of the elements with IDs and classes for me to use in a CSS file. Having fully 'got into' zen coding - using the wonderful TEA plugin for Coda - I'm now hot on the heels of this app/plugin again. I've tried and failed miserably at hunting through Google, and have come up completely empty-handed. Does anyone know of anything which can do this? Happy New Year everyone!
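
    If no ready-made tool turns up, the parsing side is small enough to sketch with Python's standard html.parser: collect every id and class attribute and print an empty CSS rule for each. The input file name is a placeholder.

    from html.parser import HTMLParser

    class SelectorCollector(HTMLParser):
        # Collect every id and class attribute seen in an HTML document.
        def __init__(self):
            super().__init__()
            self.ids = set()
            self.classes = set()

        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                if name == "id" and value:
                    self.ids.add(value)
                elif name == "class" and value:
                    self.classes.update(value.split())

    collector = SelectorCollector()
    with open("index.html", encoding="utf-8") as fh:   # hypothetical input file
        collector.feed(fh.read())

    # Print an empty CSS rule for each selector found.
    for selector in sorted("#" + i for i in collector.ids) + sorted("." + c for c in collector.classes):
        print(selector + " {\n}\n")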

    Read the article

  • How to get file names using OpenFileDialog in .NET (1000+ file multiselect)

    - by Cole
    Maybe some of you have come across this before... I am opening files for parsing. I'm using OpenFileDialog, of course, but I'm limited to a buffer of 2048 on the .FileNames string. Thus, I can only select a few hundred files. This is OK for most cases; however, for example, in one case I have 1400 files to open. Do you know a way to do this with the open file dialog? I just want the string array of .FileNames, which I pass to the parser class. I was also thinking of offering a FolderBrowserDialog option and then using some other method to just loop through all the files in a directory, like the DirectoryInfo class. I'd do this as a last resort if I can't have an all-in-one solution.

    Read the article
