Search Results

Search found 5021 results on 201 pages for 'limit'.

Page 45/201 | < Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >

  • Javascript eval limits

    - by user117701
    Is there a limit to JavaScript's eval, like in length? I'm trying to build an app where you can store JS code in the DB, which you can later load and eval in order to execute it, but I'm hitting a limit. First of all, the code has to all be in one line; any multiline statements are not executed. Next, I'm reaching a limit in length (I guess). If I execute the code manually, it works, but put that same code in the DB, load it via AJAX, and try to execute it, and it fails. Any ideas why?
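    In practice, eval itself has no meaningful length limit and handles multi-line code fine; symptoms like these usually mean the newlines are being lost or mangled somewhere between the DB and the browser. A minimal sketch to verify this (the JSON.parse round-trip stands in for the AJAX load; the variable names are illustrative):

        var stored = '"var x = 1;\\nvar y = 2;\\nx + y;"'; // as it might arrive from the server
        var code = JSON.parse(stored);                     // restores real newlines
        var result = eval(code);                           // multi-line code evaluates fine
        console.log(result);                               // 3

    If the code works when pasted manually but fails after the AJAX round-trip, compare the two strings character by character; an unescaped quote or a stripped newline is the usual culprit.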

    Read the article

  • What is a good standard for code width?

    - by BillyONeal
    Hello everyone :) I've heard in several places that it's bad to have code that is too wide onscreen. For example:

        for (std::vector<EnumServiceInformation>::const_iterator currentService = services.begin(); currentService != services.end(); currentService++)

    However, I've heard many arguments for 80-character-wide limits. I'm assuming this 80-character limit comes from the traditional command prompt, which is typically 80 characters wide. However, most of us are working on something much better than a typical command prompt, and I feel that an 80-character limit encourages variable names that are far too short and do not describe what the variable is used for. What is a reasonable limit for a new project with no existing code-width standard?

    Read the article

  • select similar value from MySQL and order the result

    - by mathew
    How do I order this result?

        $range = 5;   // you'll be selecting around this range
        $min = $rank - $range;
        $max = $rank + $range;
        $limit = 10;  // max number of results you want

        $result = mysql_query("select * from table where rank between $min and $max limit $limit");
        while ($row = mysql_fetch_array($result)) {
            echo $row['name']."&nbsp;-&nbsp;".$row['rank']."<br>";
        }
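    A minimal way to get ordered rows, assuming you simply want them sorted by rank, is to add an ORDER BY clause before the LIMIT:

        select * from table where rank between $min and $max order by rank limit $limit

    If you instead want the rows closest to $rank first, ordering by distance from it also works: order by abs(rank - $rank). Both forms are plain MySQL and drop straight into the mysql_query() call above.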

    Read the article

  • MySQL "IS IN" equivalent?

    - by nute
    A while ago I worked on an MS-SQL project and I remember an "IS IN" thing. I tried it on a MySQL project and it did not work. Is there an equivalent? Workaround? Here is the full query I am trying to run:

        SELECT *
        FROM product_product, product_viewhistory, product_xref
        WHERE (
            (product_viewhistory.productId = product_xref.product_id_1 AND product_xref.product_id_2 = product_product.id)
            OR
            (product_viewhistory.productId = product_xref.product_id_2 AND product_xref.product_id_1 = product_product.id)
        )
        AND product_product.id IS IN (
            SELECT DISTINCT pvh.productId
            FROM product_viewhistory AS pvh
            WHERE pvh.cookieId = :cookieId
            ORDER BY pvh.viewTime DESC
            LIMIT 10
        )
        AND product_viewhistory.cookieId = :cookieId
        AND product_product.outofstock = 'N'
        ORDER BY product_xref.hits DESC
        LIMIT 10

    It's pretty big... but the part I am interested in is:

        AND product_product.id IS IN (
            SELECT DISTINCT pvh.productId
            FROM product_viewhistory AS pvh
            WHERE pvh.cookieId = :cookieId
            ORDER BY pvh.viewTime DESC
            LIMIT 10
        )

    Which basically says I want the products to be in the "top 10" of that sub-query. How would you achieve that with MySQL (while trying to be efficient)?
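    For what it's worth, the plain SQL operator is just IN (there is no IS IN in MySQL), and MySQL additionally refuses a LIMIT inside an IN subquery ("This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'"). A common workaround, sketched here against the same schema, is to wrap the limited subquery in a derived table:

        AND product_product.id IN (
            SELECT productId FROM (
                SELECT DISTINCT pvh.productId
                FROM product_viewhistory AS pvh
                WHERE pvh.cookieId = :cookieId
                ORDER BY pvh.viewTime DESC
                LIMIT 10
            ) AS recent_views
        )

    The derived table (recent_views is an illustrative alias) materializes the top 10 first, so the outer IN sees a plain list of ids.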

    Read the article

  • Multiple HTTP requests - Rails

    - by bradleyg
    My application checks a number of domains to see if they are valid (approx 100). I have the following code to check a single domain:

        def self.test_url uri, limit = 10
          if limit == 0
            return get_error_messages("001")
          end
          begin
            url = URI.parse(uri)
            response = Net::HTTP.start(url.host, url.port).request_head('/')
          rescue SocketError => e
            return get_error_messages("002")
          end
          case response
          when Net::HTTPRedirection then test_url(response['location'], limit - 1)
          else return get_error_messages(response.code)
          end
        end

    The code checks for the response code while taking redirects into account. This works fine. The only problem is that when I put this in a loop, I want it to run in parallel, so I don't have to wait for domain 1 to respond before I can request domain 2. I have managed this in PHP using curl_multi to run the requests in parallel. Is there a similar thing I can do in Rails?
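    Ruby's standard threads are one way to get the parallelism, since each thread spends almost all its time blocked on network I/O. A minimal sketch reusing the test_url method above (domains is assumed to be the array of URLs to check):

        threads = domains.map do |domain|
          Thread.new { [domain, test_url(domain)] }
        end
        results = threads.map(&:value)  # blocks until each thread finishes

    Thread#value joins the thread and returns the block's result, so results ends up as [domain, status] pairs. For hundreds of domains it may be worth batching the threads, or reaching for a gem such as typhoeus, which wraps libcurl's multi interface much like PHP's curl_multi.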

    Read the article

  • postgres min function performance

    - by wutzebaer
    Hi, I need the lowest value of runnerId. This query:

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794';

    takes 80 ms (1968 result rows). This one:

        SELECT min("runnerId") FROM betlog WHERE "marketId" = '107416794';

    takes 1600 ms. Is there a faster way to find the minimum, or should I calc the min in my Java program?

        "Result  (cost=100.88..100.89 rows=1 width=0)"
        "  InitPlan 1 (returns $0)"
        "    ->  Limit  (cost=0.00..100.88 rows=1 width=9)"
        "          ->  Index Scan using runneridindex on betlog  (cost=0.00..410066.33 rows=4065 width=9)"
        "                Index Cond: ("runnerId" IS NOT NULL)"
        "                Filter: ("marketId" = 107416794::bigint)"

        CREATE INDEX marketidindex
          ON betlog
          USING btree
          ("marketId" COLLATE pg_catalog."default");

    Another idea:

        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId" LIMIT 1
        -- >1600 ms
        SELECT "runnerId" FROM betlog WHERE "marketId" = '107416794' ORDER BY "runnerId"
        -- >>100 ms

    How can a LIMIT slow the query down?
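    The plan explains why: with LIMIT 1 the planner walks the whole-table runnerId index in order, hoping to hit a row with the matching marketId early, and for this marketId it doesn't. A composite index lets it jump straight to the smallest runnerId within the market; a sketch (the index name is illustrative):

        CREATE INDEX market_runner_idx ON betlog USING btree ("marketId", "runnerId");

    With that in place, both min("runnerId") and the ORDER BY ... LIMIT 1 form can be answered from the first matching index entry instead of a long scan.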

    Read the article

  • Template or function arguments as implementation details in doxygen?

    - by Vincent
    In doxygen, is there any common way to specify that some C++ template parameters or function parameters are implementation details and should not be specified by the user? For example, a template parameter used as a recursion level counter in a metaprogramming technique, or a SFINAE parameter in a function. For example:

        /// \brief Do something
        /// \tparam MyFlag A flag...
        /// \tparam Limit Recursion limit
        /// \tparam Current Recursion level counter. SHOULD NOT BE EXPLICITLY SPECIFIED !!!
        template<bool MyFlag, unsigned int Limit, unsigned int Current = 0>
        void myFunction();

    Is there any normalized doxygen option equivalent to "SHOULD NOT BE EXPLICITLY SPECIFIED !!!"?
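    There is no dedicated doxygen tag for this, but one possible approach is to hide the parameter's documentation behind a conditional block with \cond / \endcond, so it only appears when ENABLED_SECTIONS includes the named section (INTERNAL here is an illustrative section name, not a doxygen built-in):

        /// \brief Do something
        /// \tparam MyFlag A flag...
        /// \tparam Limit Recursion limit
        /// \cond INTERNAL
        /// \tparam Current Recursion level counter (implementation detail).
        /// \endcond
        template<bool MyFlag, unsigned int Limit, unsigned int Current = 0>
        void myFunction();

    This is a sketch rather than a standard convention; the related \internal command is also worth checking against your doxygen version.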

    Read the article

  • Secure way to run other people code (sandbox) on my server?

    - by amikazmi
    I want to make a web service that runs other people's code locally... Naturally, I want to limit their code's access to a certain "sandbox" directory, and ensure they won't be able to connect to other parts of my server (DB, main webserver, etc). What's the best way to do it?

    Run VMware/VirtualBox:
    (+) I guess it's as secure as it gets... even if someone manages to "hack", they only hack the guest machine
    (+) can limit the cpu & memory the process uses
    (+) easy to set up... just create the VM
    (-) harder to "connect" the sandbox directory from the host to the guest
    (-) wastes extra memory and cpu for managing the VM

    Run as an underprivileged user:
    (+) doesn't waste extra resources
    (+) sandbox directory is just a plain directory
    (?) can't limit cpu and memory?
    (?) don't know if it's secure enough...

    Any other way? The server runs Fedora Core 8; the "other" code is written in Java & C++.
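    On the "can't limit cpu and memory?" point: an unprivileged user can in fact be constrained with standard POSIX resource limits. A rough sketch, assuming a dedicated account named sandbox and a binary already placed in the sandbox directory (both names are illustrative):

        # cap CPU time (seconds), virtual memory (KB), and open files, then run the code
        sudo -u sandbox sh -c 'ulimit -t 10 -v 262144 -n 64; cd /srv/sandbox && ./untrusted_binary'

    ulimit applies per process, so a fork bomb also needs -u to cap the process count; for anything stronger (network isolation in particular), a chroot or the VM route is still the safer bet.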

    Read the article

  • Why doesn't my MySQL DISTINCT work?

    - by belaz
    Hello, why do the first two queries below return duplicate member_id values, but not the third? I need the second query to work with DISTINCT. Any time I run a GROUP BY, the query is incredibly slow and the result set doesn't match DISTINCT (the values are wrong).

        SELECT member_id, id
        FROM (SELECT * FROM table1 ORDER BY created_at desc) as u
        LIMIT 5

        +-----------+--------+
        | member_id | id     |
        +-----------+--------+
        | 11333     | 313095 |
        | 141831    | 313094 |
        | 141831    | 313093 |
        | 12013     | 313092 |
        | 60821     | 313091 |
        +-----------+--------+

        SELECT distinct member_id, id
        FROM (SELECT * FROM table1 ORDER BY created_at desc) as u
        LIMIT 5

        +-----------+--------+
        | member_id | id     |
        +-----------+--------+
        | 11333     | 313095 |
        | 141831    | 313094 |
        | 141831    | 313093 |
        | 12013     | 313092 |
        | 60821     | 313091 |
        +-----------+--------+

        SELECT distinct member_id
        FROM (SELECT * FROM table1 ORDER BY created_at desc) as u
        LIMIT 5

        +-----------+
        | member_id |
        +-----------+
        | 11333     |
        | 141831    |
        | 12013     |
        | 60821     |
        | 64980     |
        +-----------+

    My table sample:

        CREATE TABLE `table1` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `member_id` int(11) NOT NULL,
          `s_type_id` int(11) NOT NULL,
          `created_at` datetime DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `s_FI_1` (`member_id`),
          KEY `s_FI_2` (`s_type_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=313096 DEFAULT CHARSET=utf8;
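    The behaviour itself is by design: DISTINCT applies to the whole selected row, and since id is unique, every (member_id, id) pair is already distinct; only the third query, which selects member_id alone, can collapse duplicates. A sketch of a query that keeps one id per member, assuming id grows with created_at (it is AUTO_INCREMENT here, so the highest id is the latest row):

        SELECT member_id, MAX(id) AS latest_id
        FROM table1
        GROUP BY member_id
        ORDER BY latest_id DESC
        LIMIT 5;

    The existing s_FI_1 key on member_id should let this GROUP BY run off the index, because InnoDB appends the primary key (id) to every secondary index.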

    Read the article

  • mysql result for pagination

    - by Reteras Remus
    The query is:

        SELECT * FROM `news` ORDER BY `id` LIMIT ($curr_page * 5), ( ($curr_page * 5) + 5 )

    where $curr_page is a PHP variable getting its value from $_GET['page']. I want to make a pagination (5 news items on each page), but I don't know why MySQL is returning extra values. On the first page the result is OK: $curr_page = 0, and the query is

        SELECT * FROM `news` ORDER BY `id` LIMIT 0, 5

    But on the second page the result has extra news items, 10 instead of 5. The query on the second page:

        SELECT * FROM `news` ORDER BY `id` LIMIT 5, 10

    What's wrong? Why does the result have 10 values instead of 5? Thank you!
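    The second argument of MySQL's LIMIT is a row count, not an end position: LIMIT 5, 10 means "skip 5 rows, then return up to 10". For 5 items per page the count stays constant and only the offset moves, roughly:

        SELECT * FROM `news` ORDER BY `id` LIMIT 0, 5   -- page 0
        SELECT * FROM `news` ORDER BY `id` LIMIT 5, 5   -- page 1
        SELECT * FROM `news` ORDER BY `id` LIMIT 10, 5  -- page 2

    i.e. in the PHP above, LIMIT ($curr_page * 5), 5.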

    Read the article

  • More than 100,000 articles!

    - by developerit
    In one month we already got more than 100,000 articles, and we continue to crawl! We plan on hitting 250,000 total articles next month. Due to the large amount of data we are gathering, we are planning on updating our SQL stored procedures to improve performance. We may be migrating to SQL Server 2008 Enterprise, as we are currently running on SQL Server 2005 Express Edition… We are at 400 MB of data, getting closer and closer to the 2 GB limit. Stay tuned for more info, and browse daily fresh articles about web development.

    Read the article

  • SQL SERVER – Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage

    - by pinaldave
    I have mentioned several times on this blog that the best part of blogging is the questions I receive from readers. They are often very interesting, and they give me a good idea of what other readers might be thinking as well. After reading my earlier article Simple Example to Configure Resource Governor – Introduction to Resource Governor – I received an email from a reader and we exchanged a few emails. After exchanging emails we both figured out what was going on. It was indeed interesting, and the reader suggested that I should blog about it. I asked for permission to publish his name, but he does not like the attention, so we will just call him Jeff. I have converted our emails into a chat for easy consumption.

    Jeff: Your script does not work at all. I think there is a bug in SQL Server.
    Pinal: Would you please explain in detail?
    Jeff: Your code does not limit the CPU usage.
    Pinal: How did you measure it?
    Jeff: Well, we have third party tools for it, but let us say I have limited the resources for Reporting Services and used the script described in your blog. After that I ran only the reporting service workload; the CPU usage still goes up to 100% and is not limited to 30% as described in your script. Clearly something is wrong somewhere.
    Pinal: Did you say you ONLY ran the reporting server load?
    Jeff: Yeah, to validate, I ran ONLY the reporting server load, and the CPU did not throttle at 30% as per your script.
    Pinal: Oh! I get it. Here is the answer – CAP_CPU_PERCENT = 30. Use it.
    Jeff: What is that? I think your earlier script says it will throttle the Reporting Service workload and the Application/OLTP workload and balance them.
    Pinal: Exactly, that is correct.
    Jeff: You need to write more in email, buddy! Just like your blogs, your answers do not make sense! No offense!
    Pinal: Hmm... feedback well taken. Let me try again. In SQL Server 2012 there are a few enhancements to the SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples.

    Configuration: [Read Earlier Post]
    Reporting Workload: MIN_CPU_PERCENT=0, MAX_CPU_PERCENT=30
    Application/OLTP Workload: MIN_CPU_PERCENT=50, MAX_CPU_PERCENT=100

    Example 1: If there is only the Reporting Workload on the server, SQL Server will not limit its CPU usage to 30%; the instance will use all available CPU (if needed). In other words, in this scenario it will use more than 30% CPU.

    Example 2: If there is a Reporting Workload and a heavy Application/OLTP workload, SQL Server will allocate a maximum of 30% of CPU resources to the Reporting Workload and allocate the remaining resources to the heavy application/OLTP workload.

    The reason for this enhancement is better utilization of resources. Think about it: if there is only a single workload, which we have limited to a max CPU usage of 30%, the remaining unused CPU is wasted. In this situation SQL Server allows the workload to use more than 30% of the resources, leading to overall improved/optimized performance. However, in the case of multiple workloads where lots of resources are needed, the limits specified in MAX_CPU_PERCENT are honored.

    Example 3: If there is a situation where the max CPU limit has to be enforced irrespective of the workload and the enhanced algorithm, the keyword CAP_CPU_PERCENT is essential. It specifies a hard cap on the CPU bandwidth that all requests in the resource pool will receive.
    In our case, it will never let CPU usage for the reporting workload go over the cap. You can use the keyword as follows:

        -- Creating Resource Pool for Report Server
        CREATE RESOURCE POOL ReportServerPool
        WITH (
            MIN_CPU_PERCENT = 0,
            MAX_CPU_PERCENT = 30,
            CAP_CPU_PERCENT = 40,
            MIN_MEMORY_PERCENT = 0,
            MAX_MEMORY_PERCENT = 30)
        GO

    Notice that there is MAX_CPU_PERCENT=30 and CAP_CPU_PERCENT=40. What this means is that when the SQL Server instance is under heavy load from different workloads, this pool will use a maximum of 30% CPU. When the instance is not under load it may go over the 30% limit; however, as CAP_CPU_PERCENT is set to 40, it will never go over 40% in any case. CAP_CPU_PERCENT puts a hard limit on the resource usage of a workload.

    Jeff: Nice, Pinal. You should blog about it.
    [A day passes by]
    Pinal: Jeff, it is done! Click here to read it.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Service Broker
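    As an aside, a resource pool on its own classifies nothing; for completeness, here is a hedged sketch of the remaining wiring for the pool defined above (the group name, function name, and login are illustrative, not from the original article):

        CREATE WORKLOAD GROUP ReportServerGroup USING ReportServerPool;
        GO
        CREATE FUNCTION dbo.fnClassifier() RETURNS sysname WITH SCHEMABINDING AS
        BEGIN
            -- route the reporting login into the capped pool; everything else defaults
            RETURN CASE WHEN SUSER_SNAME() = 'ReportUser' THEN N'ReportServerGroup' END;
        END
        GO
        ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
        ALTER RESOURCE GOVERNOR RECONFIGURE;
        GO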

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual.

    ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file.

    There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits.

    We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where:

    - Hundreds of users may be executing the code on large shared systems. (32-bits use less memory and bus bandwidth, and on sparc run just as fast as 64-bit code otherwise.)
    - The code is complex and finely tuned, and the original authors may no longer be available.
    - The code is critical production code, which was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue.

    Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.

    Design

    The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them.

    - The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files).
    - A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other.
    - Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects.

    As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SUNW_SHF_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary.

    If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
    This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost.

    The primary and ancillary object together represent a single logical object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files.) Among the benefits of this approach are:

    - There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used.
    - The section contents are identical in either case — there is no need to alter data to accommodate multiple files.
    - It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification.
    - The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted, however, that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful.

    Using Ancillary Objects (From the Solaris Linker and Libraries Guide)

    By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so they have no impact on memory use or other aspects of runtime performance no matter their size.

    For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections:

    - To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks.
    - To support fine-grained debugging of highly optimized code, which requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes.
    - The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object.
    - To limit the exposure of internal implementation details.

    Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option.

        $ ld ... -z ancillary[=outfile] ...

    By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by supplying an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions:

    - All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object.
    - All remaining non-allocable sections are written to the ancillary object.
    - The following non-allocable sections are written to both the primary object and the ancillary object:

      .shstrtab        The section name string table.
      .symtab          The full non-dynamic symbol table.
      .symtab_shndx    The symbol table extended index section associated with .symtab.
      .strtab          The non-dynamic string table associated with .symtab.
      .SUNW_ancillary  Contains the information required to identify the primary and ancillary objects, and to identify the object being examined.

    The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either the primary or an ancillary object.

    The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.

        $ cat hello.c
        #include <stdio.h>
        int
        main(int argc, char **argv)
        {
            (void) printf("hello, world\n");
            return (0);
        }
        $ cc -g -zancillary hello.c
        $ file a.out a.out.anc
        a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
        a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
        $ ./a.out
        hello, world

    The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
    Index  Section Name     Type            Primary Flags      Ancillary Flags                Primary Size  Ancillary Size
    13     .text            PROGBITS        ALLOC EXECINSTR    ALLOC EXECINSTR SUNW_ABSENT    0x131         0
    20     .data            PROGBITS        WRITE ALLOC        WRITE ALLOC SUNW_ABSENT        0x4c          0
    21     .symtab          SYMTAB          0                  0                              0x450         0x450
    22     .strtab          STRTAB          STRINGS            STRINGS                        0x1ad         0x1ad
    24     .debug_info      PROGBITS        SUNW_ABSENT        0                              0             0x1a7
    28     .shstrtab        STRTAB          STRINGS            STRINGS                        0x118         0x118
    29     .SUNW_ancillary  SUNW_ancillary  0                  0                              0x30          0x30

    The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab.

    It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.

        $ elfdump -T SUNW_ancillary a.out a.out.anc

        a.out:
        Ancillary Section:  .SUNW_ancillary
             index  tag                  value
               [0]  ANC_SUNW_CHECKSUM    0x8724
               [1]  ANC_SUNW_MEMBER      0x1        a.out
               [2]  ANC_SUNW_CHECKSUM    0x8724
               [3]  ANC_SUNW_MEMBER      0x1a3      a.out.anc
               [4]  ANC_SUNW_CHECKSUM    0xfbe2
               [5]  ANC_SUNW_NULL        0

        a.out.anc:
        Ancillary Section:  .SUNW_ancillary
             index  tag                  value
               [0]  ANC_SUNW_CHECKSUM    0xfbe2
               [1]  ANC_SUNW_MEMBER      0x1        a.out
               [2]  ANC_SUNW_CHECKSUM    0x8724
               [3]  ANC_SUNW_MEMBER      0x1a3      a.out.anc
               [4]  ANC_SUNW_CHECKSUM    0xfbe2
               [5]  ANC_SUNW_NULL        0

    The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined.

    - The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects.
    - The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files.
    - It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows.

    Debugger Access and Use of Ancillary Objects

    Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files.
    1. Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and it contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined.
    2. Create a section header array in memory, using the section header array from the object being examined as an initial template.
    3. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.
    4. The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single-file case, to access and control the running program.

    Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.

    ELF Implementation Details (From the Solaris Linker and Libraries Guide)

    To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.

    Part IV ELF Application Binary Interface
    Chapter 13: Object File Format

    Object File Format

    Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.

    e_type
    Identifies the object file type, as listed in the following table.

    Name               Value   Meaning
    ET_NONE            0       No file type
    ET_REL             1       Relocatable file
    ET_EXEC            2       Executable file
    ET_DYN             3       Shared object file
    ET_CORE            4       Core file
    ET_LOSUNW          0xfefe  Start operating system specific range
    ET_SUNW_ANCILLARY  0xfefe  Ancillary object file
    ET_HISUNW          0xfefd  End operating system specific range
    ET_LOPROC          0xff00  Start processor-specific range
    ET_HIPROC          0xffff  End processor-specific range

    Sections

    Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.

    ...
    sh_type
    Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.

    sh_flags
    Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.
    ...

    Table 13-5 ELF Section Types, sh_type

    Name                Value
    . . .
    SHT_LOSUNW          0x6fffffee
    SHT_SUNW_ancillary  0x6fffffee
    . . .

    SHT_LOSUNW - SHT_HISUNW
    Values in this inclusive range are reserved for Oracle Solaris OS semantics.

    SHT_SUNW_ANCILLARY
    Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.
    ...

    Table 13-8 ELF Section Attribute Flags

    Name                Value
    . . .
    SHF_MASKOS          0x0ff00000
    SHF_SUNW_NODISCARD  0x00100000
    SHF_SUNW_ABSENT     0x00200000
    SHF_SUNW_PRIMARY    0x00400000
    SHF_MASKPROC        0xf0000000
    . . .

    SHF_SUNW_ABSENT
    Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object, while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.

    SHF_SUNW_PRIMARY
    The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status.
    ...

    Two members in the section header, sh_link and sh_info, hold special information, depending on section type.

    Table 13-9 ELF sh_link and sh_info Interpretation

    sh_type             sh_link                                                 sh_info
    . . .
    SHT_SUNW_ANCILLARY  The section header index of the associated string       0
                        table.
    . . .

    Special Sections

    Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

    Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

    Table 13-10 ELF Special Sections

    Name             Type                Attribute
    . . .
    .SUNW_ancillary  SHT_SUNW_ancillary  None
    . . .

    .SUNW_ancillary
    Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.
    ...

    Ancillary Section

    Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.

    In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
        typedef struct {
                Elf32_Word      a_tag;
                union {
                        Elf32_Word      a_val;
                        Elf32_Addr      a_ptr;
                } a_un;
        } Elf32_Ancillary;

        typedef struct {
                Elf64_Xword     a_tag;
                union {
                        Elf64_Xword     a_val;
                        Elf64_Addr      a_ptr;
                } a_un;
        } Elf64_Ancillary;

    For each object with this type, a_tag controls the interpretation of a_un.

    a_val
    These objects represent integer values with various interpretations.

    a_ptr
    These objects represent file offsets or addresses.

    The following ancillary tags exist.

    Table 13-NEW1 ELF Ancillary Array Tags

    Name               Value  a_un
    ANC_SUNW_NULL      0      Ignored
    ANC_SUNW_CHECKSUM  1      a_val
    ANC_SUNW_MEMBER    2      a_ptr

    ANC_SUNW_NULL
    Marks the end of the ancillary section.

    ANC_SUNW_CHECKSUM
    Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.

    ANC_SUNW_MEMBER
    Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.

    An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:

    Tag                Meaning
    ANC_SUNW_CHECKSUM  Checksum of this object
    ANC_SUNW_MEMBER    Name of object #1
    ANC_SUNW_CHECKSUM  Checksum for object #1
    . . .
    ANC_SUNW_MEMBER    Name of object N
    ANC_SUNW_CHECKSUM  Checksum for object N
    ANC_SUNW_NULL

    An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.

    Related Other Work

    The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor.

    It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU-style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections, when using this approach.

    The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently, I worked on an issue with the list view threshold. The problem: when the user navigates to a view of the document library, it displays the error message "list view threshold is exceeded", yet the view shows no data. The list view threshold is 5000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set. So although the view returns no results, calculating that (empty) result requires the database to read more than 5000 items. To fix the issue, you need to create an index for the column that you use in the filter for the view. Let's look at the problem in detail. You can download a solution to replicate this issue here.

    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.
    2. Change the list view threshold for the web application from 5000 to 2000, so that the problem can be shown without loading more than 5000 items into the list.
    3. Go to the page that displays the approved view of the Loan application document set. It displays the error message although the view does not return any data.
    4. To get around this, you need to create an index column. Go to list settings and click on Indexed columns.
    5. Click on the "Create a new index" link.
    6. Select the LoanStatus field, as I use this field as the filter to create the view.
    7. After the index is created, you can access the approved view; as you can see, it does not return any data.

    Notes:
    List View Threshold: specifies the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.

    References:
    SharePoint lists V: Techniques for managing large lists
    Manage large SharePoint lists for better performance
    http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx

    Read the article

  • Website Access...DNS, ISP, issue?

    - by sublet
    This isn't so much a code issue as it might be an issue with my ISP. For some reason, when I visit a site very often, like one I manage or write stories on, it will just stop pulling data down after a while. It's very random when it happens, but it probably happens once a week. It affects everyone who is accessing the site from this connection, and I can access other sites no problem. Also, if I go from the office back home, which is right down the street, and access the site, it is fine. I'm using Comcast in both locations. It's almost as if I have a limit on requests to each site and have hit my limit, so it blocks the site for a while. Anybody have any clue what this might be?

    Read the article

  • Improve Your Google Search Skills [Infographic]

    - by Jason Fitzpatrick
    Don’t limit yourself to just plugging simple search terms into Google; check out this infographic and learn a search-string trick or two. You don’t need to limit yourself to searching for simple strings; Google supports all manner of handy search tricks. If you want to search just HowToGeek.com’s archive of XBMC articles, for example, you can plug in site:howtogeek.com XBMC to search our site. Get More Out of Google [HackCollege via Mashable]

    Read the article

  • Should I implement slugs with my already fairly long URLs?

    - by Earlz
    I'm considering implementing slugs in my blog. My blog uses MongoDB. One of the side effects of using MongoDB is that it uses relatively long hex string IDs. Example before: http://lastyearswishes.com/blog/view/5070f025d1f1a5760fdfafac after: http://lastyearswishes.com/blog/view/5070f025d1f1a5760fdfafac/improvements-on-barelymvc Of course, that's a relatively short title; I have some longer ones, but I intend to cap slugs at a reasonable maximum length. At what point does a URL become so long that it hurts SEO instead of improving it? In this case, should I leave my URLs alone, or add slugs?
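    For the length cap, the usual approach is to normalize and truncate at slug-generation time. A rough illustration in Python (the 60-character cap is an arbitrary assumption, not a known SEO threshold):

        import re
        import unicodedata

        def slugify(title, max_len=60):
            # fold accents to ASCII, lowercase, collapse non-alphanumerics to hyphens
            s = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
            s = re.sub(r"[^a-z0-9]+", "-", s.lower()).strip("-")
            # truncate without leaving a trailing hyphen
            return s[:max_len].rstrip("-")

        print(slugify("Improvements on BarelyMVC"))  # improvements-on-barelymvc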

    Read the article

  • Subterranean IL: The ThreadLocal type

    - by Simon Cooper
    I came across ThreadLocal<T> while I was researching ConcurrentBag. To look at it, it doesn't really make much sense. What are all those extra Cn classes doing in there? Why is there a GenericHolder<T,U,V,W> class? What's going on? However, digging deeper, it's a rather ingenious solution to a tricky problem.

    Thread statics

    Declaring that a variable is thread static, that is, values assigned to and read from the field are specific to the thread doing the reading, is quite easy in .NET:

        [ThreadStatic]
        private static string s_ThreadStaticField;

    ThreadStaticAttribute is not a pseudo-custom attribute; it is compiled as a normal attribute, but the CLR has in-built magic, activated by that attribute, to redirect accesses to the field based on the executing thread's identity. ThreadStaticAttribute provides a simple solution when you want to use a single field as thread-static. What if you want to create an arbitrary number of thread-static variables at runtime? Thread-static fields can only be declared, and are fixed, at compile time. Prior to .NET 4, you only had one solution - thread local data slots. This is a lesser-known function of Thread that has existed since .NET 1.1:

        LocalDataStoreSlot threadSlot = Thread.AllocateNamedDataSlot("slot1");
        string value = "foo";
        Thread.SetData(threadSlot, value);
        string gettedValue = (string)Thread.GetData(threadSlot);

    Each instance of LocalDataStoreSlot mediates access to a single slot, and each slot acts like a separate thread-static field. As you can see, using thread data slots is quite cumbersome. You need to keep track of LocalDataStoreSlot objects, it's not obvious how instances of LocalDataStoreSlot correspond to individual thread-static variables, and it's not type safe. It's also relatively slow and complicated; the internal implementation consists of a whole series of classes hanging off a single thread-static field in Thread itself, using various arrays, lists, and locks for synchronization. ThreadLocal<T> is far simpler and easier to use.

    ThreadLocal

    ThreadLocal provides an abstraction around thread-static fields that allows it to be used just like any other class; it can be used as a replacement for a thread-static field, it can be used in a List<ThreadLocal<T>>, you can create as many as you need at runtime. So what does it do? It can't just have an instance-specific thread-static field, because thread-static fields have to be declared as static, and so are shared between all instances of the declaring type. There's something else going on here.

    The values stored in instances of ThreadLocal<T> are stored in instantiations of the GenericHolder<T,U,V,W> class, which contains a single ThreadStatic field (s_value) to store the actual value. This class is then instantiated with various combinations of the Cn types for generic arguments. In .NET, each separate instantiation of a generic type has its own static state. For example, GenericHolder<int,C0,C1,C2> has a completely separate s_value field to GenericHolder<int,C1,C14,C1>. This feature is (ab)used by ThreadLocal to emulate instance thread-static fields. Every time an instance of ThreadLocal is constructed, it is assigned a unique number from the static s_currentTypeId field using Interlocked.Increment, in the FindNextTypeIndex method. The hexadecimal representation of that number then defines the specific Cn types that instantiate the GenericHolder class. That instantiation is therefore 'owned' by that instance of ThreadLocal.
    This gives each instance of ThreadLocal its own ThreadStatic field through a specific unique instantiation of the GenericHolder class. Although GenericHolder has four type variables, the first one is always instantiated to the type stored in the ThreadLocal<T>. This gives three free type variables, each of which can be instantiated to one of 16 types (C0 to C15). This puts an upper limit of 4096 (16³) on the number of ThreadLocal<T> instances that can be created for each value of T. That is, there can be a maximum of 4096 instances of ThreadLocal<string>, and separately a maximum of 4096 instances of ThreadLocal<object>, etc. However, there is an upper limit of 16384 enforced on the total number of ThreadLocal instances in the AppDomain. This is to stop too much memory being used by thousands of instantiations of GenericHolder<T,U,V,W>, as once a type is loaded into an AppDomain it cannot be unloaded, and will continue to sit there taking up memory until the AppDomain is unloaded. The total number of ThreadLocal instances created is tracked by the ThreadLocalGlobalCounter class.

    So what happens when either limit is reached? Firstly, to try and stop the limits being reached, ThreadLocal recycles the GenericHolder type indexes of instances that get disposed, using the s_availableIndices concurrent stack. This allows GenericHolder instantiations of disposed ThreadLocal instances to be re-used. But if there aren't any available instantiations, then ThreadLocal falls back on a standard thread local slot using TLSHolder. This makes it very important to dispose of your ThreadLocal instances if you'll be using lots of them, so the type instantiations can be recycled.

    The previous way of creating arbitrary thread-static variables, thread data slots, was slow, clunky, and hard to use. In comparison, ThreadLocal can be used just like any other type, and each instance appears from the outside to be a non-static thread-static variable. It does this by using the CLR type system to assign each instance of ThreadLocal its own instantiated type containing a thread-static field, and so delegating a lot of the bookkeeping that thread data slots had to do to the CLR type system itself! That's a very clever use of the CLR type system.
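    For reference, a minimal usage sketch (plain .NET 4 C#) showing the per-thread isolation and the Dispose call the article argues for:

        using System;
        using System.Threading;

        class ThreadLocalDemo
        {
            static void Main()
            {
                var local = new ThreadLocal<int>(() => 0);
                // each thread gets its own copy initialized to 0, so both print 1
                ThreadStart work = () => { local.Value++; Console.WriteLine(local.Value); };
                var t1 = new Thread(work);
                var t2 = new Thread(work);
                t1.Start(); t2.Start();
                t1.Join(); t2.Join();
                local.Dispose(); // frees the GenericHolder instantiation for re-use
            }
        }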

    Read the article

  • Shared hosting bandwidth limits

    - by mike
    I have a shared hosting account with a 20GB monthly bandwidth limit. I have exceeded my monthly limit and according to my host my counter is never reset, they say they use a continuous 30 day counter. So for example, I make payment on the 1st of each month, say I use 20GB in the last week of the month. My bandwidth counter is not reset on the 1st of the new month and my bandwidth will only become available in the last week of the new month. Is this common practice by shared hosting companies? Sounds a bit shady to me. Surely my counters should be reset on the 1st of every month when I make payment and 20GB of bandwidth should be available from the day payment is made?

    Read the article

  • Track those visitors who come through a particular link

    - by busybee235
    I want to track visitors who come to my site through a particular link. For example, for those visitors coming from http://www.domain.com/abc123, I can get their pageviews, time on site, bounce rate, referrer, pages per visit, etc. After that I can store that info in my database on a daily basis. Can anyone suggest a service, API, or piece of software for this? I have used Google Analytics UTM tags, which work well for my requirement, but I don't know how many links I can track with them. I have around 80-100 links to track per day, and the number of links will be increasing. I couldn't find any documentation regarding a limit on the number of campaigns in GA. If there's no such limit, I can start this project. Thanks

    Read the article

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor and, subsequently, a world format. If I were to handle the game "world" being created just as a layered set of structures, in either top or side views, it would be considerably simple to do most things. But since this editor is meant for 3rd parties, I have no clue how big a world anyone will want to make, and I need to keep in mind that it will eventually become simply too much to check, handle, and compare things that are happening completely away from the player position. I know the solution for this is to subdivide my world into sub-regions and stream them on the fly, loading and unloading resources and other data. This way I know a virtually infinite game area is achievable. But while I know theoretically what to do, I have a few questions I hoped to get answered for some hints about the topic (see the sketch after this list for the grid idea):

    1. The logical way to handle the regions is some kind of grid. Would you pick evenly distributed blocks with equal sizes, or would you let the user subdivide areas by taste with irregularly sized rectangles?
    2. In the case of even grids, would you use some kind of block/chunk neighbouring system to check when the player crosses a boundary, or just put all the chunks in a simple array?
    3. A region being a different data structure than its owning "game world": when streaming a region, would you deliver its objects to the parent structures and track them for unloading later, or retain the objects inside each region for a more "hard-limit" approach?
    4. Introducing the subdivision approach to the project, and already having a multi-layered scene graph structure in place, how would I make it support the new concept? Would you have the parent node own the layers as children, and replicate in each layer node a node per region? Or the opposite: the parent node owns all the possible regions, and each region has multiple layers as children? Or would you just put the region logic outside the graph completely (compatible with the first suggestion in Q.3)?
    5. When I say virtually infinite worlds, I mean it under the constraints of variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it is sane to plan beyond that? I think it is OK to stick to this limit, since it will never be reached easily.
    6. As for when to stream a region, I'm implementing it as a collection of watcher cameras, which the streaming system works with to know what to load/unload. The problem is that I will need some kind of warps/teleports built into my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to pre-load any region that can be reached by a warp within a radius of the player?

    Sorry for the huge question; any answers are helpful!
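    On the even-grid option, a common minimal representation is a hash map keyed by integer chunk coordinates, so the world stays sparse and "virtually infinite" without a giant array. A rough C++ sketch (all names are illustrative, not from the editor described above):

        #include <cmath>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct Chunk {
            std::vector<std::vector<int>> layers;  // one object list per layer
        };

        class ChunkedWorld {
            float chunkSize_;
            std::unordered_map<int64_t, Chunk> loaded_;

            static int64_t key(int32_t cx, int32_t cy) {
                // pack both coordinates into one hashable 64-bit key
                return (static_cast<int64_t>(cx) << 32) | static_cast<uint32_t>(cy);
            }

        public:
            explicit ChunkedWorld(float chunkSize) : chunkSize_(chunkSize) {}

            Chunk& chunkAt(float x, float y) {
                // floor handles negative world coordinates correctly
                auto cx = static_cast<int32_t>(std::floor(x / chunkSize_));
                auto cy = static_cast<int32_t>(std::floor(y / chunkSize_));
                return loaded_[key(cx, cy)];  // creates ("streams in") on first touch
            }
        };

    The same lookup answers the teleport question: a warp destination is just another (cx, cy), so the streaming system can load the destination chunk and its neighbours on demand before completing the jump, rather than keeping every warp target resident.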

    Read the article

  • Is it possible to sync two computers without storing the files on a server?

    - by William
    I have a family member currently using Windows Live Mesh to sync a relatively large amount of files between computers. It is way over the Ubuntu One 5 GB limit and the Live Mesh 2 GB limit. However, Live Mesh gives him the option of syncing all the data he wants without storing it on Microsoft's servers. Does Ubuntu One have an equivalent option, performing just the sync computer-to-computer and not computer-to-server and server-to-computer? Do you have other recommendations? It does not necessarily have to be Ubuntu One, but I need it to be cross-platform, working across Windows and Ubuntu. We also have computers outside of our home network we need to sync to. This is one of the few things keeping him from switching to Ubuntu, and I'd be very grateful for any help.

    Read the article

  • Azure

    - by Grant Fritchey
    I've been tasked to learn SQL Azure, as well as test all the Red Gate products on it. My one BIG fear has been that I'll receive some mongo bill in the mail because I've exceeded the MSDN testing limit. I know people who have had that problem. I've been trying to keep an eye on my usage, but let's face it, it's not something I think about every day. But now I don't have to. Red Gate has been working with Azure since long before I showed up. They already released a little piece of software that I just found out about, called CloudTally. It gathers your usage and sends you a daily email so you can know if you're starting to approach that limit. Check it out, it's free.

    Read the article

  • What is required to create local business rich-snippets complete with sitelinks AND breadcrumbs?

    - by Felix
    I have a local business directory site. I would like to mark up my business listing 'profile' pages for display as enhanced listings/rich snippets, complete with business names, addresses, and phone numbers. I would also like to display sitelinks and path-based breadcrumbs to help users navigate the site's directory hierarchy (which is deep). Is there a limit to the number of breadcrumbs a site can leave? Is there a separate limit on the number of breadcrumbs which Google/Bing will display in the SERP? What kind of markup language(s) would be needed to best position my site to show sitelinks AND breadcrumbs? For example:

    Find a business > Browse by Location > State > City > Zip

    or

    Find a business > Choose Service > Browse by Location > State > City

    Thanks all!
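    For the breadcrumb part, search engines read structured breadcrumb markup from the page itself; one possible form is schema.org's BreadcrumbList expressed as JSON-LD (the URLs below are placeholders, and note that sitelinks cannot be requested via markup at all; the engines pick those algorithmically):

        <script type="application/ld+json">
        {
          "@context": "https://schema.org",
          "@type": "BreadcrumbList",
          "itemListElement": [
            { "@type": "ListItem", "position": 1, "name": "Find a business", "item": "https://example.com/" },
            { "@type": "ListItem", "position": 2, "name": "Browse by Location", "item": "https://example.com/locations" },
            { "@type": "ListItem", "position": 3, "name": "Springfield", "item": "https://example.com/locations/springfield" }
          ]
        }
        </script>

    The business name/address/phone part of the question would use a separate LocalBusiness block alongside this one.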

    Read the article
