Search Results

Search found 1228 results on 50 pages for 'comparing'.


  • NHibernate 2 Beginner's Guide Review

    - by Ricardo Peres
    OK, here's the review I promised a while ago. This is a beginner's introduction to NHibernate, so if you already have some experience with NHibernate, you will notice it lacks a lot of concepts and information. It starts with a good description of NHibernate and why we would use it. It goes on describing basic mapping scenarios having primary keys generated with the HiLo or Identity algorithms, without actually explaining why we would choose one over the other. As for mapping, the book talks about XML mappings and provides a simple example of Fluent NHibernate, comparing it to its XML counterpart. When it comes to relations, it covers one-to-many/many-to-one and many-to-many, not one-to-one relations, but only talks briefly about lazy loading, which is, IMO, an important concept. Only Bags are described, not any of the other collection types. The log4net configuration description gets its own chapter, which I find excessive. The chapter on configuration merely lists the most common properties for configuring NHibernate, both in XML and in code. Querying only talks about loading by ID (using Get, not Load) and using the Criteria API, on which a paging example is presented as well as some common filtering options (property equals/like/between; no examples of conjunction/disjunction, however). There's a chapter fully dedicated to ASP.NET, which explains how we can use NHibernate in web applications. It basically talks about ASP.NET concepts, though. Following it, another chapter explains how we can build our own ASP.NET providers (Membership, Role) using NHibernate. The available entity generators for NHibernate are listed and evaluated in a chapter of their own; the list is fine (CodeSmith, nhib-gen, AjGenesis, Visual NHibernate, MyGeneration, NGen, NHModeler, Microsoft T4 (?) and hbm2net), and examples are provided whenever possible. However, I have some problems with some of the evaluations: for example, Visual NHibernate scores 5 out of 5 on Visual Studio integration, which simply does not exist! I suspect the author means to say that it can be launched from inside Visual Studio, but then, what can't? Finally, there's a chapter I really don't understand. It seems like a bag where a lot of things are thrown in, like NHibernate Burrow (which actually isn't explained at all), Blog.Net components, CSS template conversion and web.config settings related to the maximum request length for file uploads, ending with XML configuration with the help of GhostDoc. Like I said, the book is only good for absolute beginners; it does a fair job of explaining the very basics, but lacks a lot of not-so-basic concepts. Among other things, it lacks: inheritance mapping strategies (table per class hierarchy, table per class, table per concrete class), Load versus Get usage, other useful ISession methods, the first level cache (Identity Map pattern), collection types other than Bag (Set, List, Map, IdBag, etc.), fetch options, user types, filters, named queries, LINQ examples, and HQL examples. And that's it! I hope you find this review useful. The link to the book site is https://www.packtpub.com/nhibernate-2-x-beginners-guide/book

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real-estate afforded to modern development environments (high-resolution monitors, dual-monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyper-linked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism? A mock-up to help clarify the question: When the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode". Potential advantages include: More source code and more documentation on the screen(s) at once Ability to edit documentation independently of source code (regardless of language?) Write documentation and source code in parallel without merge conflicts Real-time hyperlinked documentation with superior text formatting Quasi-real-time machine translation into different natural languages Every line of code can be clearly linked to a task, business requirement, etc. Documentation could automatically timestamp when each line of code was written (metrics) Dynamic inclusion of architecture diagrams, images to explain relations, etc. Single-source documentation (e.g., tag code snippets for user manual inclusion). Note: The documentation window can be collapsed Workflow for viewing or comparing source files would not be affected How the implementation happens is a detail; the documentation could be: kept at the end of the source file; split into two files by convention (filename.c, filename.c.doc); or fully database-driven By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia) and internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation) and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow the non-allocable sections to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64 bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64 bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems (32-bit code uses less memory and bus bandwidth, and on SPARC runs just as fast as 64-bit code otherwise). Complex, finely tuned code, where the original authors may no longer be available. Critical production code that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway. Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
    This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine-grained debugging of highly optimized code, which requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option:
        $ ld ... -z ancillary[=outfile] ...
    By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be specified by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object. .shstrtab The section name string table. .symtab The full non-dynamic symbol table. .symtab_shndx The symbol table extended index section associated with .symtab. .strtab The non-dynamic string table associated with .symtab. .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined. The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.
        $ cat hello.c
        #include <stdio.h>
        int main(int argc, char **argv)
        {
                (void) printf("hello, world\n");
                return (0);
        }
        $ cc -g -zancillary hello.c
        $ file a.out a.out.anc
        a.out:     ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
        a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
        $ ./a.out
        hello, world
    The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
        Index  Section Name     Type            Primary Flags     Ancillary Flags               Primary Size  Ancillary Size
        13     .text            PROGBITS        ALLOC EXECINSTR   ALLOC EXECINSTR SUNW_ABSENT   0x131         0
        20     .data            PROGBITS        WRITE ALLOC       WRITE ALLOC SUNW_ABSENT       0x4c          0
        21     .symtab          SYMTAB          0                 0                             0x450         0x450
        22     .strtab          STRTAB          STRINGS           STRINGS                       0x1ad         0x1ad
        24     .debug_info      PROGBITS        SUNW_ABSENT       0                             0             0x1a7
        28     .shstrtab        STRTAB          STRINGS           STRINGS                       0x118         0x118
        29     .SUNW_ancillary  SUNW_ancillary  0                 0                             0x30          0x30
    The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within it. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.
        $ elfdump -T SUNW_ancillary a.out a.out.anc
        a.out:
        Ancillary Section:  .SUNW_ancillary
            index  tag                value
              [0]  ANC_SUNW_CHECKSUM  0x8724
              [1]  ANC_SUNW_MEMBER    0x1        a.out
              [2]  ANC_SUNW_CHECKSUM  0x8724
              [3]  ANC_SUNW_MEMBER    0x1a3      a.out.anc
              [4]  ANC_SUNW_CHECKSUM  0xfbe2
              [5]  ANC_SUNW_NULL      0
        a.out.anc:
        Ancillary Section:  .SUNW_ancillary
            index  tag                value
              [0]  ANC_SUNW_CHECKSUM  0xfbe2
              [1]  ANC_SUNW_MEMBER    0x1        a.out
              [2]  ANC_SUNW_CHECKSUM  0x8724
              [3]  ANC_SUNW_MEMBER    0x1a3      a.out.anc
              [4]  ANC_SUNW_CHECKSUM  0xfbe2
              [5]  ANC_SUNW_NULL      0
    The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows. Debugger Access and Use of Ancillary Objects Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files.
    Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and it contains information that can be used to obtain a complete list of the files and to determine which of those files is the one currently being examined. Create a section header array in memory, using the section header array from the object being examined as an initial template. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set. The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program. Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects. ELF Implementation Details (From the Solaris Linker and Libraries Guide) To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and two new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual. Part IV ELF Application Binary Interface Chapter 13: Object File Format Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type. e_type Identifies the object file type, as listed in the following table.
        Name                 Value   Meaning
        ET_NONE              0       No file type
        ET_REL               1       Relocatable file
        ET_EXEC              2       Executable file
        ET_DYN               3       Shared object file
        ET_CORE              4       Core file
        ET_LOSUNW            0xfefe  Start operating system specific range
        ET_SUNW_ANCILLARY    0xfefe  Ancillary object file
        ET_HISUNW            0xfefd  End operating system specific range
        ET_LOPROC            0xff00  Start processor-specific range
        ET_HIPROC            0xffff  End processor-specific range
    Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section. ... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ... Table 13-5 ELF Section Types, sh_type
        Name                  Value
        . . .
        SHT_LOSUNW            0x6fffffee
        SHT_SUNW_ancillary    0x6fffffee
        . . .
    ... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ... Table 13-8 ELF Section Attribute Flags
        Name                  Value
        . . .
        SHF_MASKOS            0x0ff00000
        SHF_SUNW_NODISCARD    0x00100000
        SHF_SUNW_ABSENT       0x00200000
        SHF_SUNW_PRIMARY      0x00400000
        SHF_MASKPROC          0xf0000000
        . . .
    ... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files. SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ... Two members in the section header, sh_link, and sh_info, hold special information, depending on section type. Table 13-9 ELF sh_link and sh_info Interpretation
        sh_type               sh_link                                                    sh_info
        . . .
        SHT_SUNW_ANCILLARY    The section header index of the associated string table.   0
        . . .
    Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section. Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes. Table 13-10 ELF Special Sections
        Name               Type                  Attribute
        . . .
        .SUNW_ancillary    SHT_SUNW_ancillary    None
        . . .
    ... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ... Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF. In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
        typedef struct {
                Elf32_Word      a_tag;
                union {
                        Elf32_Word      a_val;
                        Elf32_Addr      a_ptr;
                } a_un;
        } Elf32_Ancillary;

        typedef struct {
                Elf64_Xword     a_tag;
                union {
                        Elf64_Xword     a_val;
                        Elf64_Addr      a_ptr;
                } a_un;
        } Elf64_Ancillary;
    For each object with this type, a_tag controls the interpretation of a_un. a_val These objects represent integer values with various interpretations. a_ptr These objects represent file offsets or addresses. The following ancillary tags exist. Table 13-NEW1 ELF Ancillary Array Tags
        Name                 Value   a_un
        ANC_SUNW_NULL        0       Ignored
        ANC_SUNW_CHECKSUM    1       a_val
        ANC_SUNW_MEMBER      2       a_ptr
    ANC_SUNW_NULL Marks the end of the ancillary section. ANC_SUNW_CHECKSUM Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member. ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name. An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:
        Tag                  Meaning
        ANC_SUNW_CHECKSUM    Checksum of this object
        ANC_SUNW_MEMBER      Name of object #1
        ANC_SUNW_CHECKSUM    Checksum for object #1
        . . .
        ANC_SUNW_MEMBER      Name of object N
        ANC_SUNW_CHECKSUM    Checksum for object N
        ANC_SUNW_NULL
    An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match. Related Other Work The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections, when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
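    To make the identification scheme described above concrete, here is a minimal sketch in C of how a debugger-like tool might walk a .SUNW_ancillary section that has already been read into memory and work out which member of the group it is looking at. The Elf32_Ancillary layout and the ANC_SUNW_* tag values are taken from the tables above (on Solaris they live in sys/elf.h); the type stand-ins, the demo string-table offsets, and the assumption that the caller has already loaded the section data and its associated string table are mine for illustration — this is not the actual dbx or elfdump code.

        #include <stdio.h>

        typedef unsigned int Elf32_Word;   /* stand-ins for the real types in sys/elf.h */
        typedef unsigned int Elf32_Addr;

        typedef struct {                   /* matches the Elf32_Ancillary layout above  */
            Elf32_Word a_tag;
            union {
                Elf32_Word a_val;          /* integer values, e.g. checksums            */
                Elf32_Addr a_ptr;          /* string table offsets, e.g. member names   */
            } a_un;
        } Elf32_Ancillary;

        #define ANC_SUNW_NULL     0        /* tag values from Table 13-NEW1 above       */
        #define ANC_SUNW_CHECKSUM 1
        #define ANC_SUNW_MEMBER   2

        /* anc:    the .SUNW_ancillary entries, terminated by ANC_SUNW_NULL
         * strtab: contents of the string table named by the section's sh_link */
        static void
        print_ancillary_group(const Elf32_Ancillary *anc, const char *strtab)
        {
            Elf32_Word self = 0;           /* checksum of the file being examined */
            const char *member = "?";      /* name from the most recent MEMBER    */

            /* The first element is always the checksum of this object. */
            if (anc->a_tag == ANC_SUNW_CHECKSUM) {
                self = anc->a_un.a_val;
                anc++;
            }

            /* MEMBER/CHECKSUM pairs follow, one per object in the group. */
            for (; anc->a_tag != ANC_SUNW_NULL; anc++) {
                if (anc->a_tag == ANC_SUNW_MEMBER)
                    member = strtab + anc->a_un.a_ptr;
                else if (anc->a_tag == ANC_SUNW_CHECKSUM)
                    printf("%-12s 0x%x%s\n", member, anc->a_un.a_val,
                        anc->a_un.a_val == self ? "  <- this file" : "");
            }
        }

        int main(void)
        {
            /* Hand-built data mirroring the a.out example above
             * (demo offsets, not the real string table layout). */
            static const char strtab[] = "\0a.out\0a.out.anc";
            static const Elf32_Ancillary anc[] = {
                { ANC_SUNW_CHECKSUM, { 0x8724 } },   /* checksum of the file being read */
                { ANC_SUNW_MEMBER,   { 1 } },        /* "a.out"                         */
                { ANC_SUNW_CHECKSUM, { 0x8724 } },
                { ANC_SUNW_MEMBER,   { 7 } },        /* "a.out.anc"                     */
                { ANC_SUNW_CHECKSUM, { 0xfbe2 } },
                { ANC_SUNW_NULL,     { 0 } },
            };

            print_ancillary_group(anc, strtab);
            return 0;
        }

    A real consumer would then open each named member, verify that its leading checksum matches the one recorded for it here, and merge the section data for every header not flagged SHF_SUNW_ABSENT, as outlined in the debugger steps above.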

    Read the article

  • UPK Customer Success Story: The City and County of San Francisco

    - by karen.rihs(at)oracle.com
    The value of UPK during an upgrade is a hot topic and was a primary focus during our latest customer roundtable featuring The City and County of San Francisco: Leveraging UPK to Accelerate Your PeopleSoft Upgrade. As the Change Management Analyst for their PeopleSoft 9.0 HCM project (Project eMerge), Jan Crosbie-Taylor provided a unique perspective on how they're utilizing UPK and UPK pre-built content early on to successfully manage change for thousands of city and county employees and retirees as they move to this new release. With the first phase of the project going live next September, it's important to the City and County of San Francisco to 1) ensure that the various constituents are brought along with the project team, and 2) focus on the end user aspects of the implementation, including training. Here are some highlights on how UPK and UPK pre-built content are helping them accomplish this: As a former documentation manager, Jan really appreciates the power of UPK as a single source content creation tool. It saves them time by streamlining the documentation creation process, enabling them to record content once, then repurpose it multiple times. With regard to change management, UPK has enabled them to educate the project team and gain critical buy-in and support by familiarizing users with the application early on through User Experience Workshops and by promoting UPK at meetings whenever possible. UPK has helped create awareness for the project, making the project real to users. They are taking advantage of UPK pre-built content to: Educate the project team and subject matter experts on how PeopleSoft 9.0 works as delivered Create a guide/storyboard for their own recording Save time/effort and create consistency by enhancing their recorded content with text and conceptual information from the pre-built content Create PeopleSoft Help for their development databases by publishing and integrating the UPK pre-built content into the application help menu Look ahead to the next release of PeopleTools, comparing the differences to help the team evaluate which version to use with their implementation When it comes time for training, they will be utilizing UPK in the classroom, eliminating the time and cost of maintaining training databases. Instructors will be able to carry all training content on a thumb drive, allowing them to easily provide consistent training at their many locations, regardless of the environment. Post go-live, they will deploy the same UPK content to provide just-in-time, in-application support for the entire system via the PeopleSoft Help menu and their PeopleSoft Enterprise Portal. Users will already be comfortable with UPK as a source of help, having been exposed to it during classroom training. They are also using UPK for a non-Oracle application called JobAps, an online job application solution used by many government organizations. Jan found UPK's object recognition to be excellent, yet it's been incredibly easy for her to change text or a field name if needed. Please take time to listen to this recording. The City and County of San Francisco's UPK story is very exciting, and Jan shared so many great examples of how they're taking advantage of UPK and UPK pre-built content early on in their project. We hope others will be able to incorporate these into their projects. Many thanks to Jan for taking the time to share her experiences and creative uses of UPK with us! - Karen Rihs, Oracle UPK Outbound Product Management

    Read the article

  • Silverlight Cream for January 03, 2011 -- #1021

    - by Dave Campbell
    In this all-Submittal Issue: Gill Cleeren(-2-), Brian Noyes, Brian Genisio, René Schulte, and Andy Schwam(-2-). Above the Fold: Silverlight: "The INavigationContentLoader interface in Silverlight 4" Gill Cleeren WP7: "Sending Windows Phone Screenshots in an Email" René Schulte WCF RIA Services: "WCF RIA Services Part 10 - Exposing Domain Services To Other Clients" Brian Noyes Shoutouts: Want to know what it takes to be an MVP? Check out René Schulte's recap of 2010: Goodbye 2010 - Hello 2011 ... awesome, René! Rui Marinho sent me this post... it's WPF, but wow... WPF and Kinect! Kinect & WPF From SilverlightCream.com: The INavigationContentLoader interface in Silverlight 4 Gill Cleeren has a couple posts up... this first is a break-out of the INavigationContentLoader... what all can be done with it, in addition to the flow of the page load process broken out. Working with the RaiseCanExecuteChanged in MVVM Light (Silverlight) Gill Cleeren's latest post is a discussion of the Silverlight ICommand interface and Laurent Bugnion's RaiseCanExecuteChanged in MVVM Light, with example code. WCF RIA Services Part 10 - Exposing Domain Services To Other Clients Brian Noyes has Part 10 in his WCF RIA Services Tutorial series up at SilverlightShow ... with info on, for example, exposing an OData, SOAP, or REST/JSON endpoint, or how to consume them. Cross-Training in Silverlight & Flex–MVVM vs Presentation Model Brian Genisio finished the year off with this post in his on-going Silverlight/Flex series comparing MVVM vs Presentation Model ... lots of good MVVM/ViewModel tips and code in this post. Sending Windows Phone Screenshots in an Email René Schulte is the perfect guy to be doing this... how about emailing a screenshot directly from inside an app, for instance Laurent's taking a screenshot from inside an app... too cool, Rene! Windows Phone 7 Application Development Tips Andy Schwam has a post up with tips he learned while creating his first WP7 app... lots of good tips, Gestures, Camera, ISO... check it out, could save you some time and tears :) WP7 Tip: Using the CameraCaptureTask for Windows Phone 7 Andy Schwam's most recent post is WP7 dev as well, and has a bunch of tips and code for using the camera, such as capturing an image, resizing, saving... good stuff. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Silverlight Cream for March 15, 2011 -- #1061

    - by Dave Campbell
    In this Issue: Peter Kuhn, Emil Stoychev, Viktor Larsson(-2-), Kevin Hoffman, Rudi Grobler, WindowsPhoneGeek, Jesse Liberty(-2-), and Martin Krüger. Above the Fold: Silverlight: "Image comparison using a GridSplitter" Martin Krüger WP7: "Using WP7 accent color effectively" Viktor Larsson XNA: "XNA for Silverlight developers: Part 7 - Collision detection" Peter Kuhn From SilverlightCream.com: XNA for Silverlight developers: Part 7 - Collision detection Peter Kuhn has part 7 of his XNA for Silverlight devs tutorial series up at SilverlightShow... discussing collision detection... something you need to get your head around if you're going to do a game. Interview with John Papa about the upcoming MIX11 event and the Open Source Fest Emil Stoychev of SilverlightShow reverses the roles with John Papa and interviews John on this MIX11 and Open Source Fest discussion they had at the MVP Summit. Debugging Videos or Camera in WP7 Viktor Larsson has a quick post up on the 3 ways of debugging a WP7 app and why and under what circumstances you should change debug method. Using WP7 accent color effectively Viktor Larsson's next post is about the 10 accent colors available on WP7 devices. He shows how to make best use of that capability in XAML and runtime code. WP7 for iPhone and Android Developers - Hardware and Device Services Kevin Hoffman's part 4 of a 12-part tutorial series at SilverlightShow on WP7 for iPhone/Android devs is up ... this one concentrates on Hardware and Device Services... Launchers/Choosers/Sensors. How to publish WP7 applications if you live in the Middle-east & Africa region Rudi Grobler has a short post up on a legit way to publish WP7 apps if you are in the MEA region. Creating WP7 Custom Theme – Sample Theme Implementation WindowsPhoneGeek has a new post up and he's starting a series of 3 articles on Creating WP7 Custom Themes... first up is this tutorial on basic theme implementation... and how to use it as well. From Android to Windows Phone For "Windows Phone from Scratch #43", Jesse Liberty begins a series on moving apps from Android to WP7, beginning with a tip calculator program. Yet Another Podcast #28–Jeremy Likness Jesse Liberty's next post is his "Yet Another Podcast #28" with Jeremy Likness this time around... the list of all things fun that Jeremy's involved in is getting long... should be a good podcast! Image comparison using a GridSplitter Martin Krüger posted a cool 'Clip Splitter' for comparing images, and what a great set of example images he's using... pretty darn cool lining them up with a grid-splitter. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • How to internally rewrite a page when requested from specific HTTP_HOST

    - by Andy
    Hi all, I have a Drupal site, site.com, and our client has a campaign that they're promoting for which they've bought a new domain name, campaign.com. I'd like it so that a request for campaign.com internally rewrites to a particular page of the Drupal site. Note Drupal uses an .htaccess file in the document root. The normal Drupal rewrite is # Rewrite URLs of the form 'x' to the form 'index.php?q=x'. RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !=/favicon.ico RewriteRule ^(.*)$ index.php?q=$1 [L,QSA] I added the following before the normal rewrite. # Custom URLS (eg. microsites) go here RewriteCond %{HTTP_HOST} =campaign.com RewriteCond %{REQUEST_URI} =/ RewriteRule ^ index.php?q=node/22 [L] Unfortunately it doesn't work, it just shows the homepage. Turning on the rewrite log I get this. 1. [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/ - 2. [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] applying pattern '^' to uri '' 3. [rid#2da8ea8/initial] (2) [perdir D:/wamp/www/] rewrite '' - 'index.php?q=node/22' 4. [rid#2da8ea8/initial] (3) split uri=index.php?q=node/22 - uri=index.php, args=q=node/22 5. [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] add per-dir prefix: index.php - D:/wamp/www/index.php 6. [rid#2da8ea8/initial] (2) [perdir D:/wamp/www/] strip document_root prefix: D:/wamp/www/index.php - /index.php 7. [rid#2da8ea8/initial] (1) [perdir D:/wamp/www/] internal redirect with /index.php [INTERNAL REDIRECT] 8. [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/index.php - index.php 9. [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] applying pattern '^' to uri 'index.php' 10.[rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/index.php - index.php 11.[rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] applying pattern '^(.*)$' to uri 'index.php' 12.[rid#2da7770/initial/redir#1] (1) [perdir D:/wamp/www/] pass through D:/wamp/www/index.php I'm not used to mod_rewrite, so I might be missing something, but comparing the logs from a call to http://site.com/node/3 and from http://campaign.com/ I can't see any meaningful difference. Specifically uri and args on line 4 seem correct, the internal redirect on line 7 seems right, and the pass through on line 12 seems right (because the file index.php exists). But for some reason it seems the query string's been discarded/ignored around the time of the internal redirect. I'm completely stumped. Also, if anyone could provide a reference on understanding the rewrite log, that might help. It'd be great if there's a way to track the query string through the internal redirect. FWIW I'm using WampServer 2.1 with Apache 2.2.17.

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris Stevens
    One of the best things about programming is the abundance of different languages. There are general purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself. Can be used interactively - there is some environment where programmers can enter commands one by one Requires no more than one file - neither project files nor makefiles are required for running in batch mode Can easily split code across multiple files - files can reference each other, or there is some support for modules Has good support for data structures - supports structures like arrays, lists, and especially classes Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.
        Feature          C#      Python  shell scripting
        ---------------  ------  ------  ---------------
        Interactive      poor    strong  strong
        One file         poor    strong  strong
        Multiple files   strong  strong  moderate
        Data structures  strong  strong  poor
        Features         strong  strong  strong
    Is there a term that captures this idea? If not, what term should I use? Here are some candidates. Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features Note: Some of these properties are more correctly described as belonging to a compiler or IDE than the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • Live Debugging

    - by Daniel Moth
    Based on my classification of diagnostics, you should know what live debugging is NOT about - at least according to me :-) and in this post I'll share how I think of live debugging. These are the (outer) steps to live debugging Get the debugger in the picture. Control program execution. Inspect state. Iterate between 2 and 3 as necessary. Stop debugging (and potentially start new iteration going back to step 1). Step 1 has two options: start with the debugger attached, or execute your binary separately and attach the debugger later. You might say there is a 3rd option, where the app notifies you that there is an issue, referred to as JIT debugging. However, that is just a variation of the attach because that is when you start the debugging session: when you attach. I'll be covering in future posts how this step works in Visual Studio. Step 2 is about pausing (or breaking) your app so that it makes no progress and remains "frozen". A sub-variation is to pause only parts of its execution, or in other words to freeze individual threads. I'll be covering in future posts the various ways you can perform this step in Visual Studio. Step 3, is about seeing what the state of your program is when you have paused it. Typically it involves comparing the state you are finding, with a mental picture of what you thought the state would be. Or simply checking invariants about the intended state of the app, with the actual state of the app. I'll be covering in future posts the various ways you can perform this step in Visual Studio. Step 4 is necessary if you need to inspect more state - rinse and repeat. Self-explanatory, and will be covered as part of steps 2 & 3. Step 5 is the most straightforward, with 3 options: Detach the debugger; terminate your binary though the normal way that it terminates (e.g. close the main window); and, terminate the debugging session through your debugger with a result that it terminates the execution of your program too. In a future post I'll cover the ways you can detach or terminate the debugger in Visual Studio. I found an old picture I used to use to map the steps above on Visual Studio 2010. It is basically the Debug menu with colored rectangles around each menu mapping the menu to one of the first 3 steps (step 5 was merged with step 1 for that slide). Here it is in case it helps: Stay tuned for more... Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Where and how to mention Stackoverflow participation in the résumé?

    - by Sandeepan Nath
    I think I have good enough reputation on SO now - here is my profile - http://stackoverflow.com/users/351903/sandeepan-nath. Well, this may not be that much as compared to so many other users out there but I am happy with mine. So, I was thinking of adding my profile link on my résumé. (Just the profile link and not that "I have this much reputation on SO"). Those who haven't seen, can see this question Would you put your stackoverflow profile link on your CV / Resume?. How would this look like? Forums/Blogs/Miscellaneous others No blogging as yet but active participant in Stackoverflow. My profile link - http://stackoverflow.com/users/351903/sandeepan-nath I think of putting this section after Project Details and Technical Expertise sections. Any tips/advice? Thanks Update MKO has made a very good point - "do you really want a potential employeer to be able to evaluate in detail everything you've ever written on SO". I thought of commenting but it would be too long - In my questions/answers I put a lot of statements like - "AFAIK ...", "following are my assumptions so far ...", "am I correct to conclude that... ?", "I doubt if it is possible to ..." etc. when I am not sure about something and I rarely involve in fights with other users. However I do argue on topics sometimes if I feel it is necessary and if I have a valid point. I do accept my mistakes and apologize for the same. As we all know nobody is perfect. I must have written many things which may be judged as wrong by a potential employer. But what if the same employer notices that I have improved in the quality of content by comparing old content with new one? Isn't that great? I also try to go back to older questions/answers and put corrective comments etc. when I feel I was wrong or if I can improve my post. Of course there are many employers who want you (potential employees) to be correct each and every time. They immediately remove you from consideration when you say a single incorrect thing. I have personally met such an interviewer few months back. He didn't even care to listen to any good thing I had done after he found a single wrong thing. Now the question is do you really care to work with such people? Or do you like those people who give value to the fact that you are striving to improve every day. I personally prefer the latter.

    Read the article

  • Can't install "cedar trail drm driver in DKMS format" on Ubuntu 12.04

    - by Mychal Phillip Segala Sajulga
    Ubuntu 12.04 32bit ... Toshiba NB520. (Side note: this computer is slow even with 2 GB of RAM, though still far better than my eMachine and Neo laptop.) I think the answer is the driver. From /var/log/jockey.log:
        2013-09-19 05:29:36,773 DEBUG: Comparing 3.8.0-29 with
        2013-09-19 05:32:45,094 DEBUG: updating <jockey.detection.LocalKernelModulesDriverDB instance at 0x8427a0c>
        2013-09-19 05:32:50,861 DEBUG: reading modalias file /lib/modules/3.8.0-29-generic/modules.alias
        2013-09-19 05:32:56,240 DEBUG: reading modalias file /usr/share/jockey/modaliases/b43
        2013-09-19 05:32:56,265 DEBUG: reading modalias file /usr/share/jockey/modaliases/disable-upstream-nvidia
        2013-09-19 05:32:56,474 DEBUG: loading custom handler /usr/share/jockey/handlers/dvb_usb_firmware.py
        2013-09-19 05:32:56,791 DEBUG: Instantiated Handler subclass __builtin__.DvbUsbFirmwareHandler from name DvbUsbFirmwareHandler
        2013-09-19 05:32:56,792 DEBUG: Firmware for DVB cards not available
        2013-09-19 05:32:56,793 DEBUG: loading custom handler /usr/share/jockey/handlers/cdv.py
        2013-09-19 05:32:56,927 WARNING: modinfo for module cedarview_gfx failed: ERROR: modinfo: could not find module cedarview_gfx
        2013-09-19 05:32:58,213 DEBUG: linux-lts-raring installed: True linux-lts-saucy installed: False linux minor version: 8 xserver ABI: 13 xserver-lts-quantal: False
        2013-09-19 05:32:58,214 DEBUG: Instantiated Handler subclass __builtin__.CdvDriver from name CdvDriver
        2013-09-19 05:32:58,214 DEBUG: cdv.available: falling back to default
        2013-09-19 05:32:58,685 DEBUG: XorgDriverHandler(cedarview_gfx, cedarview-graphics-drivers, None): Disabling as package video ABI(s) xorg-video-abi-11 not compatible with X.org video ABI xorg-video-abi-13
        2013-09-19 05:32:58,686 DEBUG: Intel Cedarview graphics driver not available
        2013-09-19 05:32:58,687 DEBUG: loading custom handler /usr/share/jockey/handlers/vmware-client.py
        2013-09-19 05:32:58,716 WARNING: modinfo for module vmxnet failed: ERROR: modinfo: could not find module vmxnet
        2013-09-19 05:32:58,717 DEBUG: Instantiated Handler subclass __builtin__.VmwareClientHandler from name VmwareClientHandler
        2013-09-19 05:32:58,758 DEBUG: VMWare Client Tools availability undetermined, adding to pool
        2013-09-19 05:32:58,758 DEBUG: loading custom handler /usr/share/jockey/handlers/nvidia.py
        2013-09-19 05:32:58,826 WARNING: modinfo for module nvidia_304 failed: ERROR: modinfo: could not find module nvidia_304
        2013-09-19 05:32:58,836 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver304 from name NvidiaDriver304
        2013-09-19 05:32:58,837 DEBUG: nvidia.available: falling back to default
        2013-09-19 05:33:11,682 DEBUG: NVIDIA accelerated graphics driver availability undetermined, adding to pool
        2013-09-19 05:33:11,688 WARNING: modinfo for module nvidia_304_updates failed: ERROR: modinfo: could not find module nvidia_304_updates
        2013-09-19 05:33:11,696 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver304Updates from name NvidiaDriver304Updates
        2013-09-19 05:33:11,696 DEBUG: nvidia.available: falling back to default
        2013-09-19 05:33:24,326 DEBUG: NVIDIA accelerated graphics driver (post-release updates) availability undetermined, adding to pool
        2013-09-19 05:33:24,332 WARNING: modinfo for module nvidia_current_updates failed: ERROR: modinfo: could not find module nvidia_current_updates
        2013-09-19 05:33:24,339 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriverCurrentUpdates from name NvidiaDriverCurrentUpdates
        2013-09-19 05:33:24,340 DEBUG: nvidia.available: falling back to default
        2013-09-19 05:33:24,381 DEBUG: NVIDIA accelerated graphics driver (post-release updates) not available
        2013-09-19 05:33:24,387 WARNING: modinfo for module nvidia_experimental_304 failed: ERROR: modinfo: could not find module nvidia_experimental_304
        2013-09-19 05:33:24,427 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriverExperimental304 from name NvidiaDriverExperimental304
        2013-09-19 05:33:24,427 DEBUG: nvidia.available: falling back to default
        2013-09-19 05:33:24,461 DEBUG: NVIDIA accelerated graphics driver (**experimental** beta) not available
        2013-09-19 05:33:24,467 WARNING: modinfo for module nvidia_current failed: ERROR: modinfo: could not find module nvidia_current

    Read the article

  • Where and how to mention Stackoverflow participation in the résumé?

    - by Sandeepan Nath
    I think I now have a good enough reputation on SO - here is my profile - http://stackoverflow.com/users/351903/sandeepan-nath. It may not be much compared to many other users out there, but I am happy with mine. So I was thinking of adding my profile link to my résumé (just the profile link, not "I have this much reputation on SO"). Those who haven't seen it can look at this question: Would you put your stackoverflow profile link on your CV / Resume?. How would this look?

    Forums/Blogs/Miscellaneous others
    No blogging as yet, but an active participant on Stack Overflow. My profile link - http://stackoverflow.com/users/351903/sandeepan-nath

    I am thinking of putting this section after the Project Details and Technical Expertise sections. Any tips/advice? Thanks.

    Update
    MKO has made a very good point - "do you really want a potential employer to be able to evaluate in detail everything you've ever written on SO". I thought of commenting, but it would be too long. In my questions/answers I include a lot of statements like "AFAIK ...", "following are my assumptions so far ...", "am I correct to conclude that ...?", "I doubt it is possible to ..." etc. when I am not sure about something, and I rarely get into fights with other users. However, I do argue about topics sometimes if I feel it is necessary and I have a valid point. I accept my mistakes and apologize for them. As we all know, nobody is perfect. I must have written many things that a potential employer might judge as wrong. But what if the same employer notices, by comparing old content with new, that the quality of my content has improved? Isn't that great? I also try to go back to older questions/answers and add corrective comments when I feel I was wrong or can improve my post. Of course, there are many employers who want you (potential employees) to be correct each and every time; they immediately remove you from consideration when you say a single incorrect thing. I personally met such an interviewer a few months back. He didn't even care to listen to any of the good things I had done after he found a single wrong thing. Now the question is: do you really care to work with such people? Or do you prefer people who value the fact that you are striving to improve every day? I personally prefer the latter.

    Read the article

  • Play or Lift: which one is more explicit?

    - by Andrea
    I am going to investigate web development with Scala, and the choice is between learning Lift or Play; I will probably not have enough time to try both, at least at first. Many comparisons between the two are available on the internet, but I would like to know how they compare with respect to being explicit and involving less magic. Let me explain what I mean by example. I have used, to various degrees, CakePHP, symfony2, Django and Grails. I feel a very clear distinction between Django and symfony2, which are very explicit about what you are doing, and Grails and CakePHP, which try their best to guess what you are trying to achieve and often feel "magical". Let me give some examples comparing Django and Grails.

    In Django, views are functions that take a request as input and return a response. You can explicitly instantiate an HttpResponse and populate its body with a string, or you can use shortcut functions to leverage the template system. In any case, the return value from your view always has the same type. In contrast, the render method in Grails is highly polymorphic: you can throw a context at it and it will try to render a template, found by convention, using that context. Or you can pass it a pair of a template path and a context, and that will work too. Or a string. Or XML. Grails tries hard to make sense of whatever you return from your controller.

    In the Django ORM, each model class has a static attribute representing the manager for that class. That manager exposes a fluent interface to build querysets. In Grails, you can get similar functionality by composing detached criteria. Still, the most common way to query objects seems to be the use of runtime-generated methods like FindUserByEmailNotNull or FindPostByDateGreaterThan.

    I will not go further, but my point is that in Django-like frameworks you have control over the whole flow of the request/response process, while in Grails-like ones I feel I only have to fill in the blanks and the framework will manage the rest of the flow for me. This is not to criticize Grails or CakePHP; which type you prefer is mainly a matter of preference. In fact, I happen to like some aspects of Grails, but I feel more comfortable with a framework that does less for me. Back to the point of the question: which one of Play and Lift is more explicit about what you do, and which one tries to simplify more of what you have to do with a layer of "magic"?
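    To make the "explicit" end of that spectrum concrete, here is a minimal sketch of a Django-style view (the Post model, app module and template name are assumed, hypothetical names); the point is simply that the view is an ordinary function from request to response, and the return type never changes:

        # Hypothetical app code for illustration; Post and post_detail.html are assumed names.
        from django.http import HttpResponse
        from django.shortcuts import render
        from myapp.models import Post

        def post_detail(request, post_id):
            # Explicit query through the model's manager; no finder method is generated at runtime.
            post = Post.objects.filter(id=post_id).first()
            if post is None:
                return HttpResponse("Not found", status=404)  # still an HttpResponse
            # The shortcut renders a template, but the return value is the same type as above.
            return render(request, "post_detail.html", {"post": post})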

    Read the article

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor and, subsequently, a world format. If I were to handle the game "world" being created just as a layered set of structures, in either top or side views, it would be fairly simple to do most things. But since this editor is meant for third parties, I have no clue how big a world someone will want to make, and I need to keep in mind that eventually it will simply become too much to handle, checking and comparing things that happen completely away from the player's position. I know the solution for this is to subdivide my world into sub-regions and stream them on the fly, loading and unloading resources and other data. This way I know a virtually infinite game area is achievable. But while I know theoretically what to do, I have a few questions I hoped to get answered for some hints on the topic:

    1. The logical way to handle the regions is some kind of grid. Would you pick evenly distributed blocks of equal size, or would you let the user subdivide areas by taste with irregularly sized rectangles?
    2. In the case of an even grid, would you use some kind of block/chunk neighbouring system to check when the player crosses the limit, or just put all the blocks in a simple array?
    3. A region being a different data structure than its owning "game world": when streaming a region, would you deliver the objects to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach?
    4. Introducing the subdivision approach to the project, and already having a multi-layered scene graph structure in place, how would I make it support the new concept? Would you have the parent node own the layers as children and replicate, in each layer node, a node per region? Or the opposite - the parent node owns all the possible regions, and each region has multiple layers as children? Or would you just put the region logic outside the graph completely (compatible with the first suggestion in Q.3)?
    5. When I say virtually infinite worlds, I mean it, of course, under the constraints of the variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it is sane to think beyond that? I think it is OK to stick to this limit, since it will never be reached easily.
    6. As for when to stream a region, I'm implementing it as a collection of watcher cameras which the streaming system consults to know what to load/unload. The problem here is that I will need some kind of warps/teleports built into my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to load into memory any region that can be reached by a warp within a radius of the player?

    Sorry for the huge question - any answers are helpful!
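    As one way to picture the evenly sized chunk grid and the watcher-driven streaming described above, here is a minimal sketch in Python; the chunk size and function names are assumed, and a teleport could be handled by adding a watcher at the destination before the player is actually moved:

        CHUNK_SIZE = 256.0  # world units per chunk; an assumed value

        def chunk_coords(x, y):
            """Map a world position to the integer key of the chunk that owns it."""
            return int(x // CHUNK_SIZE), int(y // CHUNK_SIZE)

        def wanted_chunks(watchers, radius=1):
            """All chunk keys within `radius` chunks of any watcher camera."""
            wanted = set()
            for (x, y) in watchers:
                cx, cy = chunk_coords(x, y)
                for dx in range(-radius, radius + 1):
                    for dy in range(-radius, radius + 1):
                        wanted.add((cx + dx, cy + dy))
            return wanted

        def stream(loaded, watchers):
            """Given the set of currently loaded chunk keys, return (to_load, to_unload)."""
            wanted = wanted_chunks(watchers)
            return wanted - loaded, loaded - wanted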

    Read the article

  • How to show pending messages using WLST?

    - by lmestre
    Here are the steps:
    1. . ./setDomainEnv.sh
    2. java weblogic.WLST
    3. connect('weblogic','welcome1','t3://localhost:7001')
    4. domainRuntime()
    5. cd('ServerRuntimes/MS1/JMSRuntime/MS1.jms/JMSServers/JMSServer1/Destinations/JMSModule1!Queue1')
    6. cursor1=cmo.getMessages('true',9999999,10)   ** String(selector), Integer(timeout), Integer(state)
    7. msgs = cmo.getNext(cursor1, 10)   ** This step gets 10 messages; you can call cmo.getNext(cursor1, 10) again to get the next 10 messages
    8. print(msgs)

    My assumption is that you had created:
    a. A Managed Server MS1.
    b. A JMS Server JMSServer1.
    c. A module called JMSModule1.
    d. Inside JMSModule1, a Queue called Queue1.

    If you read my previous post, How to get Messages Pending Count from a Queue using WLST? (https://blogs.oracle.com/LuzMestre/entry/how_to_get_messages_pending), you can see that the two are very similar. Sometimes it is difficult to find a WLST script sample, but you can use the ls() function to learn about functionality you don't have sample code for. Until step 5, nothing is new compared to my previous post:

    5. cd('ServerRuntimes/MS1/JMSRuntime/MS1.jms/JMSServers/JMSServer1/Destinations/JMSModule1!Queue1')
    6. ls()

    You will see MessagesPendingCount, getMessages and a lot of other functionality available on this Queue. For example, you can see:

    -r-x   getMessages   String : String(selector),Integer(timeout),Integer(state)

    Here you can check the complete MBean Reference: http://docs.oracle.com/cd/E23943_01/apirefs.1111/e13951/core/index.html - see JMSDestinationRuntimeMBean. Enjoy!
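    For convenience, the interactive steps above can be collected into one WLST script (WLST scripts are Jython, so the syntax below is plain Python) and run with java weblogic.WLST after sourcing setDomainEnv.sh. The server, module, queue and credential names are just the examples from the post, and the loop around getNext() is an assumption about how you might page through more than 10 messages:

        connect('weblogic', 'welcome1', 't3://localhost:7001')
        domainRuntime()
        cd('ServerRuntimes/MS1/JMSRuntime/MS1.jms/JMSServers/JMSServer1/Destinations/JMSModule1!Queue1')

        cursor1 = cmo.getMessages('true', 9999999, 10)  # String(selector), Integer(timeout), Integer(state)
        msgs = cmo.getNext(cursor1, 10)                 # first batch of up to 10 messages
        while msgs:                                     # keep paging until getNext() returns nothing
            print(msgs)
            msgs = cmo.getNext(cursor1, 10)

        disconnect()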

    Read the article

  • Use Expressions with LINQ to Entities

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Recently I've been putting together a generic approach for paging the response from a WCF service. Paging changes the service signature, so it's not as simple as adding a behavior to an existing service in config, but the complexity of the paging is isolated in a generic base class. We're using the Entity Framework talking to SQL Server, so when we ask for a page using LINQ's .Take() method we get a nice efficient SQL query for just the rows we want, with minimal impact on SQL Server and network traffic. We use the maximum ID of the record returned as a high-water mark (rather than using .Skip() to go to the next record), so the approach caters for records being deleted between page requests. In the paged response we include a HasMorePages indicator, computed by comparing the max ID in the page of results to the max ID for the whole resultset - if the latter is bigger, then there are more pages.

    In some quick performance testing, the paged version of the service performed much more slowly than the unpaged version, which was unexpected. We narrowed it down to the code which gets the max ID for the full resultset - instead of building an efficient MAX() SQL query, EF was returning the whole resultset and then computing the max ID in the service layer. It's easy to reproduce - take this AdventureWorks query:

        var context = new AdventureWorksEntities();
        var query = from od in context.SalesOrderDetail
                    where od.ModifiedDate >= modified
                       && od.SalesOrderDetailID.CompareTo(id) > 0
                    orderby od.SalesOrderDetailID
                    select od;

    We can find the maximum SalesOrderDetailID like this:

        var maxIdEfficiently = query.Max(od => od.SalesOrderDetailID);

    which produces our efficient MAX() SQL query. If we're doing this generically and we already have the ID function in a Func:

        Func<SalesOrderDetail, int> idFunc = od => od.SalesOrderDetailID;
        var maxIdInefficiently = query.Max(idFunc);

    This fetches all the results from the query and then runs the Max() function in code. If you look at the difference in Reflector, the first call passes an Expression to Max(), while the second call passes a Func. So it's an easy fix - wrap the Func in an Expression:

        Expression<Func<SalesOrderDetail, int>> idExpression = od => od.SalesOrderDetailID;
        var maxIdEfficientlyAgain = query.Max(idExpression);

    - and we're back to running an efficient MAX() statement. Evidently the EF provider can dissect an Expression and build its equivalent in SQL, but it can't do that with Funcs.

    Read the article

  • Function like C# properties?

    - by alan2here
    I was directed here from SO as a better Stack Exchange site for this question. I've been thinking about the neatness and expressiveness of C# properties compared to functions, although they currently only work where no parameters are used, and wondered: is it possible, and if not why not, to have a standalone, function-like C# property? For example:

        public class test
        {
            private byte n = 4;

            public test()
            {
                func = 2;
                byte n2 = func;
                func;
            }

            private byte func
            {
                get { return n; }
                set { n = value; }
                func { n++; }
            }
        }

    Edit: sorry for the vagueness the first time round; I'm going to add some info and motivation. The 'n++' here is just a simple example, a placeholder; it's not intended to be representative of the actual code that would be used. I'm also looking at this from the point of view of the property construct as such, not in the context of using it for 'get_xyz' and 'set_xyz' member functions, which is certainly useful, but of comparing it more abstractly to functions and other programmatic elements.

    A 'get' property can be used instead of a function that takes no parameters, and syntactically the difference is perhaps only aesthetic, but as I see it noticeably nicer. However, properties also add the potential for an extra layer of polymorphism, one that relates to the context in which they are used - setting ('func = 4;'), getting ('int n = func;') or function-like invocation ('func;') - as well as the more common parameter-based polymorphism. This could potentially express a lot of contextual information regarding how others would use your functions. As definitions and many uses would remain the same, it shouldn't break existing code.

        private byte func
        {
            get { }
            get bool { }
            set { }
            func { }
            func(bool) { }
            func(byte, myType) { }
            // etc...
        }

    So a read-only function would look like this:

        private byte func
        {
            get { }
        }

    A normal function like this:

        private void func
        {
            func { }
        }

    A function with parameter polymorphism like this:

        private byte func
        {
            func(bool) { }
            func(byte, myType) { }
        }

    And a function that could return a value, or just compute, depending on the context in which it is used, and that also has more conventional parameter polymorphism, like so:

        private byte func
        {
            get { }
            func(bool) { }
            func(byte, myType) { }
        }

    Read the article

  • Oracle Virtualization Friday Spotlight - November 8, 2013

    - by Monica Kumar
    Hands-on Private Cloud Simulator In One Hour. Submitted by: Doan Nguyen, Senior Principal Product Marketing Director.

    My aeronautics instructor used to say, "you can’t appreciate flying until you take flight." To clarify, this is not about gearing up in a flying squirrel suit and hopping off a cliff (a topic for another blog!) but rather about flying an airplane. The idea is to get hands-on with the controls in the cockpit and experience flight before you actually fly a real plane. After the initial 40 hours of flight time, the concept sank in and it really made sense. This concept is what inspired our technical experts to put together the hands-on lab for a private cloud deployment and management self-service model. Yes, we are comparing the lab to a flight simulator! Let’s look at the parallels:

    - To get trained to fly, starting in the simulator gets you off the ground quicker; there is no need to have a real plane to begin with. In a hands-on lab, there is no need for a real server with networking and real storage installed - all you need is your laptop.
    - The simulator is pre-configured, pre-flight check done. Similarly, in the hands-on lab, Oracle VM and Oracle Enterprise Manager are pre-configured and assembled using Oracle VM VirtualBox as the container; software installation is not needed.
    - After time spent training at the controls, you can really appreciate the practical experience of flying. Along the same lines, the hands-on lab is a guided learning path, without the encumbrances of hardware and software installation, so you can focus on learning about cloud deployment and management.

    However, unlike the simulator training, your time investment in the lab is only about an hour, not 40 hours! This hands-on lab takes you through private cloud deployment and management using Oracle VM and Oracle Enterprise Manager Cloud Control 12c in an Infrastructure as a Service (IaaS) model. You will first configure the IaaS cloud as the cloud administrator and then deploy guest virtual machines (VMs) as a self-service user. Then you are ready to take flight into the cloud! Why not step into the cockpit now?

    Read the article

  • Pixels - A cry for some insight

    - by CarrotFile
    I'm pretty new to web development and I'd love some clarification. Although I've read more than one book on the topic, I cannot seem to wrap my head around the pixel concept. I run into problems with this when trying to use CSS pixel units for designs that fit different screen sizes.

    To my understanding, a pixel is the most basic unit used by a monitor to compose an image on the screen. So if my resolution is 800 by 600, everything on my screen is rendered using those 800*600 basic building blocks. If I were to increase my screen resolution, three things would occur:
    A. The basic image building block (the pixel) would shrink in size
    B. The pixels would move closer together
    C. Well, more pixels would now be available
    All these combined lead to a sharper (depending on the viewing distance) and more detailed image. So far so good.

    Here is where I start getting lost: to my knowledge, a pixel is not a physical, real object; monitors are not embedded with a fixed few thousand pixels. I am drawn to this conclusion because anyone can change their screen's resolution, making a pixel on the screen bigger or smaller and adding to or subtracting from the total number of pixels on screen. Adding to that, I have heard that different monitors have different pixel densities - for example, Apple's Retina displays.

    Taking all of the above as my knowledge base, these are my questions:
    1. If a pixel has no constant real-world size, why does comparing different pixel densities matter? Each screen company could define its own pixel concept and declare the higher density.
    2. What does a bigger pixel density mean? Say we take two screens with the same physical dimensions but different pixel densities: am I to assert that the main difference is that the higher-density screen can display a higher maximum resolution? Or am I to assert that, given the same resolution on both monitors, the higher-density one would display a sharper, smaller image?
    3. If a pixel is not a fixed size within one monitor, is it a fixed size between the same resolution on two different monitors? For example, would two different monitors, set to the same resolution, be composed of pixels of the same size and quantity?

    I'd love some help (:
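    One way to make the density comparison concrete is the pixels-per-inch (PPI) calculation in the small Python sketch below; the panel sizes are assumed examples, and the point is that density relates a panel's fixed native resolution to its fixed physical size:

        import math

        def ppi(width_px, height_px, diagonal_inches):
            """Pixels per inch: diagonal resolution divided by the physical diagonal."""
            return math.hypot(width_px, height_px) / diagonal_inches

        print(round(ppi(1920, 1080, 21.5)))   # ~102 PPI on a typical 21.5" desktop monitor
        print(round(ppi(2560, 1600, 13.3)))   # ~227 PPI on a 13.3" Retina-class laptop panel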

    Read the article

  • Why can't the IT industry deliver large, faultless projects quickly as in other industries?

    - by MainMa
    After watching National Geographic's MegaStructures series, I was surprised how fast large projects are completed. Once the preliminary work (design, specifications, etc.) is done on paper, the realization of huge projects takes just a few years, or sometimes a few months. For example, the Airbus A380 was "formally launched on Dec. 19, 2000", and by early March 2005 the aircraft was already being tested. The same goes for huge oil tankers, skyscrapers, etc.

    Comparing this to the delays in the software industry, I can't help wondering why most IT projects are so slow - or, more precisely, why they cannot be as fast and faultless at the same scale, given enough people. Projects such as the Airbus A380 present both:

    Major unforeseen risks: while this is not the first aircraft built, it still pushes the limits of the technology, and things which worked well for smaller airliners may not work for the larger one due to physical constraints; in the same way, new technologies are used which had not been used before, because, for example, they were not available in 1969 when the Boeing 747 was built.

    Risks related to human resources and management in general: people quitting in the middle of the project, inability to reach a person because she's on vacation, ordinary human errors, etc.

    With those risks, people still complete projects like those large airliners in a very short period of time, and despite the delivery delays, those projects are still hugely successful and of high quality.

    When it comes to software development, the projects are hardly as large and complicated as an airliner (both technically and in terms of management), and have slightly fewer unforeseen risks from the real world. Still, most IT projects are slow and late, and adding more developers to the project is not a solution (going from a team of ten developers to two thousand will sometimes deliver the project faster, sometimes not, and sometimes will only harm the project and increase the risk of not finishing it at all). Those which are delivered often contain a lot of bugs, requiring consecutive service packs and regular updates (imagine "installing updates" on every Airbus A380 twice per week to patch bugs in the original product and prevent the aircraft from crashing).

    How can such differences be explained? Is it due exclusively to the fact that the software development industry is too young to be able to manage thousands of people on a single project in order to deliver large-scale, nearly faultless products very fast?

    Read the article

  • If most of team can't follow the architecture, what do you do?

    - by Chris
    Hi all, I'm working on a greenfield project with two other developers. We're all contractors; myself and one other just started working on the project, while the original developer has been doing most of the basic framework coding. In the past month, my fellow programmer and I have been frustrated by the design decisions of our co-worker. Here's a little background information:

    The application at face value appears to be your standard n-layered web application using C# on the 3.5 framework. We have a data layer, a business layer and a web interface. But as we got deeper into the project, we found some very interesting things that have caused us some trouble. There is a custom data-access sqlHelper-type base which only accepts dictionary key/value entries and returns only data tables. There are no entity objects, but there are some massive objects which do everything and are then tossed into session for persistence. The general idea is that the pages (.aspx) don't do anything, while the controls (.ascx) do everything. The general flow is that a client clicks on a button, which goes to a user control base, which passes a process request to the 'BLL' class, which goes to the page processor, which then goes to a getControlProcessor, which at last actually processes the request. The request itself is made up of a dictionary passing a string-valued method name, a stored procedure name, a control name and possibly a value. All switching of the processing is done by comparing the string values of the control names and method names. Pages are linked together via a common header control that uses a combination of javascript and tables to create a hyperlink effect. And as I found out yesterday, a simple hyperlink from one page to another does not work, because quite a bit of information needs to be in session to determine which control to display on a page.

    My fellow programmer and I both believe this is a strange and uncommon approach to web application development. Both of us have been in this business for over five years, and neither of us has seen this approach. My question is this: how should we approach our co-worker to voice our concerns, and what should we do if he does not want to accept the criticism? We do not want to insult the work that has been done, but we feel that continuing this way will create a nightmare for development. Thanks for your comments.

    Read the article

  • Efficient way to find unique elements in a vector compared against multiple vectors

    - by SyncMaster
    I am trying to find the number of unique elements in a vector compared against multiple vectors, using C++. Suppose I have:

        v1: 5, 8, 13, 16, 20
        v2: 2, 4, 6, 8
        v3: 20
        v4: 1, 2, 3, 4, 5, 6, 7
        v5: 1, 3, 5, 7, 11, 13, 15

    The number of unique elements in v1 is 1 (i.e. the number 16). I tried two approaches:

    1. Added vectors v2, v3, v4 and v5 into a vector of vectors. For each element in v1, checked whether the element is present in any of the other vectors.
    2. Combined all the vectors v2, v3, v4 and v5 using merge sort into a single vector and compared it against v1 to find the unique elements.

    Note: sample_vector = v1 and all_vectors_merged contains v2, v3, v4, v5.

        // Method 1
        unsigned int compute_unique_elements_1(vector<unsigned int> sample_vector,
                                               vector<vector<unsigned int> > all_vectors_merged)
        {
            unsigned int duplicate = 0;
            for (unsigned int i = 0; i < sample_vector.size(); i++) {
                for (unsigned int j = 0; j < all_vectors_merged.size(); j++) {
                    if (std::find(all_vectors_merged.at(j).begin(), all_vectors_merged.at(j).end(),
                                  sample_vector.at(i)) != all_vectors_merged.at(j).end()) {
                        duplicate++;
                    }
                }
            }
            return sample_vector.size() - duplicate;
        }

        // Method 2
        unsigned int compute_unique_elements_2(vector<unsigned int> sample_vector,
                                               vector<unsigned int> all_vectors_merged)
        {
            unsigned int unique = 0;
            unsigned int i = 0, j = 0;
            while (i < sample_vector.size() && j < all_vectors_merged.size()) {
                if (sample_vector.at(i) > all_vectors_merged.at(j)) {
                    j++;
                } else if (sample_vector.at(i) < all_vectors_merged.at(j)) {
                    i++;
                    unique++;
                } else {
                    i++;
                    j++;
                }
            }
            if (i < sample_vector.size()) {
                unique += sample_vector.size() - i;
            }
            return unique;
        }

    Of these two techniques, I see that Method 2 gives faster results.

    1. Method 1: Is there a more efficient way to find the elements than running std::find on all the vectors for all the elements in v1?
    2. Method 2: There is extra overhead in comparing vectors v2, v3, v4, v5 and sorting them. How can I do this in a better way?
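    One direction for question 1 is to collect every element of v2..v5 into a hash set once, then test the elements of v1 against it, avoiding both the repeated std::find calls and the sort. The sketch below illustrates the idea in Python using the example vectors from the question (in C++ the same approach would map onto std::unordered_set, a hedged suggestion rather than the questioner's code):

        v1 = [5, 8, 13, 16, 20]
        others = [[2, 4, 6, 8], [20], [1, 2, 3, 4, 5, 6, 7], [1, 3, 5, 7, 11, 13, 15]]

        seen = set()
        for v in others:
            seen.update(v)                             # one pass over every other vector

        unique = sum(1 for x in v1 if x not in seen)   # average O(1) lookup per element
        print(unique)                                  # prints 1 (only 16 is unique to v1)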

    Read the article

  • How to find virtualization performance bottlenecks?

    - by Martin
    We have recently started moving our C++ build server(s) from real machines into VMs (MS Hyper-V). We have some performance issues that I currently have no idea how to address. We have:

    Test-Box - a piece of desktop workstation hardware my co-worker used to set up the VM before we moved it to the actual server hardware
    Srv-Box - the server hardware
    Test-Box-Real - Windows running directly on the Test-Box HW
    Test-Box-VM - Windows in a Hyper-V VM on the Test-Box HW
    Srv-Box-Real - Server 2008 R2 running on the Srv-Box HW
    Srv-Box-VM - Windows running in a Hyper-V VM on the Srv-Box HW, i.e. on Srv-Box-Real

    Now, the problem is that we compared build times between Test-Box-Real and Test-Box-VM and they were basically equal (within about 2%). Then we moved the VM to the Srv-Box machine, and there we see a significant performance degradation between Srv-Box-Real and Srv-Box-VM. That is, where we saw no difference on the test HW, we now see a major difference in performance on the actual server HW (builds are about 50% slower inside the VM). I should add that both the Test-Box and the Srv-Box are running only this one single VM and doing nothing else. I should also note that the "real" OS is Win2008R2 (64-bit) and the VM-hosted OS is Win2003R2 (32-bit).

    Hardware specs:

    Srv-Box: Intel Xeon E5640 @ 2.67 GHz (this means 8 cores with hyperthreading on the real system and "only" 4 cores on the VM, since Hyper-V doesn't allow for hyperthreading, but the number of cores doesn't seem to explain the problem here), 16 GB RAM (we have 4 GB assigned to the VM), virtual DELL RAID 1 (2x 450 GB HUS156045VLS600 Hitachi 15k SAS drives).

    Test-Box: Intel Xeon E31245 @ 3.3 GHz, 16 GB RAM, WD VelociRaptor 600 GB 10k RPM SATA.

    Note again that I'm only concerned with the difference between Srv-Box-Real and Srv-Box-VM (high) vs. the difference between Test-Box-Real and Test-Box-VM (low). Why would one machine have parity when comparing VM vs. real performance, while the other (server-grade HW, no less) has a large disparity? (Both being Xeon CPUs ...)

    Read the article

  • How to make 7zip faster

    - by user34463
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient at compression. I did a few tests on different file types and sizes, comparing 7-Zip and WinRAR at their default settings for both normal compression and best compression, and in a lot of cases WinRAR was 50% faster, and in some it was actually 100% faster. But I do like FOSS more. So here are my questions:

    1. Is there a way to speed 7-Zip up? I'd like it to at least be on par with RAR's speed.
    2. Is there a way to make recovery segments in 7-Zip like you can in RAR? I didn't see any, but I guess it could be a command-line thing.
    3. I tested WinRAR and 7-Zip using the latest stable version of each (4.something for 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about being faster at a comparable setting in WinRAR, not just dropping to the bare minimum compression.

    If it matters, I use a quad-core Intel i7 720 (1.6 GHz, 2.8 GHz turbo) with 4 GB of DDR3 RAM, and the 64-bit version of 7-Zip.

    Read the article

< Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >