Search Results

Search found 7209 results on 289 pages for 'names'.


  • What Every Developer Should Know About MSI Components

    - by Alois Kraus
    Hopefully nothing. But if you have to do more than simple XCopy deployment and need to support updates, upgrades and perhaps side-by-side scenarios, there is no way around MSI. You can create MSI files with a Visual Studio Setup project, which is severely limited, or you can use the Windows Installer Toolset. I cannot talk about WIX with my German colleagues because WIX has a very special meaning in German, so it is funny to always use the long name when I talk about deployment possibilities. Alternatively you can buy commercial tools which help you author MSI files, but I am not sure how good they are. Given enough pain with existing solutions you can also learn the MSI APIs and create your own packaging solution. If I were you I would use either a commercial visual tool for easy deployments or the free Windows Installer Toolset. Once you know the WiX schema you can create well-formed WiX XML files easily with any editor, and then “compile” your MSI package from the .wxs files. Recently I had the “pleasure” to get my hands dirty with C++ (again) and the MSI technology. Installation is a complex topic, but after several months of digging into arcane MSI issues I can safely say that there should be an easier way to install and update files than the one we have today. I am not alone with this statement; as John Robbins (creator of the cool tool Paraffin) states: “.. It's a brittle and scary API in Windows …”. To help other people struggling with installation issues, I present the advice I (and others) found useful, and what will happen if you ignore it.

    What is an MSI file? An MSI file is basically a database with tables which reference each other to control how your un/installation should work. The basic idea is that you declare via these tables what you want to install, and MSI controls how to get your stuff onto or off your machine. Your “stuff” usually consists of files, registry keys, shortcuts and environment variables. Therefore the most important tables are the File, Registry, Environment and Shortcut tables, which define what will be un/installed. The key to mastering MSI is that every resource (file, registry key, …) is associated with an MSI component. The actual payload consists of compressed files in the CAB format, which can either be embedded into the MSI file or reside beside the MSI file or in a subdirectory below it. To examine MSI files you need Orca, a free MSI editor provided by MS. There is also another free editor called Super Orca which supports diffs between MSIs and does not lock the MSI files. But since Orca comes with a shell extension I tend to use only Orca, because it is so easy to right-click an MSI file and open it with this tool.

    How Do I Install It? Double-click it. This works for fresh installations as well as major upgrades. Updates need to be installed via the command line: msiexec /i <msi> REINSTALL=ALL REINSTALLMODE=vomus. This tells the installer to reinstall all already installed features (new features will NOT be installed). The reinstallmode letters force an overwrite of the old cached package in the %WINDIR%\Installer folder. All files, shortcuts and registry keys are redeployed if they are missing or need to be replaced with a newer version. When things have gone really wrong and you want to overwrite everything unconditionally, use REINSTALLMODE=vamus.

    How To Enable MSI Logs? You can download an MSI from Microsoft which installs some registry keys to enable full MSI logging.
    The log files can be found in your %TEMP% folder and are called MSIxxxx.log. Alternatively you can add the option /l*vx <LogFileName> to your msiexec command line. Personally I find it rather strange that * does not mean full logging. To really get all logs I need to add v and x, which is documented in the msiexec help, but I still find this behavior unintuitive.

    What are MSI components? The whole MSI logic is bound to the concept of MSI components. Nearly every MSI table has a Component column which binds an installable resource to a component. Below are the screenshots of the FeatureComponents and Component tables of an example MSI. The Feature table basically defines the feature hierarchy. To find out what belongs to a feature you need to look at the FeatureComponents table, which lists for each feature the components that will be installed when that feature is installed. The MSI components are defined in the Component table. Its first column is the component name and its second column is the component id, which is a GUID. All resources you want to install belong to an MSI component. Therefore nearly all MSI tables have a Component_ column which contains the component name. If you look e.g. at the File table you see that every file belongs to a component, which is true for all other tables which install resources. The Component table is the glue between all other tables which contain the resources you want to install. So far so easy.

    Why is MSI then so complex? Most MSI problems arise from the fact that you violated an MSI component rule in one way or another. When you install a feature, the reference count for all components belonging to this feature increases by one. If your component is installed by more than one feature it will get a higher refcount. When you uninstall a feature, its refcount drops by one. Interesting things happen when the component reference count reaches zero: then all associated resources will be deleted. That looks like a reasonable thing, and it is. What makes it complex are the strange component rules you have to follow. Below are some important component rules from the Tao of the Windows Installer …

    Rule 16: Follow Component Rules. Components are a very important part of the Installer technology. They are the means whereby the Installer manages the resources that make up your application. The SDK provides the following guidelines for creating components in your package: Never create two components that install a resource under the same name and target location. If a resource must be duplicated in multiple components, change its name or target location in each component. This rule should be applied across applications, products, product versions, and companies. Two components must not have the same key path file. This is a consequence of the previous rule. The key path value points to a particular file or folder belonging to the component that the installer uses to detect the component. If two components had the same key path file, the installer would be unable to distinguish which component is installed. Two components however may share a key path folder. Do not create a version of a component that is incompatible with all previous versions of the component. This rule should be applied across applications, products, product versions, and companies. Do not create components containing resources that will need to be installed into more than one directory on the user’s system.
    The installer installs all of the resources in a component into the same directory. It is not possible to install some resources into subdirectories. Do not include more than one COM server per component. If a component contains a COM server, this must be the key path for the component. Do not specify more than one file per component as a target for the Start menu or a Desktop shortcut. … And these rules do not even talk about component ids, update packages and upgrades, which you need to understand as well.

    Let's suppose you install two MSIs (MSI1 and MSI2) which have the same ComponentId but different component names. Both install the same file. What will happen when you uninstall MSI2? Hm, the file should stay there. But the component names are different. Yes and yes. But MSI does not use the component name as the key for the refcount. Instead the ComponentId column of the Component table, which contains a GUID, is used as the identifier under which the refcount is stored. The components Comp1 and Comp2 are identical from the MSI perspective. After the installation of both MSIs the component with the Id {100000….} has a refcount of two. After uninstallation of one MSI there is still a refcount of one, which drops to zero just as expected when we uninstall the last MSI. Then the file which was the same for both MSIs is deleted. You should remember that MSI keeps a refcount across MSIs for components with the same component id. MSI manages components, not the resources you installed. The resources associated with a component are deleted if, and only if, the refcount of the component reaches zero.

    The dependencies between features, components and resources can be described as relations, where m and k are numbers >= 1 and n can be 0. Inside an MSI the following relations are valid:

    Feature    1 –> n Components
    Component  1 –> m Features
    Component  1 –> k Resources

    These relations express that one feature can install several components and that features can share components between them. Every (meaningful) component will install at least one resource, which means that its name (primary key, to stay in database speak) occurs as a value in the Component column of some other table which installs a resource. Let's make it clear with an example. We want to install, with the feature MainFeature, some files, a registry key and a shortcut. We can then create components Comp1..3 which are referenced by the resources defined in the corresponding tables.

    Feature     | Component | Registry     | File      | Shortcuts
    MainFeature | Comp1     | RegistryKey1 |           |
    MainFeature | Comp2     |              | File.txt  |
    MainFeature | Comp3     |              | File2.txt | Shortcut to File2.txt

    It is illegal for the same resource to be part of more than one component, since this would break the refcount mechanism. Let's illustrate this:

    Feature  | ComponentId | Resource  | Reference Count
    Feature1 | {1000-…}    | File1.txt | 1
    Feature2 | {2000-…}    | File1.txt | 1

    The installation part works well, but what happens when you uninstall Feature2? Component {2000…} gets a refcount of zero, whereupon MSI deletes all resources belonging to this component. In this case File1.txt will be deleted. But Feature1 still has another component {1000…} with a refcount of one, which means that the file was deleted too early. You have just ruined your installation. To fix it you then need to click on the Repair button under Add/Remove Programs to let MSI reinstall any missing registry keys, files or shortcuts. The vigilant reader might have noticed that there is more in the Component table.
    Beside its name and GUID it also has an installation directory, attributes and a KeyPath. The KeyPath is a reference to a file or registry key which is used to detect if the component is already installed. This becomes important when you repair or uninstall a component. To find out if the component is already installed, MSI checks whether the registry key or file referenced by the KeyPath property exists. If it does not exist, MSI assumes the component was already uninstalled (which can lead to problems during uninstall); if it does exist, MSI assumes the component is installed and all is fine. Why is this detail so important? Let's put all files into one component. The KeyPath should then be one of the files of your component, used to check whether it was installed or not. When your installation becomes corrupt because a file was deleted, you cannot repair it with the Repair button under Add/Remove Programs, because MSI checks the component integrity via the resource referenced by its KeyPath. As long as you did not delete the KeyPath file, MSI thinks all resources of your component are installed and never executes any repair action. You get even more trouble when you try to remove files during an upgrade (you cannot remove files during an update) from your super component which contains all files. The only way out, and therefore best practice, is to assign an extra component to every resource you want to install. This ensures painless updates and repairs, and you need much less effort to remove specific files during an upgrade. In effect you get this best-practice relation:

    Feature    1 –> n Components
    Component  1 –> 1 Resource

    MSI Component Rules

    Rule 1 – One component per resource. Every resource you want to install (file, registry key, value, environment value, shortcut, directory, …) must get its own component, which never changes between versions as long as the install location is the same.

    Penalty: If you add more than one resource to a component you will break the repair capability of MSI, because the KeyPath is used to check if the component needs repair.

    MSI     | ComponentId | Files
    MSI 1.0 | {1000}      | File1-5
    MSI 2.0 | {2000}      | File2-5

    You want to remove File1 in version 2.0 of your MSI. Since you want to keep the other files you create a new component and add them there. MSI will delete all files when the component refcount of {1000} drops to zero. The files you want to keep are added to the new component {2000}. That does work if your upgrade uninstalls the old MSI first. This will cause the refcount of all previously installed components to reach zero, which means that all files present in version 1.0 are deleted. But there is a faster way to perform your upgrade: first install your new MSI and then remove the old one. If you choose this upgrade path you will lose File1-5 after your upgrade, and not only File1 as intended by your new component design.

    Rule 2 – Only add, never remove resources from a component. If you follow Rule 1 you will not need Rule 2. You can add more resources to one component in a patch. That is ok. But you can never remove anything from it. There are tricky ways around that, but I do not want to encourage bad component design.

    Penalty: Let's assume you have 2 MSI files which each install one file under the same component:

    MSI1                 | MSI2
    {1000} – ComponentId | {1000} – ComponentId
    File1.txt            | File2.txt

    When you install and uninstall both MSIs you will end up with an installation where either File1 or File2 is left. Why?
    It seems that MSI does not store the resources associated with each component in its internal database. Instead Windows simply queries the MSI that is currently being uninstalled for all resources belonging to this component. Since it will find only one file and not two, it will only uninstall one file. That is the main reason why you can never remove resources from a component!

    Rule 3 – Never remove a component from an update MSI. This is the same as accidentally changing the GUID of a component in your new update package. The resulting update package will not contain all components from the previously installed package.

    Penalty: When you remove a component from a feature, MSI will set the feature state during the update to Advertised and log a warning message into its log file (if you enabled MSI logging): SELMGR: ComponentId '{2DCEA1BA-3E27-E222-484C-D0D66AEA4F62}' is registered to feature 'xxxxxxx', but is not present in the Component table. Removal of components from a feature is not supported! MSI (c) (24:44) [07:53:13:436]: SELMGR: Removal of a component from a feature is not supported. Advertised means that MSI treats all components of this feature as not installed. As a consequence, during uninstall nothing will be removed, since it is not installed! This is not only bad because uninstall no longer works; this feature will also not get the required patches. All other features which have followed the component versioning rules for update packages will be updated, but the one faulty feature will not. This results in very hard to find bugs about why an update was only partially successful. Things got better with Windows Installer 4.5, but you cannot rely on nobody using an older installer. It is a good idea to add MSIENFORCEUPGRADECOMPONENTRULES=1 to your update msiexec call, which will abort the installation if you violated this rule.
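    To make those commands concrete, here is a minimal C# sketch (not part of the original article) that launches the recommended update command with full logging and the component-rule check enabled. The package path, log path and exit-code handling are illustrative assumptions.

        // Minimal sketch: run the update command described above from C#.
        using System;
        using System.Diagnostics;

        class MsiUpdateRunner
        {
            static void Main()
            {
                string msiPath = @"C:\installers\MyProduct.msi";   // hypothetical package location
                string logPath = @"C:\temp\MyProduct_update.log";  // hypothetical log location

                // /i ... REINSTALL=ALL REINSTALLMODE=vomus  -> update an already installed product
                // /l*vx <log>                               -> full verbose logging plus extra debug output
                // MSIENFORCEUPGRADECOMPONENTRULES=1         -> abort if component rules are violated
                var psi = new ProcessStartInfo
                {
                    FileName = "msiexec.exe",
                    Arguments = "/i \"" + msiPath + "\" REINSTALL=ALL REINSTALLMODE=vomus " +
                                "MSIENFORCEUPGRADECOMPONENTRULES=1 /l*vx \"" + logPath + "\"",
                    UseShellExecute = false
                };

                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();
                    // 0 = success, 3010 = success but a reboot is required.
                    Console.WriteLine("msiexec exit code: " + process.ExitCode);
                }
            }
        }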

    Read the article

  • On SQL Developer and TNSNAMES.ORA

    - by thatjeffsmith
    Tnsnames.ora [DOCS] is a configuration file for SQL*Net that describes the network service names for the databases in your organization. Basically, it tells Oracle applications how to find your databases. This post is just a quick overview on how to get SQL Developer to ‘see’ this file and define a connection. There’s only a single prerequisite for having SQL Developer set up such that it can use TNSNAMES to connect: you have a tnsnames.ora file somewhere. You don’t need a client, instant or otherwise, on your machine. You just need the file. Now, if you DO have a client or HOME on your machine, SQL Developer will look for those and find the tnsnames file for you. If we can’t find it in the usual places, you can simply tell us where it is via this preference: on the Database – Advanced page. Once you’ve done this, assuming you have a file (or 10) in that directory, we’ll read it, parse it, and list the entries in the connection dialog.

    The File(s): That’s right, files. Just like SQL*Plus, we’ll read any file that starts with ‘tnsnames’ – that includes files you’ve renamed to .bak or .old. Kris talks about that more here. I have just the one, which is all I need anyway. There we go!

    Defining the Connection: Just set the connection type to TNS. This is a lot easier than manually defining the connections – especially as they’re likely to change frequently in ‘the real world.’

    No Client or Home Required: That’s right. You don’t need an Oracle Client or $ORACLE_HOME to have SQL Developer see and read a TNS file. Just so you know I’m not cheating… SQL Dev doesn’t know which client to use and won’t use it even if it DID know… I’m able to define a new connection AND connect with these preferences ON|OFF.
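    For reference, here is the shape of a typical entry in that file; the alias, host and service name below are made up, but this is the kind of entry SQL Developer will parse and list in the connection dialog:

        HR_PROD =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
            (CONNECT_DATA =
              (SERVICE_NAME = hrprod.example.com)
            )
          )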

    Read the article

  • Windows and SQL Azure Best Practices: Affinity Groups

    - by BuckWoody
    When you create a Windows Azure application, you’ll pick a subscription to put it under. This is a billing container - underneath that, you’ll deploy a Hosted Service. That holds the Web and Worker Roles that you’ll deploy for your applications. Alongside that, you use the Storage Account to create storage for the application. (In some cases, you might choose to use only storage or Roles - the info here applies anyway.) As you are setting up your environment, you’re asked to pick a “region” where your application will run. If you choose a Region, you’ll be asked where to put the Roles. You’re given choices like Asia, North America and so on. This is where the hardware that physically runs your code lives. We have lots of fault domains, power considerations and so on to keep that set of datacenters running, but keep in mind that this is where the application lives. You also get this selection for Storage Accounts. When you make new storage, it’s a best practice to put it where your computing is. This makes the shortest path from the code to the data, and then back out to the user. One of the selections for the location is “Anywhere U.S.”. This selection might be interpreted to mean that we will bias towards keeping the data and the code together, but that may not be the case. There is a specific abstraction we created for just that purpose: Affinity Groups. An Affinity Group is simply a name you can use to tie together resources. You can do this in two places - when you’re creating the Hosted Service (shown above) and on its own tree item on the left, called “Affinity Groups”. When you select either of those actions, you’re presented with a dialog box that allows you to specify a name, and then the Region that the name ties the resources to. Now you can select that Affinity Group just as if it were a Region, and your code and data will stay together. That helps with keeping the performance high. Official Documentation: http://msdn.microsoft.com/en-us/library/windowsazure/hh531560.aspx

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new object ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC arc case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems. (32-bits use less memory and bus bandwidth, and on sparc runs just as fast as 64-bit code otherwise). Complex finely tuned code, where the original authors may no longer be available. Critical production code, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk adverse and/or high scale categories have good reasons to push 32-bits objects to the limit before moving on. Ancillary objects offer these users a longer runway. Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable objects, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SUNW_SHF_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature. 
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine grained debugging of highly optimized code requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ... 
-z ancillary[=outfile] ...By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object. .shstrtab The section name string table. .symtab The full non-dynamic symbol table. .symtab_shndx The symbol table extended index section associated with .symtab. .strtab The non-dynamic string table associated with .symtab. .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined. The primary object and all ancillary objects contain the same array of sections headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc. $ cat hello.c #include <stdio.h> int main(int argc, char **argv) { (void) printf("hello, world\n"); return (0); } $ cc -g -zancillary hello.c $ file a.out a.out.anc a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out $ ./a.out hello worldThe resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example. 
Index Section Name Type Primary Flags Ancillary Flags Primary Size Ancillary Size 13 .text PROGBITS ALLOC EXECINSTR ALLOC EXECINSTR SUNW_ABSENT 0x131 0 20 .data PROGBITS WRITE ALLOC WRITE ALLOC SUNW_ABSENT 0x4c 0 21 .symtab SYMTAB 0 0 0x450 0x450 22 .strtab STRTAB STRINGS STRINGS 0x1ad 0x1ad 24 .debug_info PROGBITS SUNW_ABSENT 0 0 0x1a7 28 .shstrtab STRTAB STRINGS STRINGS 0x118 0x118 29 .SUNW_ancillary SUNW_ancillary 0 0 0x30 0x30 The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table.symtab, and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbol contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together. $ elfdump -T SUNW_ancillary a.out a.out.anc a.out: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0x8724 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 a.out.anc: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0xfbe2 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows. Debugger Access and Use of Ancillary Objects Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files. 
Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined. Create a section header array in memory, using the section header array from the object being examined as an initial template. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set. The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program. Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects. ELF Implementation Details (From the Solaris Linker and Libraries Guide) To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual. Part IV ELF Application Binary Interface Chapter 13: Object File Format Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type. e_type Identifies the object file type, as listed in the following table. NameValueMeaning ET_NONE0No file type ET_REL1Relocatable file ET_EXEC2Executable file ET_DYN3Shared object file ET_CORE4Core file ET_LOSUNW0xfefeStart operating system specific range ET_SUNW_ANCILLARY0xfefeAncillary object file ET_HISUNW0xfefdEnd operating system specific range ET_LOPROC0xff00Start processor-specific range ET_HIPROC0xffffEnd processor-specific range Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section. ... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ... Table 13-5 ELF Section Types, sh_type NameValue . . . SHT_LOSUNW0x6fffffee SHT_SUNW_ancillary0x6fffffee . . . ... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ... Table 13-8 ELF Section Attribute Flags NameValue . . . 
SHF_MASKOS0x0ff00000 SHF_SUNW_NODISCARD0x00100000 SHF_SUNW_ABSENT0x00200000 SHF_SUNW_PRIMARY0x00400000 SHF_MASKPROC0xf0000000 . . . ... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects, will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files. SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one more input section with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ... Two members in the section header, sh_link, and sh_info, hold special information, depending on section type. Table 13-9 ELF sh_link and sh_info Interpretation sh_typesh_linksh_info . . . SHT_SUNW_ANCILLARY The section header index of the associated string table. 0 . . . Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section. Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes. Table 13-10 ELF Special Sections NameTypeAttribute . . . .SUNW_ancillarySHT_SUNW_ancillaryNone . . . ... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ... Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF. In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h. 
typedef struct { Elf32_Word a_tag; union { Elf32_Word a_val; Elf32_Addr a_ptr; } a_un; } Elf32_Ancillary; typedef struct { Elf64_Xword a_tag; union { Elf64_Xword a_val; Elf64_Addr a_ptr; } a_un; } Elf64_Ancillary; For each object with this type, a_tag controls the interpretation of a_un. a_val These objects represent integer values with various interpretations. a_ptr These objects represent file offsets or addresses. The following ancillary tags exist. Table 13-NEW1 ELF Ancillary Array Tags NameValuea_un ANC_SUNW_NULL0Ignored ANC_SUNW_CHECKSUM1a_val ANC_SUNW_MEMBER2a_ptr ANC_SUNW_NULL Marks the end of the ancillary section. ANC_SUNW_CHECKSUM Provides the checksum for a file in the c_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member. ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string, that provides the file name. An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as: TagMeaning ANC_SUNW_CHECKSUMChecksum of this object ANC_SUNW_MEMBERName of object #1 ANC_SUNW_CHECKSUMChecksum for object #1 . . . ANC_SUNW_MEMBERName of object N ANC_SUNW_CHECKSUMChecksum for object N ANC_SUNW_NULL An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match. Related Other Work The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post processing step, Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections, when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. 
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
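    As a closing illustration of the SHF_SUNW_ABSENT mechanics described earlier in this post, here is a rough C# sketch (not an Oracle tool, and invented for this write-up) that walks the section headers of a 32-bit little-endian ELF file, such as the a.out/a.out.anc pair in the example, and reports which sections carry the SHF_SUNW_ABSENT flag (0x00200000 per the flag table). It hard-codes the ELF32 layout and skips all validation.

        using System;
        using System.IO;

        class AbsentSections
        {
            const uint SHF_SUNW_ABSENT = 0x00200000; // flag value from the section attribute table

            static void Main(string[] args)
            {
                using (var r = new BinaryReader(File.OpenRead(args[0])))
                {
                    // ELF32 header: e_shoff at offset 0x20, e_shentsize at 0x2E, e_shnum at 0x30.
                    r.BaseStream.Seek(0x20, SeekOrigin.Begin);
                    uint shoff = r.ReadUInt32();
                    r.BaseStream.Seek(0x2E, SeekOrigin.Begin);
                    ushort shentsize = r.ReadUInt16();
                    ushort shnum = r.ReadUInt16();

                    for (int i = 0; i < shnum; i++)
                    {
                        // Elf32_Shdr layout: sh_name(+0), sh_type(+4), sh_flags(+8), sh_addr(+12),
                        // sh_offset(+16), sh_size(+20), ...
                        long baseOffset = shoff + (long)i * shentsize;
                        r.BaseStream.Seek(baseOffset + 8, SeekOrigin.Begin);
                        uint flags = r.ReadUInt32();
                        r.BaseStream.Seek(baseOffset + 20, SeekOrigin.Begin);
                        uint size = r.ReadUInt32();

                        if ((flags & SHF_SUNW_ABSENT) != 0)
                            Console.WriteLine("section {0}: data absent in this file (sh_size = {1})", i, size);
                    }
                }
            }
        }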

    Read the article

  • Selling Solutions, Not Products

    - by David Dorf
    When I think about next-generation retailers, the names that come to mind are Apple, Whole Foods, Lulu Lemon, and IKEA. They may not be the biggest retailers, but they are certainly growing fast. Success is never defined by just one dimension, and these retailers execute well across many dimensions, but the one that stands out for me is customer experience. These stores feel...approachable...part of the community...local. Customers are not intimidated to ask questions, and staff seem to go out of their way to help. What makes these retailers stand out in the industry? These retailers aren't selling products -- they're selling solutions. Think about that. You think you're going to the Apple store to buy a phone, but you're actually buying a communications solution that handles much, much more. If you carry an iPhone, your life has changed. The way you do things is different. The impact goes far beyond a simple phone. Solutions start with a problem, which is why these retailers greet customers with "what brought you in today," or "can I answer any questions for you?" Good retailers establish a relationship, even if it lasts only a few minutes. You don't walk into Whole Foods looking for cans of soup. You are looking for meals: healthy snacks, interesting lunches, exotic dinners. It's a learning experience where you might discover solutions to problems you didn't know you had. Mention what foods you like, and you'll get a list of similar items you had not considered. I didn't know I needed a closet organizer until I visited an IKEA and learned about all the options. They were able to customize the solution to meet my needs, and now I'm much more organized. One of the differences between selling products and selling solutions is training. Visit any of these retailers' sites and you'll see a long list of in-store events for the benefit of customers. You can buy exercise clothing from Lulu Lemon, and also learn new yoga techniques, meet like-minded people, and branch off to other fitness regimes via their ambassadors. You can visit the Genius Bar at Apple, eat lunch at IKEA, and learn to cook at Whole Foods. These retailers are making an investment in a relationship with their customers. They are showing loyalty to their customers before asking for it back. In the long run, this strategic approach will outlive any scan-and-bag mentality.

    Read the article

  • Passthrough Objects – Duck Typing++

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Can't see a genuine use for this, but I got the idea in my head and wanted to work it through. It's an extension to the idea of duck typing, for scenarios where types have similar behaviour, but implemented in differently-named members. So you may have a set of objects you want to treat as an interface, which don't implement the interface explicitly, and don't have the same member names so they can't be duck-typed into implicitly implementing the interface. In a fictitious example, I want to call Get on whichever ICache implementation is current, and have the call passed through to the relevant method – whether it's called Read, Retrieve or whatever. A sample implementation is up on github here: PassthroughSample. This uses Castle's DynamicProxy behind the scenes in the same way as my duck typing sample, but allows you to configure the passthrough to specify how the inner (implementation) and outer (interface) members are mapped:

        var setup = new Passthrough();
        var cache = setup.Create("PassthroughSample.Tests.Stubs.AspNetCache, PassthroughSample.Tests")
                         .WithPassthrough("Name", "CacheName")
                         .WithPassthrough("Get", "Retrieve")
                         .WithPassthrough("Set", "Insert")
                         .As<ICache>();

    - or using some ugly lambdas to avoid the strings:

        Expression<Func<ICache, string, object>> get = (o, s) => o.Get(s);
        Expression<Func<Memcached, string, object>> read = (i, s) => i.Read(s);
        Expression<Action<ICache, string, object>> set = (o, s, obj) => o.Set(s, obj);
        Expression<Action<Memcached, string, object>> insert = (i, s, obj) => i.Put(s, obj);

        ICache cache = new Passthrough<ICache, Memcached>()
                        .Create()
                        .WithPassthrough(o => o.Name, i => i.InstanceName)
                        .WithPassthrough(get, read)
                        .WithPassthrough(set, insert)
                        .As();

    - or even in config:

        ICache cache = Passthrough.GetConfigured<ICache>();
        ...
        <passthrough>
          <types>
            <type name="PassthroughSample.Tests.Stubs.ICache, PassthroughSample.Tests"
                  passesThroughTo="PassthroughSample.Tests.Stubs.AppFabricCache, PassthroughSample.Tests">
              <members>
                <member name="Name" passesThroughTo="RegionName"/>
                <member name="Get" passesThroughTo="Out"/>
                <member name="Set" passesThroughTo="In"/>
              </members>
            </type>

    Possibly useful for injecting stubs for dependencies in tests, when your application code isn't using an IoC container. Possibly it also has an alternative implementation using .NET 4.0 dynamic objects, rather than the dynamic proxy.
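    For readers curious what sits behind a passthrough like this, here is a rough sketch of the idea rather than the PassthroughSample source itself; the interceptor, its dictionary-based name map and the commented usage are invented for illustration, using the Castle DynamicProxy types the post mentions:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using Castle.DynamicProxy;

        // Hypothetical interceptor: forwards a call on the outer interface to a
        // differently-named member on the inner implementation object.
        public class PassthroughInterceptor : IInterceptor
        {
            private readonly object _inner;
            private readonly IDictionary<string, string> _memberMap; // outer name -> inner name

            public PassthroughInterceptor(object inner, IDictionary<string, string> memberMap)
            {
                _inner = inner;
                _memberMap = memberMap;
            }

            public void Intercept(IInvocation invocation)
            {
                // Note: property access arrives as get_/set_ method names (e.g. "get_Name").
                string outerName = invocation.Method.Name;
                string innerName = _memberMap.ContainsKey(outerName) ? _memberMap[outerName] : outerName;

                var innerMethod = _inner.GetType().GetMethod(
                    innerName,
                    invocation.Method.GetParameters().Select(p => p.ParameterType).ToArray());

                invocation.ReturnValue = innerMethod.Invoke(_inner, invocation.Arguments);
            }
        }

        // Usage sketch: treat a Memcached-style object as ICache.
        // var cache = new ProxyGenerator().CreateInterfaceProxyWithoutTarget<ICache>(
        //     new PassthroughInterceptor(new Memcached(),
        //         new Dictionary<string, string> { { "Get", "Read" }, { "Set", "Put" } }));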

    Read the article

  • What was missing from the Content Strategy Forum?

    - by Roger Hart
    In April, Paris hosted the first ever Content Strategy Forum. The event's website proudly proclaims: 170 attendees, 18 nationalities, 17 speakers, 1 volcano... Content Strategy Forum 2010 rocked the world! The volcano was in Iceland, and the closest we came to rocking the world was a cursory mention in the Huffington Post, but I'll grant the event was awesome. One thing missing from that list, however, is "94 companies" (Plus a couple of universities and freelancers, and what have you). A glance through the attendees directory reveals a fairly wide organisational turnout - 24 students from two Parisian universities, countless design and marketing agencies, a series of tech firms, small and large. Two delegates from IBM, two from ARM, an appearance from RIM, Skype, and Facebook; twelve from the various bits of eBay. Oh, and, err, nobody from Google, Microsoft, Yahoo, Amazon, Play, Twitter, LinkedIn, Craigslist, the BBC, no banks I noticed, and I didn't spot a newspaper. You get the idea. Facebook notwithstanding, you have to scroll through a few pages to Alexa rankings to find company names from the attendee list. I find this interesting, and I'm not wholly sure what to make of it. Of the large, web-centric, content-rich organizations conspicuously absent, at least one of two things is true: They didn't know about the event They didn't care about the event Maybe these guys all have content strategy completely sorted, and it's an utterly naturalised part of their business process. Maybe nobody at say, Apple or Play.com ever publishes a single piece of content that isn't neatly tailored to their (clearly defined, of course) user and business goals. Wouldn't that be lovely? The thing is, in that rosy and beatific world, there's still a case for those folks to join the community. There are bound to be other perspectives, and things to learn. You see, the other thing achingly conspicuous by its absence was case studies. In her keynote address, Kristina Halvorson made the point that what content strategy really needs is some big, loud success stories. A point I'd firmly second as a content strategist working within an organisation. Sarah Cancilla's presentation on content strategy at Facebook included some very neat, specific examples, and was richer for it. It didn't hurt that the example was Facebook - you're getting impressively big numbers off base. What about the other big boys? Is there anybody out there with a perspective? Do we all just look very silly to you, fretting away over text and images and users and purposes? Is content validation and maintenance so accustomed a part of your business that calling attention to it is like sniffing the air and saying "Hmm, a lot of nitrogen about today."? And if it is, do you have any wisdom to share?

    Read the article

  • Solution 6 : Kill a Non-Clustered Process during Two-Node Cluster Failover

    - by StanleyGu
    Using Visual Studio 2008 and C#, I developed a windows service A and deployed it to two nodes of a windows server 2008 failover cluster. The service A is part of the failover cluster service, which means that when failover occurs at node1, the cluster service will fail the windows service A over from node 1 to node 2. One of the tasks implemented by the windows service A is to start, monitor or kill a process B. The process B is installed on the two nodes but is not part of the failover cluster service. When a failover occurs at node1, the cluster service does not fail the process B over from node 1 to node 2, and the process B continues running at node1. The requirement is: when failover occurs at node1, we want the process B running at node1 to be killed, but we do not want the process B to be part of the failover cluster service.

    The first idea that pops up immediately is to put some code in an event handler triggered by the failover in the windows service A. The effect of a failover on the windows service A is similar to using the task manager to kill the process of the windows service A, but there is no windows service event that is triggered by killing the process of the windows service. The events related to terminating a windows service are OnStop and OnShutdown, but killing the process of windows service A triggers neither of them. The OnStop event can only be triggered by stopping the windows service using the Services Control Manager or the Services Management Console. Apparently, the first idea is not feasible.

    The second idea that emerges is to put code into the OnStart event handler of the windows service A. When failover occurs at node 1, the windows service A is killed at node 1 and started at node 2. During startup, the windows service A at node 2 kills the process B that is still running at node 1. It is a workaround, and it works very well. The C# implementation within the OnStart event handler is as follows:

    1. Capture the server names of the two nodes from App.config.
    2. Determine the server name of the remote node.
    3. Kill the process B running on the remote node.

    Check here for sample code.
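    As a rough illustration of those three steps (this is not the author's linked sample; the config keys, process name and WMI-based remote kill are assumptions), the OnStart handler could look something like this:

        using System;
        using System.Configuration;
        using System.Management;        // requires a reference to System.Management.dll
        using System.ServiceProcess;

        public class ServiceA : ServiceBase
        {
            protected override void OnStart(string[] args)
            {
                // 1. Capture the two node names from App.config (hypothetical appSettings keys).
                string node1 = ConfigurationManager.AppSettings["Node1"];
                string node2 = ConfigurationManager.AppSettings["Node2"];

                // 2. Determine the remote node: the one that is not this machine.
                string local = Environment.MachineName;
                string remote = string.Equals(local, node1, StringComparison.OrdinalIgnoreCase) ? node2 : node1;

                // 3. Kill process B on the remote node via WMI
                //    (Process.Kill only works for local processes).
                var scope = new ManagementScope(@"\\" + remote + @"\root\cimv2");
                scope.Connect();
                var query = new ObjectQuery("SELECT * FROM Win32_Process WHERE Name = 'ProcessB.exe'");
                using (var searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject process in searcher.Get())
                    {
                        process.InvokeMethod("Terminate", null);
                    }
                }
            }
        }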

    Read the article

  • MVVM Light Toolkit V3 SP1 for Windows Phone 7

    - by Laurent Bugnion
    He he, I start to sound like Microsoft… Anyway… I just released a service pack (SP1) for MVVM Light Toolkit V3. Why? Well, mostly because I worked a bit more with the Windows Phone 7 tools that were released at MIX10, and I noticed a few things that could be better in the Windows Phone 7 template. Also, I only found out at MIX that you can actually install custom project templates for Visual Studio Express. For some reason I thought it was not possible. The best way to solve these issues is through a service pack, which consists of a few zip files. Simply follow the instructions on the “Installing Manually” page. You can go ahead and overwrite the files that were installed with V3; the file structure and names are exactly the same.

    What? So what do you get in this service pack that was not already in V3? (For more info about what’s new in V3, check the What’s New page.)
    Project and Item templates for Visual Studio 2010 Express (phone edition): Unzip these files in your “My Documents” folder, and you can now create a new MVVM Light application in the WinPhone7 version of Visual Studio 2010 Express.
    Signed assemblies: All the assemblies are now signed, which is a requirement in certain build configurations.
    XML documentation files: Thanks to Matt Casto for pinging me and reminding me that I had forgotten to include them (doh).
    New and improved Windows Phone 7 assemblies and templates: This one deserves its own section (see below).

    What was wrong with the old Silverlight 3 assemblies in Windows Phone 7 projects? It was kind of weird. Functionality-wise, it was working just right. However, if you noticed, the EventToCommand behavior was not visible in the Assets tab of Expression Blend, under Behaviors, where it should normally have been. The reason was that even though Windows Phone 7 is using Silverlight 3, the System.Windows.Interactivity that Blend was expecting is the version that is normally used in Silverlight 4. Yeah, I know, it’s weird. This led me to create a specific version of these assemblies for the phone. The assemblies are located in C:\Program Files\Laurent Bugnion (GalaSoft)\Mvvm Light Toolkit\Binaries\WP7. There are 3 DLLs:
    GalaSoft.MvvmLight.WP7.dll with RelayCommand, Messenger and ViewModelBase
    GalaSoft.MvvmLight.Extras.WP7.dll with EventToCommand and DispatcherHelper
    System.Windows.Interactivity.dll which is the same DLL installed in the Blend SDK, and which is needed for the EventToCommand behavior to work.

    Happy coding! That’s all! Download and install the service pack according to the instructions on the Installation page, and create your first MVVM Light application for the phone (a blog post will follow later with more details).

    Laurent Bugnion (GalaSoft)
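    As a quick reminder of what the core pieces in GalaSoft.MvvmLight.WP7.dll give you, here is a minimal, illustrative view model; it is not taken from the toolkit's project templates, and the property and command names are made up:

        using GalaSoft.MvvmLight;
        using GalaSoft.MvvmLight.Command;

        // Hypothetical view model using ViewModelBase and RelayCommand from MVVM Light V3.
        public class MainViewModel : ViewModelBase
        {
            private string _welcomeTitle = "MVVM Light on Windows Phone 7";

            public string WelcomeTitle
            {
                get { return _welcomeTitle; }
                set
                {
                    _welcomeTitle = value;
                    RaisePropertyChanged("WelcomeTitle"); // string-based property change notification in V3
                }
            }

            public RelayCommand RefreshCommand { get; private set; }

            public MainViewModel()
            {
                RefreshCommand = new RelayCommand(() => WelcomeTitle = "Refreshed!");
            }
        }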

    Read the article

  • Autoscaling in a modern world&hellip;. Part 3

    - by Steve Loethen
    The Wasabi Hands-on Labs give you a good look at the basic mechanics, but I don’t find the setup too practical.  Using a local console application to host the Autoscaler and rules files is probably (IMHO) the least likely architecture.  Far more common would be hosting in a service on premise (if you want to have the Autoscaler local) or, most likely, hosting it in an Azure role of its own.  I chose to go the Azure route. The first step was to get the rules.xml and the services.xml files into the cloud.  I tend to be a “one step at a time” sort of guy, so running the console application with the rules sitting in a set of Azure-hosted blobs seemed to be the logical first step.  Here are the steps:
    1) Create a container in the storage account you wish to use.  The name does not matter; you will get a chance to set the container name (as well as the file names) in the app.config.
    2) Copy the two files from where you created them to your container.  I used the same files I had locally.  I made the container public to eliminate security issues, but in the final application a bit of security needs to be applied (one problem at a time).  The content type was set to text/xml.  I found one reference claiming the importance of this step, and it makes sense.
    3) Adjust the app.config to set the location of the files.  This will let you set all the storage account and key information needed to reach into the cloud from your console application.  The sections of your app.config will look like this:
    <rulesStores>
      <add name="Blob Rules Store"
           type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.Rules.Configuration.BlobXmlFileRulesStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           blobContainerName="[ContainerName]"
           blobName="rules.xml"
           storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
           monitoringRate="00:00:30"
           certificateThumbprint=""
           certificateStoreLocation="LocalMachine"
           checkCertificateValidity="false" />
    </rulesStores>
    <serviceInformationStores>
      <add name="Blob Service Information Store"
           type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling.ServiceModel.Configuration.BlobXmlFileServiceInformationStore, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling, Version=5.0.1118.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
           blobContainerName="[ContainerName]"
           blobName="services.xml"
           storageAccount="DefaultEndpointsProtocol=https;AccountName=[StorageAccount];AccountKey=[AccountKey]"
           monitoringRate="00:00:30"
           certificateThumbprint=""
           certificateStoreLocation="LocalMachine"
           checkCertificateValidity="false" />
    </serviceInformationStores>
    Once I had the files up in the sky, I renamed the local copies, just to make myself feel better about the application using the correct set of rules and services.  Deploy the web role to the cloud.  Once it is up and running, start the console application.  You should find the application scales up and down in response to the buttons on the web site.  Tune in next time for moving the hosting of the Autoscaler to a worker role, discussions on getting the logging information into diagnostics and into storage, and a set of discussions about certs and how they play a role.
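    For reference, the console host that drives all of this stays tiny once the app.config above points at the blob-backed stores. A hedged sketch, based on the pattern from the Wasabi hands-on labs (namespaces and types as I recall them from that lab, so verify against your installed version of the Enterprise Library Integration Pack):

    using System;
    using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
    using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;

    class Program
    {
        static void Main()
        {
            // The Autoscaler picks up the rules store and service information store
            // configured in app.config (here, the blob-backed stores shown above).
            Autoscaler autoscaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();

            autoscaler.Start();
            Console.WriteLine("Autoscaler running. Press Enter to stop.");
            Console.ReadLine();

            autoscaler.Stop();
        }
    }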

    Read the article

  • Network Solutions Gold VIP Program

    - by GGBlogger
    Today I received an email advising me “Congratulations! You have been selected for the Network Solutions Gold VIP Program.” Now, I get a bunch of messages from Network Solutions because a long time ago I chose them as my registrar. I usually just save these to my Outlook Network Solutions folder and move on. This time my wife was in the room and I chuckled and said something about “Network Solutions has made me a Gold VIP member.” She said “So what does that mean?” Prompted by that, I scrolled down the page to look at my “perks”. Let’s see – special pricing, 1 year of free Web Forwarding, 1 hour of free support… etc. etc. until I hit “Enrollment in SafeRenew*. SafeRenew* simplifies your renewal process. It protects your domain name registrations and corresponding services in the event you forget or are unable to renew on time.” I’m thinking “Now ain’t that neat” – I don’t use auto renew anyway and never have. Then I continued reading: “Beginning June 24th, 2012 your domains* (that’s 10 days away) will automatically renew. To ensure continuation of service, please be certain you have a valid credit card on file. If you do not wish for your services to be automatically renewed, please click here to log into account manager and opt out of the SafeRenew™ service.” Whoa – Network Solutions is going to start spending my money without so much as a by-your-leave, sir!!! I don’t bloody think so – and how truly magnanimous of them that I can “click here” to opt out of what I didn’t opt in for. Talk about furious! I still am. Now the kicker is: if my wife hadn’t been curious, and if I’d had a working credit card on file, I wouldn’t have known this until one of my unwanted domain names auto-renewed. Of course I was told “Oh – we’d have refunded your money,” to which I say bull. In my view Network Solutions is guilty of a crime of some sort. They DO NOT have a right to spend my money without asking!!!!! So watch out, folks.

    Read the article

  • Plan Operator Tuesday round-up

    - by Rob Farley
    Eighteen posts for T-SQL Tuesday #43 this month, discussing Plan Operators. I put them together and made the following clickable plan. It’s 1000px wide, so I hope you have a monitor wide enough. Let me explain this plan for you (people’s names are the links to the articles on their blogs – the same links as in the plan above). It was clearly a SELECT statement. Wayne Sheffield (@dbawayne) wrote about that, so we start with a SELECT physical operator, leveraging the logical operator Wayne Sheffield. The SELECT operator calls the Paul White operator, discussed by Jason Brimhall (@sqlrnnr) in his post. The Paul White operator is quite remarkable, and can consume three streams of data. Let’s look at those streams. The first pulls data from a Table Scan – Boris Hristov (@borishristov)’s post – using parallel threads (Bradley Ball – @sqlballs) that pull the data eagerly through a Table Spool (Oliver Asmus – @oliverasmus). A scalar operation is also performed on it, thanks to Jeffrey Verheul (@devjef)’s Compute Scalar operator. The second stream of data applies Evil (I figured that must mean a procedural TVF, but could’ve been anything), courtesy of Jason Strate (@stratesql). It performs this Evil on the merging of parallel streams (Steve Jones – @way0utwest), which suck data out of a Switch (Paul White – @sql_kiwi). This Switch operator is consuming data from up to four lookups, thanks to Kalen Delaney (@sqlqueen), Rick Krueger (@dataogre), Mickey Stuewe (@sqlmickey) and Kathi Kellenberger (@auntkathi). Unfortunately Kathi’s name is a bit long and has been truncated, just like in real plans. The last stream performs a join of two others via a Nested Loop (Matan Yungman – @matanyungman). One pulls data from a Spool (my post – @rob_farley) populated from a Table Scan (Jon Morisi). The other applies a catchall operator (the catchall is because Tamera Clark (@tameraclark) didn’t specify any particular operator, and a catchall is what gets shown when SSMS doesn’t know what to show. Surprisingly, it’s showing the yellow one, which is about cursors. Hopefully that’s not what Tamera planned, but anyway...) to the output from an Index Seek operator (Sebastian Meine – @sqlity). Lastly, I think everyone put in 110% effort, so that’s what all the operators cost. That didn’t leave anything for me, unfortunately, but that’s okay. Also, because he decided to use the Paul White operator, Jason Brimhall gets 0%, and his 110% was given to Paul’s Switch operator post. I hope you’ve enjoyed this T-SQL Tuesday, and have learned something extra about Plan Operators. Keep your eye out for next month’s one by watching the Twitter Hashtag #tsql2sday, and why not contribute a post to the party? Big thanks to Adam Machanic as usual for starting all this. @rob_farley

    Read the article

  • Is your test method self-validating ?

    - by mehfuzh
    Writing state-of-the-art unit tests that can validate every part of your framework is challenging and interesting at the same time; it is like becoming a samurai. One of the key concepts is to keep our tests in sync as the underlying code changes, and to break them down into the smallest units possible.  This also means we should avoid multiple conditions embedded in a single test. Let’s consider the following example of transferring funds.
    [Fact]
    public void ShouldAssertTranserFunds()
    {
        var currencyService = Mock.Create<ICurrencyService>();
        //// current rate
        Mock.Arrange(() => currencyService.GetConversionRate("AUS", "CAD")).Returns(0.88f);
        Account to = new Account { Currency = "AUS", Balance = 120 };
        Account from = new Account { Currency = "CAD" };
        AccountService accService = new AccountService(currencyService);
        Assert.Throws<InvalidOperationException>(() => accService.TranferFunds(to, from, 200f));
        accService.TranferFunds(to, from, 100f);
        Assert.Equal(from.Balance, 88);
        Assert.Equal(20, to.Balance);
    }
    At first look it seems OK, but as you look more closely, it is actually doing two tasks in one test. At line #10 it is trying to validate the exception for an invalid fund transfer, and finally it is asserting whether the currency conversion was successfully made. Here, the name of the test itself is pretty vague. The first rule is that the name of a unit test should reflect the inner workings of the target code, so that it is self-explanatory just by looking at it. Having an obscure name for a test method not only increases the chances of cluttering the test code, it also invites adding multiple paths into it and eventually makes things as messy as possible. I would rather have two test methods that explicitly describe their intent and are more self-validating: ShouldThrowExceptionForInvalidTransferOperation and ShouldAssertTransferForExpectedConversionRate. Having this type of breakdown also helps us pinpoint reported bugs easily, rather than wasting time debugging something more general, and it can minimize confusion among team members. Finally, we should always make our tests F.I.R.S.T (Fast. Independent. Repeatable. Self-validating. Timely) [Bob Martin – Clean Code]. Only this will ensure our tests are as simple and clean as possible.   Hope that helps
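    A possible breakdown into the two methods named above, shown as a hedged sketch that reuses the Account, AccountService and ICurrencyService types plus the JustMock/xUnit calls from the example (only the arrangement each test actually needs is kept):

    using System;
    using Telerik.JustMock;
    using Xunit;

    public class AccountServiceTests
    {
        [Fact]
        public void ShouldThrowExceptionForInvalidTransferOperation()
        {
            var currencyService = Mock.Create<ICurrencyService>();
            Account to = new Account { Currency = "AUS", Balance = 120 };
            Account from = new Account { Currency = "CAD" };
            AccountService accService = new AccountService(currencyService);

            // Transferring more than the available balance should fail.
            Assert.Throws<InvalidOperationException>(() => accService.TranferFunds(to, from, 200f));
        }

        [Fact]
        public void ShouldAssertTransferForExpectedConversionRate()
        {
            var currencyService = Mock.Create<ICurrencyService>();
            Mock.Arrange(() => currencyService.GetConversionRate("AUS", "CAD")).Returns(0.88f);
            Account to = new Account { Currency = "AUS", Balance = 120 };
            Account from = new Account { Currency = "CAD" };
            AccountService accService = new AccountService(currencyService);

            accService.TranferFunds(to, from, 100f);

            // 100 AUS at a 0.88 rate should credit 88 to the target account.
            Assert.Equal(88, from.Balance);
            Assert.Equal(20, to.Balance);
        }
    }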

    Read the article

  • Control Sysinternals Suite & NirSoft Utilities with a Single Interface

    - by Asian Angel
    Sysinternals and NirSoft both provide helpful utilities for your Windows system but may not be very convenient to access. Using the Windows System Control Center you can easily access everything through a single UI front end. Setup The first thing to do is set up three new folders in Program Files (or Program Files (x86) if you are using a 64bit system) with the following names (the first two need to exactly match what is shown here): Sysinternals Suite NirSoft Utilities (create this folder only if you have any of these apps downloaded) Windows System Control Center (or WSCC depending on your preferences) Unzip the contents of the Sysinternals Suite into its’ folder. Then unzip any individual NirSoft Utilities programs that you have downloaded into the NirSoft folder. All that is left to do is to unzip the WSCC software into its’ folder and create a shortcut. WSCC in Action When you start WSCC up for the first time you will see the following message with a brief explanation about the software. Next the options window will appear providing you an opportunity to look around and make any desired changes. WSCC can access utilities for both suites using a live connection if needed (utilities accessed live are not downloaded). Note: This occurs on the first run only. This is the main WSCC window…you can choose the utility that you want to use by sorting through an all items list or based on category. Note: WSCC may occasionally experience a problem downloading a particular utility if using the live service. We conducted a quick test by accessing two Sysinternals apps. First PsInfo… Followed by DiskView. Both opened quickly and were ready to go. There were no NirSoft Utilities installed on our test system in order to provide a live access example. Within moments WSCC accessed the CurrProcess utility and had it running on our system. Our recommendation is to download your favorite utilities from both suites (in order to always have easy access to them). Conclusion WSCC provides an easy way to access all of the apps in the Sysinternals Suite and NirSoft Utilities in one place. Note: A PortableApps version is also available. Links Download Windows System Control Center (WSCC) Download Windows Sysinternals Suite Download individual NirSoft Utilities programs Similar Articles Productive Geek Tips How To Get Detailed Information About Your PCAccess and Launch Windows Utilities the Easy WayWhat is svchost.exe And Why Is It Running?How to Clean Up Your Messy Windows Context MenuRemove NVIDIA Control Panel from Desktop Right-Click Menu TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips CloudBerry Online Backup 1.5 for Windows Home Server Snagit 10 VMware Workstation 7 Acronis Online Backup Ultimate Boot CD can help when disaster strikes Windows Firewall with Advanced Security – How To Guides Sculptris 1.0, 3D Drawing app AceStock, a Tiny Desktop Quote Monitor Gmail Button Addon (Firefox) Hyperwords addon (Firefox)

    Read the article

  • World Record Performance on PeopleSoft Enterprise Financials Benchmark on SPARC T4-2

    - by Brian
    Oracle's SPARC T4-2 server achieved World Record performance on Oracle's PeopleSoft Enterprise Financials 9.1, executing 20 million journal lines in 8.92 minutes on Oracle Database 11g Release 2 running on Oracle Solaris 11. This is the first result published on this version of the benchmark. The SPARC T4-2 server was able to process 20 million general ledger journal edit and post batch jobs in 8.92 minutes on this benchmark, which reflects a large customer environment that utilizes a back-end database of nearly 500 GB. This benchmark demonstrates that the SPARC T4-2 server with PeopleSoft Financials 9.1 can easily process 100 million journal lines in less than 1 hour. The SPARC T4-2 server delivered more than 146 MB/sec of IO throughput with Oracle Database 11g running on Oracle Solaris 11.
    Performance Landscape
    Results are presented for PeopleSoft Financials Benchmark 9.1. Results obtained with PeopleSoft Financials Benchmark 9.1 are not comparable to the previous version of the benchmark, PeopleSoft Financials Benchmark 9.0, due to a significant change in the data model; version 9.1 supports batch only.
    PeopleSoft Financials Benchmark, Version 9.1:
    SPARC T4-2 (2 x SPARC T4, 2.85 GHz) – Batch: 8.92 minutes
    Results from PeopleSoft Financials Benchmark 9.0:
    SPARC Enterprise M4000 (Web/App) and SPARC Enterprise M5000 (DB) – Batch: 33.09 minutes, Batch with Online: 34.72 minutes
    SPARC T3-1 (Web/App) and SPARC Enterprise M5000 (DB) – Batch: 35.82 minutes, Batch with Online: 37.01 minutes
    Configuration Summary
    Hardware Configuration: 1 x SPARC T4-2 server, 2 x SPARC T4 processors, 2.85 GHz, 128 GB memory
    Storage Configuration: 1 x Sun Storage F5100 Flash Array (for database and redo logs), 2 x Sun Storage 2540-M2 arrays and 2 x Sun Storage 2501-M2 arrays (for backup)
    Software Configuration: Oracle Solaris 11 11/11 SRU 7.5, Oracle Database 11g Release 2 (11.2.0.3), PeopleSoft Financials 9.1 Feature Pack 2, PeopleSoft Supply Chain Management 9.1 Feature Pack 2, PeopleSoft PeopleTools 8.52 latest patch - 8.52.03, Oracle WebLogic Server 10.3.5, Java Platform, Standard Edition Development Kit 6 Update 32
    Benchmark Description
    The PeopleSoft Enterprise Financials 9.1 benchmark emulates a large enterprise that processes and validates a large number of financial journal transactions before posting the journal entry to the ledger. The validation process certifies that the journal entries are accurate, ensuring that ChartFields values are valid, debits and credits equal out, and inter/intra-units are balanced. Once validated, the entries are processed, ensuring that each journal line posts to the correct target ledger, and then the journal status is changed to posted. In this benchmark, Journal Edit & Post is set up to edit and post both Inter-Unit and Regular multi-currency journals. The benchmark processes 20 million journal lines using AppEngine for edits and Cobol for post processes.
    See Also Oracle PeopleSoft Benchmark White Papers oracle.com SPARC T4-2 Server oracle.com OTN PeopleSoft Financial Management oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN
    Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • Search and Browse Database Objects with Oracle SQL Developer

    - by thatjeffsmith
    I was tempted to throw in another Dora the Explorer Map reference here, but I came to my senses.Having trouble finding something? Maybe you’re just getting older? I know I am. But still, it’d be nice if my favorite database tool could help me out a bit. Hmmm, what’s this ‘Find Database Object‘ thing over here…sounds like a search mechanism of some sort? You can access this panel from the ‘View‘ menu. It’s a good bit down the screen, so I don’t blame you if you haven’t seen it before. It makes finding ‘stuff’ in your database so much easier. Let’s say I want to find my ‘beer’ objects. I simply need to type my search string and the context (in this case I want it to search EVERYTHING), and hit enter. The search results are listed below and clicking on an object automatically opens it! I know it seems very simple, but I get asked this question a LOT. It will even search through your PL/SQL code! Finding too much? Be sure to toggle off the ‘%’ wildcard check box before doing a search. Working on a Project? I bet you use common column names, or codes, throughout your tables. You could take advantage of this knowledge and use the Find Database Object panel as a substitute connection tree or schema browser. Working on your HR project and want to look at your employee objects? Do a column search for your column ID/key. Sometimes thinking outside the box actually works! Don’t be afraid to tackle a problem from a weird angle, or re-purpose your tools. I do it all the time And I drive the developers nuts trying to do things with the tools they were never designed to do. But I digress. Back to your coding!

    Read the article

  • View HTML Tags and Webpage Combined in Firefox

    - by Asian Angel
    Do you want an easier way to see a webpage’s html tags without viewing the source code in a separate window? Now you can view the webpage and tags combined in the same window using the X-Ray extension for Firefox. Before Usually if you want to see the source code behind a webpage you have to view it in a separate window. If you are only interested in a specific section then you have to search through the entire set of code just to find what you are looking for. After The X-Ray extension will let you see the document’s tags (including class and ID names) “side by side” with the webpage in the same tab. You can use either the context menu or the tools menu to access the X-Ray command. Here is the same webpage section shown in the first screenshot above. It may look a little odd at first until you get used to seeing both together. Note: You can return the webpage to its’ normal view by either clicking on the X-Ray command again or refreshing the page. The code for part of the sidebar on the same webpage… Followed by one of the sets of links at the end. Looking at another example suppose you are interested in how part of the main feed is set up. Being able to see how a particular element is set up directly in the webpage is certainly better than searching through the entire page of code. Conclusion If you design webpages and want an easy way to see how someone else’s website is coded then you may want to give this extension a try. Links Download the X-Ray extension (Mozilla Add-ons) Similar Articles Productive Geek Tips View Webpage Source Code in Tabs in FirefoxCreate Pre-Formatted Links in FirefoxRemove Webpage Formatting or View the HTML Code When Copying in FirefoxInsert Special Characters & Coding in Online Forms in FirefoxCombine the Address Bar and Progress Bar Together in Firefox TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips HippoRemote Pro 2.2 Xobni Plus for Outlook All My Movies 5.9 CloudBerry Online Backup 1.5 for Windows Home Server Convert BMP, TIFF, PCX to Vector files with RasterVect Free Identify Fonts using WhatFontis.com Windows 7’s WordPad is Actually Good Greate Image Viewing and Management with Zoner Photo Studio Free Windows Media Player Plus! – Cool WMP Enhancer Get Your Team’s World Cup Schedule In Google Calendar

    Read the article

  • First steps into css - aligning data insite one DIV [on hold]

    - by Andrew
    I am trying to move away from tables and start doing CSS. Here is the HTML code that I am currently trying to place into a nice-looking container:
    <div>
      <div>
        <h2>ID: 4000 | SSN#: 4545</h2>
      </div>
      <div>
        <img src="./images/tenant/unknown.png">
      </div>
      <div>
        <h3>Names Used</h3>
        Will Smith<br>
        Bill Smmith<br>
        John Smith<br>
        Will Smith<br>
        Bill Smmith<br>
        John Smith<br>
        Will Smith<br>
        Bill Smmith<br>
        John Smith<br>
      </div>
      <div>
        <h3>Phones Used</h3>
        123456789<br>
        123456789<br>
        123456789<br>
        123456789<br>
        123456789<br>
        123456789<br>
        123456789<br>
        123456789<br>
      </div>
      <div>
        <h3>Addresses Used</h3>
        125 Main Evanston IL 60202<br>
        465 Greenwood St. Schaumburg null 60108<br>
        125 Main Evanston IL 60202<br>
        465 Greenwood St. Schaumburg null 60108<br>
        125 Main Evanston IL 60202<br>
        465 Greenwood St. Schaumburg null 60108<br>
        125 Main Evanston IL 60202<br>
        465 Greenwood St. Schaumburg null 60108<br>
        125 Main Evanston IL 60202<br>
        465 Greenwood St. Schaumburg null 60108<br>
      </div>
    </div>
    I now understand how to create classes and assign them to elements, and I have no issues doing colors. But I am very confused about element alignment. Could you suggest a nice way to pack this together with some CSS which I can analyze and take as a starting point for learning CSS?

    Read the article

  • Remove the Lock Icon from a Folder in Windows 7

    - by Trevor Bekolay
    If you’ve been playing around with folder sharing or security options, then you might have ended up with an unsightly lock icon on a folder. We’ll show you how to get rid of that icon without over-sharing it. The lock icon in Windows 7 indicates that the file or folder can only be accessed by you, and not any other user on your computer. If this is desired, then the lock icon is a good way to ensure that those settings are in place. If this isn’t your intention, then it’s an eyesore. To remove the lock icon, we have to change the security settings on the folder to allow the Users group to, at the very least, read from the folder. Right-click on the folder with the lock icon and select Properties. Switch to the Security tab, and then press the Edit… button. A list of groups and users that have access to the folder appears. Missing from the list will be the “Users” group. Click the Add… button. The next window is a bit confusing, but all you need to do is enter “Users” into the text field near the bottom of the window. Click the Check Names button. “Users” will change to the location of the Users group on your particular computer. In our case, this is PHOENIX\Users (PHOENIX is the name of our test machine). Click OK. The Users group should now appear in the list of Groups and Users with access to the folder. You can modify the specific permissions that the Users group has if you’d like – at the minimum, it must have Read access. Click OK. Keep clicking OK until you’re back at the Explorer window. You should now see that the lock icon is gone from your folder! It may be a small aesthetic nuance, but having that one folder stick out in a group of other folders is needlessly distracting. Fortunately, the fix is quick and easy, and does not compromise the security of the folder! Similar Articles Productive Geek Tips What is this "My Sharing Folders" Icon in My Computer and How Do I Remove It?Lock The Screen While in Full-Screen Mode in Windows Media PlayerHave Windows Notify You When You Accidentally Hit the Caps Lock KeyWhy Did Windows Vista’s Music Folder Icon Turn Yellow?Create Shutdown / Restart / Lock Icons in Windows 7 or Vista TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips Acronis Online Backup DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows Check these Awesome Chrome Add-ons iFixit Offers Gadget Repair Manuals Online Vista style sidebar for Windows 7 Create Nice Charts With These Web Based Tools Track Daily Goals With 42Goals Video Toolbox is a Superb Online Video Editor

    Read the article

  • ROA on top of SOA

    - by Vaibhav Pujari
    I already have a stable Service Oriented Architecture for my application which exposes services as API calls. (the verbs) Now, I need to build a Resource Oriented Architecture to expose a RESTful API to interact with the application objects. (the nouns) What are the best practices to reuse the existing services: - without any persistence inside my new code. - without putting unnecessary logic into the REST layer i.e. it should ideally just leverage the services provided by SOA API. I want this layer to be as thin as possible. - without modifying the existing SOA API - allow easy extension of the REST API i.e. it should be easy to add more resources without changing the (yet to be written) core code. (I want to make resource names and their associated actions configurable so more contributors can easily add resources without a need to understand my module) Any advices/suggestions how to achieve this? Edit: Adding more info My Stack: My existing stacks is in Java. But since I plan to just use the services, I don't think that should affect the design of new REST code. I am planning to implement the new REST code in PHP. How well the services map to resources? Some services are mapped well i.e. there are services for creating, updating application objects. But for other application objects, there are no direct services available. More importantly, there are actions beyond just create, update etc. that apply to application objects. And I would like to provide some way for these actions to be exposed through REST. Since these are verbs, how do I deal with them? Where exactly I need help? I would appreciate any help towards high level design to accomplish the task along-with making the framework extendible. For instance, tomorrow there are some new services added to my SOA layer, I want to make sure it is easy for a fresh developer to write a REST call by simply registering a new resource (in a config file/db) and write code for connecting it with SOA calls. Just like plugin.

    Read the article

  • SQL SERVER – NTFS File System Performance for SQL Server

    - by pinaldave
    Note: Before practicing any of the suggestions in this article, consult your IT infrastructure admin; applying these suggestions without proper testing can damage your system. Question: “Pinal, we have 80 GB of data including all the database files, and our data is on an NTFS file system. We have proper backups set up. Any suggestions for improving our NTFS file system performance? Our SQL Server box is running only SQL Server and nothing else. Please advise.” When I receive questions like the one listed above, it often sends me into deep thought. Honestly, I know a lot, but there are plenty of things which, I believe, can be built with the community knowledge base. Today I need you to help me complete this list. I will start the list and you help me complete it.
    NTFS File System Performance Best Practices for SQL Server:
    Disable indexing on disk volumes
    Disable generation of 8.3 names (command: FSUTIL BEHAVIOR SET DISABLE8DOT3 1)
    Disable last file access time tracking (command: FSUTIL BEHAVIOR SET DISABLELASTACCESS 1)
    Keep some space empty (let us say 15% for reference) on the drive if possible (only on the Filestream data storage volume)
    Defragment the volume
    Add your suggestions here…
    The one which often starts a pretty big debate is the NTFS allocation unit size. I have seen that on the disk volume which stores filestream data, increasing the allocation unit size from 4K to 64K reduces fragmentation. Again, I suggest you attempt this only after proper testing on your server. Every system is different and the files stored are different. This is where I would like to request you to share your experience related to NTFS allocation size. If you do not agree with any of the above suggestions, leave a comment with a reference and I will modify the list. Please note that the above list was prepared assuming SQL Server is the only application running on the computer. The next question is whether all of this is still relevant for SSDs – I personally have no experience with SSDs and large databases, so I will refrain from commenting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Problem booting Windows after failed installation Ubuntu 12.04 alongside Windows 7

    - by Tassos
    I tried to install Ubuntu 12.04 on my laptop so that I can dual-boot with Windows 7. I made some mistakes during this process and I didn't manage to install Ubuntu. But my real problem now is that I'm afraid I also destroyed the installation of Windows 7. Your help would be precious to me. Here are the details of what I did:
    1) I followed these instructions to dual boot Ubuntu 12.04 and Windows 7: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/ The only difference from what is described above is that in my case the device names were: /dev/mapper/isw_fdjdhbadc_Volume0* instead of: /dev/sda* Note that I had created a bootable USB stick to do this.
    2) The installation proceeded normally, but in the end I got a fatal error because grub-install failed.
    3) Then, after googling this problem, I ran Ubuntu from the USB stick and ran this command: sudo grub-install --root-directory=/home/ubuntu/temp /dev/mapper/isw_fdjdhbadc_Volume0p5 (/isw_fdjdhbadc_Volume0p5 was the partition that I had made for /boot), but this command also failed.
    4) Then, I did something stupid (I think): I ran the above command as: sudo grub-install --root-directory=/home/ubuntu/temp /dev/mapper/isw_fdjdhbadc_Volume0 namely, I tried to install grub on the device isw_fdjdhbadc_Volume0 instead of the boot partition isw_fdjdhbadc_Volume0p5. The above command did not fail and was executed OK.
    5) After that, I tried to boot my laptop, but it seemed that I had no operating system. Not even Windows was detected.
    6) I thought that I should uninstall grub from isw_fdjdhbadc_Volume0. Following some online instructions that I found (which was stupid, since the instructions were for a totally different case than mine), I booted Ubuntu from the USB stick again and ran the following command: sudo dd if=/dev/zero of=/dev/mapper/isw_fdjdhbadc_Volume0 bs=446 count=1
    After that, I was still unable to boot Windows. I realize that I deleted something that I shouldn't have, but I hope that it is not crucial and I can recover somehow. When I boot Ubuntu from the USB, I can see that the partition with Windows is still there, with all the directories, Windows files, my data etc. So, my question is: is there a way to undo the mistakes that I described above and recover Windows 7? This is my major question. After solving that, I'd also like to know what I did wrong with the installation of Ubuntu. Thanks in advance for your valuable help!

    Read the article

  • Summit Old, Summit New, Summit Borrowed...

    - by Rob Farley
    PASS Summit is coming up, and I thought I’d post a few things. Summit Old... At the PASS Summit, you will get the chance to hear presentations by the SQL Server establishment. Just about every big name in the SQL Server world is a regular at the PASS Summit, so you will get to hear and meet people like Kalen Delaney (@sqlqueen) (who just recently got awarded MVP status for the 20th year running), and from all around the world such as the UK’s Chris Webb (@technitrain) or Pinal Dave (@pinaldave) from India. Almost all the household names in SQL Server will be there, including a large contingent from Microsoft. The PASS Summit is by far the best place to meet the legends of SQL Server. And they’re not all old. Some are, but most of them are younger than you might think. ...Summit New... The hottest topics are often about the newest technologies (such as SQL Server 2012). But you will almost certainly learn new stuff about older versions too. But that’s not what I wanted to pick on for this point. There are many new speakers at every PASS Summit, and content that has not been covered in other places. This year, for example, LobsterPot’s Roger Noble (@roger_noble) is giving a presentation for the first time. He’s a regular around the Australian circuit, but this is his first time presenting to a US audience. New Zealand’s Paul White (@sql_kiwi) is attending his first PASS Summit, and will be giving over four hours of incredibly deep stuff that has never been presented anywhere in the US before (I can’t say the world, because he did present similar material in Adelaide earlier in the year). ...Summit Borrowed... No, I’m not talking about plagiarism – the talks you’ll hear are all their own work. But you will get a lot of stuff you’ll be able to take back and apply at work. The PASS Summit sessions are not full of sales-pitches, telling you about how great things could be if only you’d buy some third-party vendor product. It’s simply not that kind of conference, and PASS doesn’t allow that kind of talk to take place. Instead, you’ll be taught techniques, and be able to download scripts and slides to let you perform that magic back at work when you get home. You will definitely find plenty of ideas to borrow at the PASS Summit. ...Summit Blue Yeah – and there’s karaoke. Blue - Jason - SQL Karaoke - YouTube

    Read the article

  • Why Does Ejabberd Start Fail?

    - by Andrew
    I am trying to install ejabberd 2.1.10-2 on my Ubuntu 12.04.1 server. This is a fresh install, and ejabberd is never successfully installed. The Install Every time, apt-get hangs on this: Setting up ejabberd (2.1.10-2ubuntu1) ... Generating SSL certificate /etc/ejabberd/ejabberd.pem... Creating config file /etc/ejabberd/ejabberd.cfg with new version Starting jabber server: ejabberd............................................................ failed. The dots just go forever until it times out or I 'killall' beam, beam.smp, epmd, and ejabberd processes. I've turned off all firewall restrictions. Here's the output of epmd -names while the install is hung: epmd: up and running on port 4369 with data: name ejabberdctl at port 42108 name ejabberd at port 39621 And after it fails: epmd: up and running on port 4369 with data: name ejabberd at port 39621 At the same time (during and after), the output of both netstat -atnp | grep 5222 and netstat -atnp | grep 5280 is empty. The Crash File A crash dump file is create at /var/log/ejabber/erl_crash.dump. The slogan (i.e. reason for the crash) is: Slogan: Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}) It's alive? Whenever I try to relaunch ejabberd with service ejabberd start, the same thing happens - even if I've killed all processes before doing so. However, when I killall the processes listed above again, and run su - ejabberd -c /usr/sbin/ejabberd, this is the output I get: Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:0] [kernel-poll:false] Eshell V5.8.5 (abort with ^G) (ejabberd@ns1)1> =INFO REPORT==== 15-Oct-2012::12:26:13 === I(<0.478.0>:ejabberd_listener:166) : Reusing listening port for 5222 =INFO REPORT==== 15-Oct-2012::12:26:13 === I(<0.479.0>:ejabberd_listener:166) : Reusing listening port for 5269 =INFO REPORT==== 15-Oct-2012::12:26:13 === I(<0.480.0>:ejabberd_listener:166) : Reusing listening port for 5280 =INFO REPORT==== 15-Oct-2012::12:26:13 === I(<0.40.0>:ejabberd_app:72) : ejabberd 2.1.10 is started in the node ejabberd@ns1 Then, the server appears to be running. I get a login prompt when I access http://mydomain.com:5280/admin/. Of course I can't login unless I create an account. At this time, the output of netstat -atnp | grep 5222 and netstat -atnp | grep 5280 is as follows: tcp 0 0 0.0.0.0:5222 0.0.0.0:* LISTEN 19347/beam tcp 0 0 0.0.0.0:5280 0.0.0.0:* LISTEN 19347/beam ejabberdctl Even when it appears ejabberd is running, trying to do anything with ejabberdctl fails. For example: trying to register a user: root@ns1:~# ejabberdctl register myusername mydomain.com mypassword Failed RPC connection to the node ejabberd@ns1: nodedown I have no idea what I'm doing wrong. This happens on two different servers I have with identical software installed (really not much of anything). Please help. Thanks.

    Read the article

  • Where to draw the line between development-led security and administration-led security?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections in a piece of software, even though they could very well be managed at an environmental level (i.e., the operating system would take care of it). Where would you say you draw the line, and what elements do you factor into your decision?
    Concrete Examples
    User Management is the OS's responsibility
    This is not exactly meant as a security feature, but it is a similar case: Google Chrome used to not allow separate profiles. The invoked reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.
    Disabling Web-Form Fields
    A recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers, and was a welcome feature at the time it was introduced for people who needed to fill in forms often. But it also brought in some security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preference/customization panel, an autocomplete attribute for form or input fields. This has now been introduced into the upcoming HTML5 standard. For browsers that do not honor this attribute, strange hacks* are offered, like generating unique IDs and names for fields to avoid them being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache and not preventing a password from being stored in it, but instead probably duplicating its occurrences). In this particular case, and others, I'd argue that this is a user setting and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with this feature disabled. But then I'd think that's what security zones are for (in some browsers), or a sign that you need a more secure (and dedicated) environment / account to use these applications. * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.
    Questions
    That was a tad long-winded, so I guess my questions are: Would you in general consider it to be the application's (hence, the developer's) responsibility? Where do you draw the line, if not in the "general" case?

    Read the article

< Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >