Search Results

Search found 16429 results on 658 pages for 'account names'.


  • A couple of nice features when using OracleTextSearch

    - by kyle.hatlestad
    If you have your UCM/URM instance configured to use the Oracle 11g database as the search engine, you can use OracleTextSearch as the search definition. OracleTextSearch uses the advanced features of Oracle Text for indexing and searching. This includes the ability to specify metadata fields to be optimized for the search index, fast rebuilding, and index optimization. If you are on 10g of UCM, then you'll need to load the OracleTextSearch component that is available in the CS10gR35UpdateBundle component on the support site (patch #6907073). If you are on 11g, no component is needed. Then you specify the search indexer name with the configuration flag SearchIndexerEngineName=OracleTextSearch. Please see the docs for other configuration settings and setup instructions. So I thought I would highlight a couple of other unique features available with OracleTextSearch. The first is the Drill Down feature. This feature lets you designate specific metadata fields whose values are broken down across the search results, with a count for each value. In the graphic from the original post, you can see how it broke down the file extensions and gave a count for each. You then just click on a link to drill into that subset of results. This setting is perfect for option list fields and other fields with a small, distinct set of possible values. By default, it will use the fields Type, Security Group, and Account (if enabled). But you can also specify your own fields. In 10g, you can use the following configuration entry: DrillDownFields=xWebsiteObjectType,dExtension,dSecurityGroup,dDocType And in 11g, you can specify it through the Configuration Manager applet. Simply click on Advanced Search Design, highlight the field to filter, click Edit, and check 'Is a filter category'. The other feature you get with OracleTextSearch is search snippets. These snippets show the occurrence of the search term in the context of its usage, very similar to how Google displays its results. If you are on 10g, this is enabled by default. If you are on 11g, you need to turn the feature on. The following configuration entry will enable it: OracleTextDisableSearchSnippet=false Once enabled, you can add the snippets to your search results. Go to Change View -> Customize and add a new search result view. In the Available Fields in the Special section, select Snippet and move it to the Main or Additional Information. If you want to include the snippets with the Classic results, you can add the idoc variable <$srfDocSnippet$> to display them. One caveat is that this can affect search performance on large collections, so plan the infrastructure accordingly.
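
    For reference, the settings mentioned above can be collected in one place. This is only an illustrative sketch, assuming they go in the content server's config.cfg; DrillDownFields uses the 10g syntax and field names given in the post, and OracleTextDisableSearchSnippet=false is the 11g snippet switch. Adjust the field list to your own metadata model:

        SearchIndexerEngineName=OracleTextSearch
        DrillDownFields=xWebsiteObjectType,dExtension,dSecurityGroup,dDocType
        OracleTextDisableSearchSnippet=false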

    Read the article

  • On SQL Developer and TNSNAMES.ORA

    - by thatjeffsmith
    Tnsnames.ora [DOCS] is a configuration file for SQL*Net that describes the network service names for the databases in your organization. Basically, it tells Oracle applications how to find your databases. This post is just a quick overview of how to get SQL Developer to ‘see’ this file and define a connection. There’s only a single prerequisite for having SQL Developer set up so that it can use TNSNAMES to connect: you have a tnsnames.ora file somewhere. You don’t need a client, instant or otherwise, on your machine. You just need the file. Now, if you DO have a client or HOME on your machine, SQL Developer will look for those and find the tnsnames file for you. If we can’t find it in the usual places, you can simply tell us where it is via this preference: on the Database – Advanced page. Once you’ve done this, assuming you have a file (or 10) in that directory, we’ll read it, parse it, and list the entries in the connection dialog. The File(s) That’s right, files. Just like SQL*Plus, we’ll read any file that starts with ‘tnsnames’ – that includes files you’ve renamed to .bak or .old. Kris talks about that more here. I have just the one, which is all I need anyway. There we go! Defining the Connection Just set the connection type to TNS. This is a lot easier to do than manually defining the connections – especially as they’re likely to change frequently in ‘the real world.’ No Client or Home Required That’s right. You don’t need an Oracle Client or $ORACLE_HOME for SQL Developer to see and read a TNS file. Just so you know I’m not cheating… SQL Dev doesn’t know which client to use and won’t use it even if it DID know… I’m able to define a new connection AND connect with these preferences ON|OFF.
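
    For readers who have never looked inside the file, a tnsnames.ora entry typically follows the sketch below. The alias, host, and service names here are made-up placeholders rather than values from this post; substitute your own:

        ORCL =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
            (CONNECT_DATA =
              (SERVICE_NAME = orcl.example.com)
            )
          )

    With an entry like this in a file on the tnsnames path described above, the alias ORCL appears in SQL Developer's connection dialog once the connection type is set to TNS.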

    Read the article

  • Windows and SQL Azure Best Practices: Affinity Groups

    - by BuckWoody
    When you create a Windows Azure application, you’ll pick a subscription to put it under. This is a billing container - underneath that, you’ll deploy a Hosted Service. That holds the Web and Worker Roles that you’ll deploy for your applications. Alongside that, you use the Storage Account to create storage for the application. (In some cases, you might choose to use only storage or Roles - the info here applies anyway.) As you are setting up your environment, you’re asked to pick a “region” where your application will run. If you choose a Region, you’ll be asked where to put the Roles. You’re given choices like Asia, North America and so on. This is where the hardware that physically runs your code lives. We have lots of fault domains, power considerations and so on to keep that set of datacenters running, but keep in mind that this is where the application lives. You also get this selection for Storage Accounts. When you make new storage, it’s a best practice to put it where your computing is. This makes the shortest path from the code to the data, and then back out to the user. One of the selections for the location is “Anywhere U.S.”. This selection might be interpreted to mean that we will bias towards keeping the data and the code together, but that may not be the case. There is a specific abstraction we created for just that purpose: Affinity Groups. An Affinity Group is simply a name you can use to tie resources together. You can do this in two places - when you’re creating the Hosted Service (shown above) and on its own tree item on the left, called “Affinity Groups”. When you select either of those actions, you’re presented with a dialog box that allows you to specify a name, and then the Region that name ties the resources to. Now you can select that Affinity Group just as if it were a Region, and your code and data will stay together. That helps keep performance high. Official Documentation: http://msdn.microsoft.com/en-us/library/windowsazure/hh531560.aspx

    Read the article

  • AxCMS.net 10 with Microsoft Silverlight 4 and Microsoft Visual Studio 2010

    - by Axinom
    Axinom, European WCM vendor, today announced the next version of its WCM solution AxCMS.net 10, which streamlines the processes involved in creating, managing and distributing corporate content on the internet. The new solution helps reducing ongoing costs for managing and distributing to large audiences, while at the same time drastically reducing time-to-market and one-time setup costs. http://www.AxCMS.net Axinom’s WCM portfolio, based on the Microsoft .NET Framework 4, Microsoft Visual Studio 2010 and Microsoft Silverlight 4, allows enterprises to increase process efficiency, reduce operating costs and more effectively manage delivery of rich media assets on the Web and mobile devices. Axinom solutions are widely used by major European online brands in IT, telco, retail, media and entertainment industries such as Siemens, American Express, Microsoft Corp., ZDF, Pro7Sat1 Media, and Deutsche Post. Brand New User Interface built with Silverlight 4By using Silverlight 4, Axinom’s team created a new user interface for AxCMS.net 10 that is optimized for improved usability and speed. WYSIWYG mode, integrated image editor, extended list views, and detail views of objects allow a substantial acceleration of typical editor tasks. Axinom’s team worked with Silverlight Rough Cut Editor for video management and Silverlight Analytics Framework for extended reporting to complete the wide range of capabilities included in the new release. “Axinom’s release of AxCMS.net 10 enables developers to take advantage of the latest features in Silverlight 4,” said Brian Goldfarb, director of the developer platform group at Microsoft Corp. “Microsoft is excited about the opportunity this creates for Web developers to streamline the creating, managing and distributing of online corporate content using AxCMS.net 10 and Silverlight.” Rapid Web Development with Visual Studio 2010AxCMS.net 10 is extended by additional products that enable developers to get productive quickly and help solve typical customer scenarios. AxCMS.net template projects come with documented source code that help kick-start projects and learn best practices in all aspects of Web application development. AxCMS.net overcomes many hard-to-solve technical obstacles in an out-of-the-box manner by providing a set of ready-to-use vertical solutions such as corporate Web site, Web shop, Web campaign management, email marketing, multi-channel distribution, management of rich Internet applications, and Web business intelligence. Extended Multi-Site ManagementAxCMS.net has been supporting the management of an unlimited number of Web sites for a long time. The new version 10 of AxCMS.net will further improve multi-site management and provide features to editors and developers that will simplify and accelerate multi-site and multi-language management. Extended publication workflow will take into account additional dependencies of dynamic objects, pages, and documents. “The customer requests evolved from static html pages to dynamic Web applications content with the emergence of rich media assets seamlessly combined across many channels including Web, mobile and IPTV. With the.NET Framework 4 and Silverlight 4, we’re on the fast track to making the three screen strategy a reality for our customers,” said Damir Tomicic, CEO of Axinom Group. “Our customers enjoy substantial competitive advantages of using latest Microsoft technologies. 
We have a long-standing, relationship with Microsoft and are committed to continued development using Microsoft tools and technologies to deliver innovative Web solutions in the future.”  

    Read the article

  • Best development architecture for a small team of programmers

    - by Tio
    Hi all.. I'm in the first month of work in a new company.. and after I met the two programmer's and asked how things are organized in terms of projects inside the company, they simply shrug their shoulders, and said that nothing is organized.. I think my jaw hit the ground that same time.. ( I know some, of you think I should quit, but I'm on a privileged position, I'm the most experienced there, so there's room for me to grow inside the company, and I'm taking the high road ).. So I talked to the IT guy, and one of the programmers, and maybe this week I'm going to get a server all to myself to start organizing things. I've used various architectures in my previous work experiences, on one I was developing in a server on the network ( no source control of course ).. another experience I had was developing in my local computer, with no server on the network, just source control. And at home, I have a mix of the two, everything I code is on a server on the network, and I have those folders under source control, and I also have a no-ip account configured on that server so I can access it everywhere and I can show the clients anything. For me I think this last solution ( the one I have at home ) is the best: Network server with LAMP stack. The server as a public IP so we can access it by domain name. And use subdomains for each project. Everybody works directly on the network server. I think the problem arises, when two or more people want to work on the same project, in this case the only way to do this is by using source control and local repositories, this is great, but I think this turns development a lot more complicated. In the example I gave, to make a change to the code, I would simply need to open the file in my favorite editor, make the change, alter the database, check in the changes into source control and presto all done. Using local repositories, I would have to get the latest version, run the scripts on the local database to update it, alter the file, alter the database, check in the changes to the network server, update the database on the network server, see if everything is running well on the network server, and presto all done, to me this seems overcomplicated for a change on a simple php page. I could share the database for the local development and for the network server, that sure would help. Maybe the best way to do this is just simply: Network server with LAMP stack ( test server so to speak ), public server accessible trough the web. LAMP stack on every developer computer ( minus the database ) We develop locally, test, then check in the changes into the server test and presto. What do you think? Maybe I should start doing this at home.. Thanks and best regards...

    Read the article

  • Super-Charge GIMP’s Image Editing Capabilities with G’MIC [Cross-Platform]

    - by Asian Angel
    Recently we showed you how to enhance GIMP’s image editing power and today we help you super-charge GIMP even more. G’MIC (GREYC’s Magic Image Converter) will add an impressive array of filters and effects to your GIMP installation for image editing goodness. Note: We applied the Contrast Swiss Mask filter to the image shown in the screenshot above to create a nice, warm sunset effect. To add the new PPA, open the Ubuntu Software Center, go to the Edit Menu, and select Software Sources. Access the Other Software Tab in the Software Sources Window and add the first of the PPAs shown below (outlined in red). The second PPA will be automatically added to your system. Once you have the new PPAs set up, go back to the Ubuntu Software Center and do a search for “G’MIC”. You will find two listings available and can select either one to add G’MIC to your system (both work equally well). Click on More Info for the listing that you choose and scroll down to where Add-ons are listed. Make sure to select the Add-on listed, click Apply Changes when it appears, and then click Install. We have both shown here for your convenience… When you get ready to use G’MIC to enhance an image, go to the Filters Menu and select G’MIC. A new window will appear where you can select from an impressive array of filters available for your use. Have fun! Command Line Installation For those of you who prefer using the command line for installation, use the following commands: sudo add-apt-repository ppa:ferramroberto/gimp sudo apt-get update sudo apt-get install gmic gimp-gmic Links Note: G’MIC is available for Linux, Windows, and Mac. G’MIC PPA at Launchpad [via Web Upd8] G’MIC Homepage at Sourceforge *Downloads for all three platforms available here. Bonus The anime wallpaper shown in the screenshots above can be found here: anime sport [DesktopNexus]

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new object ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC arc case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems. (32-bits use less memory and bus bandwidth, and on sparc runs just as fast as 64-bit code otherwise). Complex finely tuned code, where the original authors may no longer be available. Critical production code, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk adverse and/or high scale categories have good reasons to push 32-bits objects to the limit before moving on. Ancillary objects offer these users a longer runway. Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable objects, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SUNW_SHF_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature. 
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine grained debugging of highly optimized code requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ... 
-z ancillary[=outfile] ...By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object. .shstrtab The section name string table. .symtab The full non-dynamic symbol table. .symtab_shndx The symbol table extended index section associated with .symtab. .strtab The non-dynamic string table associated with .symtab. .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined. The primary object and all ancillary objects contain the same array of sections headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc. $ cat hello.c #include <stdio.h> int main(int argc, char **argv) { (void) printf("hello, world\n"); return (0); } $ cc -g -zancillary hello.c $ file a.out a.out.anc a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out $ ./a.out hello worldThe resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example. 
Index Section Name Type Primary Flags Ancillary Flags Primary Size Ancillary Size 13 .text PROGBITS ALLOC EXECINSTR ALLOC EXECINSTR SUNW_ABSENT 0x131 0 20 .data PROGBITS WRITE ALLOC WRITE ALLOC SUNW_ABSENT 0x4c 0 21 .symtab SYMTAB 0 0 0x450 0x450 22 .strtab STRTAB STRINGS STRINGS 0x1ad 0x1ad 24 .debug_info PROGBITS SUNW_ABSENT 0 0 0x1a7 28 .shstrtab STRTAB STRINGS STRINGS 0x118 0x118 29 .SUNW_ancillary SUNW_ancillary 0 0 0x30 0x30 The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table.symtab, and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbol contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together. $ elfdump -T SUNW_ancillary a.out a.out.anc a.out: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0x8724 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 a.out.anc: Ancillary Section: .SUNW_ancillary index tag value [0] ANC_SUNW_CHECKSUM 0xfbe2 [1] ANC_SUNW_MEMBER 0x1 a.out [2] ANC_SUNW_CHECKSUM 0x8724 [3] ANC_SUNW_MEMBER 0x1a3 a.out.anc [4] ANC_SUNW_CHECKSUM 0xfbe2 [5] ANC_SUNW_NULL 0 The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows. Debugger Access and Use of Ancillary Objects Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files. 
Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined. Create a section header array in memory, using the section header array from the object being examined as an initial template. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set. The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program. Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects. ELF Implementation Details (From the Solaris Linker and Libraries Guide) To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual. Part IV ELF Application Binary Interface Chapter 13: Object File Format Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type. e_type Identifies the object file type, as listed in the following table. NameValueMeaning ET_NONE0No file type ET_REL1Relocatable file ET_EXEC2Executable file ET_DYN3Shared object file ET_CORE4Core file ET_LOSUNW0xfefeStart operating system specific range ET_SUNW_ANCILLARY0xfefeAncillary object file ET_HISUNW0xfefdEnd operating system specific range ET_LOPROC0xff00Start processor-specific range ET_HIPROC0xffffEnd processor-specific range Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section. ... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ... Table 13-5 ELF Section Types, sh_type NameValue . . . SHT_LOSUNW0x6fffffee SHT_SUNW_ancillary0x6fffffee . . . ... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ... Table 13-8 ELF Section Attribute Flags NameValue . . . 
SHF_MASKOS0x0ff00000 SHF_SUNW_NODISCARD0x00100000 SHF_SUNW_ABSENT0x00200000 SHF_SUNW_PRIMARY0x00400000 SHF_MASKPROC0xf0000000 . . . ... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects, will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files. SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one more input section with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ... Two members in the section header, sh_link, and sh_info, hold special information, depending on section type. Table 13-9 ELF sh_link and sh_info Interpretation sh_typesh_linksh_info . . . SHT_SUNW_ANCILLARY The section header index of the associated string table. 0 . . . Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section. Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes. Table 13-10 ELF Special Sections NameTypeAttribute . . . .SUNW_ancillarySHT_SUNW_ancillaryNone . . . ... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ... Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF. In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h. 
typedef struct { Elf32_Word a_tag; union { Elf32_Word a_val; Elf32_Addr a_ptr; } a_un; } Elf32_Ancillary; typedef struct { Elf64_Xword a_tag; union { Elf64_Xword a_val; Elf64_Addr a_ptr; } a_un; } Elf64_Ancillary; For each object with this type, a_tag controls the interpretation of a_un. a_val These objects represent integer values with various interpretations. a_ptr These objects represent file offsets or addresses. The following ancillary tags exist. Table 13-NEW1 ELF Ancillary Array Tags NameValuea_un ANC_SUNW_NULL0Ignored ANC_SUNW_CHECKSUM1a_val ANC_SUNW_MEMBER2a_ptr ANC_SUNW_NULL Marks the end of the ancillary section. ANC_SUNW_CHECKSUM Provides the checksum for a file in the c_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member. ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string, that provides the file name. An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as: TagMeaning ANC_SUNW_CHECKSUMChecksum of this object ANC_SUNW_MEMBERName of object #1 ANC_SUNW_CHECKSUMChecksum for object #1 . . . ANC_SUNW_MEMBERName of object N ANC_SUNW_CHECKSUMChecksum for object N ANC_SUNW_NULL An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match. Related Other Work The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post processing step, Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections, when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. 
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
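
    Going back to the section-header comparison earlier in the post: a quick way to produce a similar view yourself is to dump the headers from both files and compare them. A hedged sketch only (the -c option is taken from the Solaris elfdump documentation; verify the flags on your system):

        $ elfdump -c a.out      > primary.sections      # section headers from the primary object
        $ elfdump -c a.out.anc  > ancillary.sections    # the same header array, from the ancillary object
        $ diff primary.sections ancillary.sections      # differences should be limited to flags (SUNW_ABSENT) and sizes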

    Read the article

  • Selling Solutions, Not Products

    - by David Dorf
    When I think about next-generation retailers, the names that come to mind are Apple, Whole Foods, Lulu Lemon, and IKEA.  They may not be the biggest retailers, but they are certainly growing fast. Success is never defined by just one dimension, and these retailers execute well across many dimensions, but the one that stands out for me is customer experience.  These stores feel...approachable...part of the community...local.  Customers are not intimidated to ask questions, and staff seem to go out of their way to help. What's makes these retailers stand out in the industry?  These retailers aren't selling products -- they're selling solutions.  Think about that.  You think you're going to the Apple store to buy a phone, but you're actually buying a communications solution that handles much, much more.  If you carry an iPhone, your life has changed.  The way you do things is different.  The impacts go much beyond a simple phone. Solutions start with a problem, which is why these retailers greet customers with "what brought you in today," or "can I answer any questions for you?"  Good retailers establish a relationship, even if it lasts only a few minutes. You don't walk into Whole Foods looking for cans of soup.  You are looking for meals: healthy snacks, interesting lunches, exotic dinners.  Its a learning experience where you might discover solutions to problems you didn't know you had.  Mention what foods you like, and you'll get a list of similar items you had not considered.  I didn't know I needed a closet organizer until I visited an IKEA and learned about all the options.  They were able to customize the solution to meet my needs, and now I'm much more organized. One of the differences between selling products and selling solutions is training.  Visit any of these retailers' sites and you'll see a long list of in-store events for the benefit of customers.  You can buy exercise clothing from Lulu Lemon, and also learn new yoga techniques, meet like-minded people, and branch off to other fitness regimes via their ambassadors.  You can visit the Geek Bar at Apple, eat lunch at IKEA, and learn to cook at Whole Foods. These retailers are making an investment in a relationship with their customers.  They are showing loyalty to their customers before asking for it back.  In the long-run, this strategic approach will outlive any scan-and-bag mentality.

    Read the article

  • Passthrough Objects – Duck Typing++

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] Can't see a genuine use for this, but I got the idea in my head and wanted to work it through. It's an extension to the idea of duck typing, for scenarios where types have similar behaviour, but implemented in differently-named members. So you may have a set of objects you want to treat as an interface, which don't implement the interface explicitly, and don't have the same member names so they can't be duck-typed into implicitly implementing the interface. In a fictitious example, I want to call Get on whichever ICache implementation is current, and have the call passed through to the relevant method – whether it's called Read, Retrieve or whatever: A sample implementation is up on github here: PassthroughSample. This uses Castle's DynamicProxy behind the scenes in the same way as my duck typing sample, but allows you to configure the passthrough to specify how the inner (implementation) and outer (interface) members are mapped:   var setup = new Passthrough();   var cache = setup.Create("PassthroughSample.Tests.Stubs.AspNetCache, PassthroughSample.Tests")     .WithPassthrough("Name", "CacheName")     .WithPassthrough("Get", "Retrieve")     .WithPassthrough("Set", "Insert")     .As<ICache>(); - or using some ugly Lambdas to avoid the strings:   Expression<Func<ICache, string, object>> get = (o, s) => o.Get(s);   Expression<Func<Memcached, string, object>> read = (i, s) => i.Read(s);   Expression<Action<ICache, string, object>> set = (o, s, obj) => o.Set(s, obj);   Expression<Action<Memcached, string, object>> insert = (i, s, obj) => i.Put(s, obj);   ICache cache = new Passthrough<ICache, Memcached>()     .Create()     .WithPassthrough(o => o.Name, i => i.InstanceName)     .WithPassthrough(get, read)     .WithPassthrough(set, insert)     .As(); - or even in config:   ICache cache = Passthrough.GetConfigured<ICache>(); ...  <passthrough>     <types>       <type name="PassthroughSample.Tests.Stubs.ICache, PassthroughSample.Tests"             passesThroughTo="PassthroughSample.Tests.Stubs.AppFabricCache, PassthroughSample.Tests">         <members>           <member name="Name" passesThroughTo="RegionName"/>           <member name="Get" passesThroughTo="Out"/>           <member name="Set" passesThroughTo="In"/>         </members>       </type>   Possibly useful for injecting stubs for dependencies in tests, when your application code isn't using an IoC container. Possibly it also has an alternative implementation using .NET 4.0 dynamic objects, rather than the dynamic proxy.

    Read the article

  • How to Disable Access to the Registry in Windows 7

    - by Mysticgeek
    If you don’t know what you’re doing in the Registry, you can mess up your computer pretty badly. Today we show you how to prevent users from accessing the Registry and making any changes to it. Using Local Group Policy Editor Note: This method uses Group Policy Editor which is not available in Home versions of Windows. First type gpedit.msc into the Search box in the Start menu. When Group Policy Editor opens, navigate to User Configuration \ Administrative Templates then select System. Under Setting in the right panel double-click on Prevent access to registry editing tools. Select the radio button next to Enabled, click OK, then close out of Group Policy Editor. Now if a user tries to access the Registry… they will get the following message advising they cannot access it. Using Registry Enabler & Disabler 3 If you’re using a Home or Starter version of Windows 7, you can use a neat utility called Registry Enabler & Disabler (link below). This app works on XP and Vista as well. There is no installation involved so you can run it from a flash drive, disable the registry, then take the flash drive with you while the user is on the machine. Again, if the user tries to access the Registry they will get the following error… Using one of these options will stop users from gaining access to the Registry or running any registry hacks. Of course if you have a shared computer, you may want to set up other users with a Standard Account, as they won’t be able to make changes to the Registry anyway. Download Registry Enabler & Disabler 3
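
    For reference, the Group Policy setting described above maps to a per-user registry value that can also be applied with a .reg file. This is only a sketch of the standard DisableRegistryTools value, not taken from the article itself; verify it before deploying, since setting it under your own account locks you out of regedit too:

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System]
        "DisableRegistryTools"=dword:00000001

    Setting the value back to 0 (or deleting it) re-enables the Registry editing tools for that user.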

    Read the article

  • SQLAuthority News – Social Media Series – LinkedIn and Professional Profile

    - by pinaldave
    Pinal Dave on LinkedIn! It seems like a few year ago, there was a big “boom” in social media websites.  All of a sudden there were so many sites to choose from.  MySpace or Orkut?  Blogging websites for your business or a LinkedIn account?  The nature of the internet is to always be changing, but I believe that out of this huge growth of websites, a few have come to stay.  Facebook is obviously the leader in social media networking, especially for your personal life.  Blogging is great, but it can be more of a way to get your ideas out there, rather than a place for people to connect to you professionally.  If you want to have a professional “face” on the internet, LinkedIn is the way to go. LinkedIn is best explained as “professional Facebook.”  This is simplifying things a little bit too much, but it is certainly a website where you link up with professional contacts, so that others can see where you have worked, who you have worked with, and what projects you have done.  This is a much better place for professional contacts to find you than someplace like Facebook, where all they will see is your face and maybe picture of you at a birthday party or something like that! Because so much of my SQL Server life is conducted on the internet, especially on my blog, I felt that it would be a good idea to have a well-maintained LinkedIn web page as well, so that if anyone is curious about me and my credentials they can quickly and easily find me and see that I am for real, and not someone pretending to know a lot about SQL Server. My linked in profile is www.linkedin.com/in/pinaldave.  I keep all my professional information here, and I update it as often as possible.  Feel free to come find me, especially if you would like to “link up” and share professional information.  The technology world is becoming more and more interconnected, and more and more international.  I feel that it is very important to stay linked up virtually, because so many of us are so far apart physically. I try to keep very connected with my LinkedIn profile.  I let anyone connect with me, and I read updates from the professional world very often.  I keep this profile updated, but do not post things about my personal life or anything that I might put on Twitter, for example.  I also include my e-mail address here, if you would like to contact me professionally.  This is the best place for me to conduct business. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Social Media

    Read the article

  • Earthquake Locator - Live Demo and Source Code

    - by Bobby Diaz
    Quick Links Live Demo Source Code I finally got a live demo up and running!  I signed up for a shared hosting account over at discountasp.net so I could post a working version of the Earthquake Locator application, but ran into a few minor issues related to RIA Services.  Thankfully, Tim Heuer had already encountered and explained all of the problems I had along with solutions to these and other common pitfalls.  You can find his blog post here.  The ones that got me were the default authentication tag being set to Windows instead of Forms, needed to add the <baseAddressPrefixFilters> tag since I was running on a shared server using host headers, and finally the Multiple Authentication Schemes settings in the IIS7 Manager.   To get the demo application ready, I pulled down local copies of the earthquake data feeds that the application can use instead of pulling from the USGS web site.  I basically added the feed URL as an app setting in the web.config:       <appSettings>         <!-- USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/ -->         <!--<add key="FeedUrl"             value="http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml" />-->         <!--<add key="FeedUrl"             value="http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml" />-->         <!--<add key="FeedUrl"             value="~/Demo/1day-M2.5.xml" />-->         <add key="FeedUrl"              value="~/Demo/7day-M2.5.xml" />     </appSettings> You will need to do the same if you want to run from local copies of the feed data.  I also made the following minor changes to the EarthquakeService class so that it gets the FeedUrl from the web.config:       private static readonly string FeedUrl = ConfigurationManager.AppSettings["FeedUrl"];       /// <summary>     /// Gets the feed at the specified URL.     /// </summary>     /// <param name="url">The URL.</param>     /// <returns>A <see cref="SyndicationFeed"/> object.</returns>     public static SyndicationFeed GetFeed(String url)     {         SyndicationFeed feed = null;           if ( !String.IsNullOrEmpty(url) && url.StartsWith("~") )         {             // resolve virtual path to physical file system             url = System.Web.HttpContext.Current.Server.MapPath(url);         }           try         {             log.Debug("Loading RSS feed: " + url);               using ( var reader = XmlReader.Create(url) )             {                 feed = SyndicationFeed.Load(reader);             }         }         catch ( Exception ex )         {             log.Error("Error occurred while loading RSS feed: " + url, ex);         }           return feed;     } You can now view the live demo or download the source code here, but be sure you have WCF RIA Services installed before running the application locally and make sure the FeedUrl is pointing to a valid location.  Please let me know if you have any comments or if you run into any issues with the code.   Enjoy!

    Read the article

  • Claims-based Identity Terminology

    - by kaleidoscope
    There are several terms commonly used to describe claims-based identity, and it is important to clearly define these terms. · Identity In terms of Access Control, the term identity will be used to refer to a set of claims made by a trusted issuer about the user. · Claim You can think of a claim as a bit of identity information, such as name, email address, age, and so on. The more claims your service receives, the more you’ll know about the user who is making the request. · Security Token The user delivers a set of claims to your service piggybacked along with his or her request. In a REST Web service, these claims are carried in the Authorization header of the HTTP(S) request. Regardless of how they arrive, claims must somehow be serialized, and this is managed by security tokens. A security token is a serialized set of claims that is signed by the issuing authority. · Issuing Authority & Identity Provider An issuing authority has two main features. The first and most obvious is that it issues security tokens. The second feature is the logic that determines which claims to issue. This is based on the user’s identity, the resource to which the request applies, and possibly other contextual data such as time of day. This type of logic is often referred to as policy[1]. There are many issuing authorities, including Windows Live ID, ADFS, PingFederate from Ping Identity (a product that exposes user identities from the Java world), Facebook Connect, and more. Their job is to validate some credential from the user and issue a token with an identifier for the user's account and  possibly other identity attributes. These types of authorities are called identity providers (sometimes shortened as IdP). It’s ultimately their responsibility to answer the question, “who are you?” and ensure that the user knows his or her password, is in possession of a smart card, knows the PIN code, has a matching retinal scan, and so on. · Security Token Service (STS) A security token service (STS) is a technical term for the Web interface in an issuing authority that allows clients to request and receive a security token according to interoperable protocols that are discussed in the following section. This term comes from the WS-Trust standard, and is often used in the literature to refer to an issuing authority. STS when used from developer point of view indicates the URL to use to request a token from an issuer. For more details please refer to the link http://www.microsoft.com/windowsazure/developers/dotnetservices/ Geeta, G
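
    To make the Security Token point above concrete: the post notes that in a REST Web service the claims travel in the Authorization header as a serialized, signed token. A hedged sketch of what such a request might look like (the header scheme and token shape vary by protocol and issuer; the WRAP scheme and all values shown here are illustrative placeholders only):

        GET /orders/42 HTTP/1.1
        Host: service.example.com
        Authorization: WRAP access_token="<serialized-signed-token-from-the-STS>"

    The service validates the token's signature against the trusted issuer, deserializes the claims, and only then decides whether the caller may perform the request.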

    Read the article

  • BECOME A CERTIFIED IMPLEMENTATION SPECIALIST AND LET ORACLE UNIVERSITY ASSIST YOU

    - by mseika
    BECOME A CERTIFIED IMPLEMENTATION SPECIALIST AND LET ORACLE UNIVERSITY ASSIST YOU Get specialized! Oracle knows exactly which competencies implementation specialists need to build, and understands the effort involved in building them. The new Specialized program from Oracle PartnerNetwork offers a range of certification options (Specializations) that demonstrate that you and your team members have the required knowledge and skills. Get recognized! Confirm your knowledge and skills, and earn the recognition you deserve, by passing exams across the entire portfolio of products and solutions that Oracle offers. Pass the exam and receive your OPN Specialist Certificate. Step 1: Choose your Specialization. Review the Specialization Guide (PDF) - our catalogue of Specializations for individuals. Step 2: Acquire the required knowledge and skills. Book an Oracle University OPN Only Boot Camp to gain the knowledge and skills needed to become a Certified Implementation Specialist. We have selected the following Boot Camps for you and scheduled them over the coming months at Oracle University in Utrecht, The Netherlands:
        Boot Camp | Duration (days) | Dates | Prepares for Specialization (Exam Code)
        Database: Oracle Database 11g Specialist | 5 | 21-25 Jan 12 | Oracle Database 11g Certified Implementation Specialist (1Z0-514)
        Oracle Data Warehousing 11g Implementation | 5 | 3-7 Dec 12, 3-7 Apr 13 | Data Warehousing 11g Certified Implementation Specialist (1Z0-515)
        Exadata: Oracle Exadata 11g Technical Boot Camp | 3 | 28-30 Jan 13 | Oracle Exadata 11g Certified Implementation Specialist (1Z0-536)
        Fusion Middleware: Oracle AIA 11g Implementation | 4 | 20-22 Feb 13 | Oracle Application Integration Architecture 11g Certified Implementation Specialist (1Z0-543)
        Oracle BPM 11g Implementation | 4 | 15-18 Oct 12, 14-17 Jan 12, 15-18 Apr 13 | Oracle Unified Business Process Management Suite 11g Certified Implementation Specialist (1Z0-560)
        Oracle WebCenter 11g Implementation | 4 | 10-13 Oct 12, 5-8 Feb 13 | Oracle WebCenter Portal 11g Certified Implementation Specialist (1Z0-541)
        Oracle Identity Administration and Analytics 11g Implementation | 3 | 7-9 Nov 12, 6-8 Mar 13 | Identity Administration and Analytics 11g Certified Implementation Specialist (1Z0-545)
        Business Intelligence and Data Warehousing: Oracle BI Enterprise Edition 11g Implementation | 5 | 24-28 Sep 12, 11-15 Mar 13
    Book a Boot Camp: You can book online or use this registration form. Prices: You will notice that the 'OPN Only' Boot Camps are sharply reduced in price, and your OPN discount (Silver, Gold, Platinum or Diamond) still applies on top of that! Step 3: Book and take your exam. Visit the exam registration web page and read the instructions for booking your exam at a Pearson VUE Authorized Test Center. Exams can be paid for with one of your company's free exam vouchers, by purchasing a voucher from Oracle University, or with your credit card at the Pearson VUE Test Center. Step 4: Receive your OPN Specialist Certificate. Congratulations! You are now a Certified Implementation Specialist. Need more information or assistance? Please contact your Oracle University Account Manager or our Education Service Desk: eMail: [email protected], telephone: +31 30 66 99 244. When booking, please mention the following code: E1229

    Read the article

  • Vidalia detected that the Tor software exited unexpectedly?

    - by Rana Muhammad Waqas
    I installed Vidalia by following these instructions, and everything went as they described. When I started Vidalia, it gave me this error: "Vidalia was unable to start Tor. Check your settings to ensure the correct name and location of your Tor executable is specified." I found that bug reported here and followed the instructions to fix it, and now it says: "Vidalia detected that the Tor software exited unexpectedly. Please check the message log for recent warning or error messages." Logs from Vidalia:
    Oct 18 02:15:06.937 [Notice] Tor v0.2.3.25 (git-3fed5eb096d2d187) running on Linux.
    Oct 18 02:15:06.937 [Notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
    Oct 18 02:15:06.937 [Notice] Read configuration file "/home/waqas/.vidalia/torrc".
    Oct 18 02:15:06.937 [Notice] We were compiled with headers from version 2.0.19-stable of Libevent, but we're using a Libevent library that says it's version 2.0.21-stable.
    Oct 18 02:15:06.938 [Notice] Initialized libevent version 2.0.21-stable using method epoll (with changelist). Good.
    Oct 18 02:15:06.938 [Notice] Opening Socks listener on 127.0.0.1:9050
    Oct 18 02:15:06.938 [Warning] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running?
    Oct 18 02:15:06.938 [Warning] /var/run/tor is not owned by this user (waqas, 1000) but by debian-tor (118). Perhaps you are running Tor as the wrong user?
    Oct 18 02:15:06.938 [Warning] Before Tor can create a control socket in "/var/run/tor/control", the directory "/var/run/tor" needs to exist, and to be accessible only by the user account that is running Tor. (On some Unix systems, anybody who can list a socket can connect to it, so Tor is being careful.)
    Oct 18 02:15:06.938 [Warning] Failed to parse/validate config: Failed to bind one of the listener ports.
    Oct 18 02:15:06.938 [Error] Reading config failed--see warnings above.
    Please help!

    Read the article

  • SQLAuthority News – Social Media Series – Facebook and Google+

    - by pinaldave
    Pinal on Facebook and Google+ Unless you have been living under a rock for the last few years, you know that Facebook is the first and last word in social networking.  Everyone has a Facebook account – from your local store to the 10-year old school child.  Because of this ability to be completely connected to everyone in your entire life, keeping a Facebook page for a professional business can be tricky. For the most part, I use Facebook strictly for personal matters.  I am friends only with friends I know in the “real” world (as opposed to my “virtual” online friends) and with family, of course.  I chat with friends on Facebook and upload personal photos to share with family who are far away.  I hope this doesn’t make readers from my professional life feel left out.  You can follow me on Facebook at www.facebook.com/SQLAuth, but you should know that Twitter is probably the better place to find updates about SQL Server and my blog (you can follow me on Twitter at www.twitter.com/pinaldave). There are definitely businesses who keep in touch with their clients using Facebook, but I felt the need to keep my personal and professional life separate.  That’s why I was so excited to find out Google was coming out with their own social media site, Google+.  On Google+ I post some personal things as well, and there is a lot of overlap between what I put on Facebook and what I put on Google+.  But since Google+ has become so popular amongst the “techie” crowd, I have found that it’s a good place to follow some of the stars of the Microsoft world, like Scott Hanselman and Buck Woody. If you are also a member of Google+, I am looking to expand my circle there.  You can find me at https://plus.google.com/104990425207662620918/posts.  Google+ is the newest face in the social media world, and it still hasn’t found a good footing between personal and professional yet.  That’s why I felt it would be a good idea to jump on the site early and help them determine which way to go.  Maybe someday it will be a place where business and personal can mix. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Social Media

    Read the article

  • Silverlight Cream for March 11, 2010 -- #812

    - by Dave Campbell
    In this Issue: Walter Ferrari, Viktor Larsson, Bill Reiss(-2-, -3-, -4-), Jonathan van de Veen, Walt Ritscher, Jobi Joy, Pete Brown, Mike Taulty, and Mark Miller. Shoutouts: Going to MIX10? John Papa announced Got Questions? Ask the Experts at MIX10 Pete Brown listed The Essential WPF/Silverlight/XNA Developer and Designer Toolbox From SilverlightCream.com: How to extend Bing Maps Silverlight with an elevation profile graph - Part 2 In this second and final tutorial, Walter Ferrari adds elevation to his previous BingMaps post. I'm glad someone else worked this out for me :) Navigating AWAY from your Silverlight page Viktor Larsson has a post up on how to navigate to something other than your Silverlight page like maybe a mailto ... SilverSprite: Not just for XNA games any more Bill Reiss has a new version of SilverSprite up on CodePlex and if you're planning on doing any game development, you should check this out for sure Space Rocks game step 1: The game loop Bill Reiss has a tutorial series on Game development that he's beginning ... looks like a good thing to jump in on and play along. This first one is all about the game loop. Space Rocks game step 2: Sprites (part 1) In Part 2, Bill Reiss begins a series on Sprites in game development and positioning it. Space Rocks game step 3: Sprites (part 2) Bill Reiss's Part 3 is a follow-on tutorial on Sprites and moving according to velocity... fun stuff :) Adventures while building a Silverlight Enterprise application part No. 32 Jonathan van de Veen is discussing debugging and the evil you can get yourself wrapped up in... his scenario is definitely one to remember. Streaming Silverlight media from a Dropbox.com account Read the comments and the agreements, but I think Walt Ritscher's idea of using DropBox to serve up Streaming media is pretty cool! UniformGrid for Silverlight Jobi Joy wanted a UniformGrid like he's familiar with in WPF. Not finding one in the SDK or Toolkit, he converted the WPF one to Silverlight .. all good for you and me :) How to Get Started in WPF or Silverlight: A Learning Path for New Developers Pete Brown has a nice post up describing resources, tutorials, blogs, and books for devs just getting into Silveright or WPF, and thanks for the shoutout, Pete! Silverlight 4, MEF and the DeploymentCatalog ( again :-) ) Mike Taulty is revisiting the DeploymentCatalog to wrap it up in a class like he did the PackageCatalog previously MVVM with Prism 101 – Part 6b: Wrapping IClientChannel Mark Miller is back with a Part 6b on MVVM with Prism, and is answering some questions from the previous post and states his case against the client service proxy. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    MIX10

    Read the article

  • links for 2010-04-27

    - by Bob Rhubart
    @oracletechnet: Oracle Technology Network Newsletters Revisited "You may find this hard to believe, but some analysts contend that email newsletters are still among the most preferred methods of "information awareness" by developers today. And in our experience, the numbers back it up: subscriptions to Oracle Technology Network newsletters grow organically by 15% every year, even after you take continual list cleanup into account. " -- Justin Kestelyn (tags: oracle otn newsletters developers architects) Sylvain Duloutre: Directory Services as a Web Service Sylvain Duloutre shares a WSDL file he created to deal with issues involved in XML binding generation. (tags: oracle sun wsdl webservices DSEE netbeans jdeveloper) Nick Wooler: Iron-Clad Cloud: Secure Cloud Computing "One solution to the security problem with cloud services can be overcome using Service Oriented Security. The Oracle approach to using Service Oriented Security allows developers to pull from a centralized, authoritative source of identity services. This allows developers to build security into every application from the inside-out. This is critical to ensuring this is done in a standardized manner and most importantly it allows developers to develop without being security experts." -- Nick Wooler (tags: oracle sun security cloud saas) Andy Mulholland: A week of visits; Cisco, HP, Oracle, SAP and VMware (in alphabetical order!) "I now am considering that we should be thinking about ‘clouds’ in virtual way, by which I mean that a succession of virtual ‘clouds’ will need to exist, each possessing specific characteristics that suit certain types of services. Really it’s no different to what we see with servers today. Adding a hypervisor to a server adds new flexibility, but creating a virtualised environment means much more. What I suspect will happen is that we will start to use vendor specific approaches to building what I will term a physical cloud solution using their technology and approach to supporting a specific objective, but with time we will find these physical clouds will interoperate as a fully virtualised cloud environment." -- Andy Mulholland (tags: entarch enterprisearchitecture cloudcomputing virtualization) @fteter: Highlights From The Bright Lights - Tuesday #c10 Oracle Ace Director Floyd Teter of JPL with one last wrap-up of Collaborate 10. (tags: oracle otn collaborate2010 las vegas) Rittman Mead India – Call for very good Oracle BI Developers/Architects "Now that we have an office in India and if you are interested in joining us, do drop us a line at [email protected], and we will be glad to have technical discussions with you. If you are also an Oracle BI, DW or EPM customer looking for help on projects in the Asia-Pacific region, again we’ll be pleased to hear from you and to let you know how we can help." -- Venkatakrishnan J (tags: otn oracle jobs india developers architects software)

    Read the article

  • What was missing from the Content Strategy Forum?

    - by Roger Hart
    In April, Paris hosted the first ever Content Strategy Forum. The event's website proudly proclaims: 170 attendees, 18 nationalities, 17 speakers, 1 volcano... Content Strategy Forum 2010 rocked the world! The volcano was in Iceland, and the closest we came to rocking the world was a cursory mention in the Huffington Post, but I'll grant the event was awesome. One thing missing from that list, however, is "94 companies" (Plus a couple of universities and freelancers, and what have you). A glance through the attendees directory reveals a fairly wide organisational turnout - 24 students from two Parisian universities, countless design and marketing agencies, a series of tech firms, small and large. Two delegates from IBM, two from ARM, an appearance from RIM, Skype, and Facebook; twelve from the various bits of eBay. Oh, and, err, nobody from Google, Microsoft, Yahoo, Amazon, Play, Twitter, LinkedIn, Craigslist, the BBC, no banks I noticed, and I didn't spot a newspaper. You get the idea. Facebook notwithstanding, you have to scroll through a few pages to Alexa rankings to find company names from the attendee list. I find this interesting, and I'm not wholly sure what to make of it. Of the large, web-centric, content-rich organizations conspicuously absent, at least one of two things is true: They didn't know about the event They didn't care about the event Maybe these guys all have content strategy completely sorted, and it's an utterly naturalised part of their business process. Maybe nobody at say, Apple or Play.com ever publishes a single piece of content that isn't neatly tailored to their (clearly defined, of course) user and business goals. Wouldn't that be lovely? The thing is, in that rosy and beatific world, there's still a case for those folks to join the community. There are bound to be other perspectives, and things to learn. You see, the other thing achingly conspicuous by its absence was case studies. In her keynote address, Kristina Halvorson made the point that what content strategy really needs is some big, loud success stories. A point I'd firmly second as a content strategist working within an organisation. Sarah Cancilla's presentation on content strategy at Facebook included some very neat, specific examples, and was richer for it. It didn't hurt that the example was Facebook - you're getting impressively big numbers off base. What about the other big boys? Is there anybody out there with a perspective? Do we all just look very silly to you, fretting away over text and images and users and purposes? Is content validation and maintenance so accustomed a part of your business that calling attention to it is like sniffing the air and saying "Hmm, a lot of nitrogen about today."? And if it is, do you have any wisdom to share?

    Read the article

  • Add a Cache Clearing Button to Firefox

    - by Asian Angel
    While emptying your browser's cache may not be something you need to worry about often, or at all, there are times when clearing it can be helpful. The Empty Cache Button extension gives you instant, on-demand cache clearing in Firefox.
    Some reasons why you might want or need to clear your browser's cache:
    Clear out older (or out of date) versions of images, etc. from your favorite websites
    Free up disk space
    Clearing the cache may help fix browser behavior issues
    Help protect privacy (i.e. images, etc. displayed within a personal account)
    Before: For our example we loaded three webpages in order to add content to our browser's cache. Using the "CacheViewer" we were able to easily see the contents of our browser's cache after the webpages finished loading. What if you need to clear your cache immediately, without restarting your browser (if the options are set to empty the cache on browser exit)? Note: CacheViewer is available via a separate extension and can be found here.
    Empty Cache Button in Action: Once you install the extension, all you need to do is right-click on any of your browser's toolbars and select "Customise". Drag the "Toolbar Button" to an appropriate location in your browser's UI and you are ready to go. To clear your browser's cache, simply click the button…that is all there is to it. When the cache is empty, you will see a small message window appear in the lower right corner of your "Desktop". Opening up the "CacheViewer" again shows that everything has been cleared out. Terrific!
    Conclusion: If you ever find yourself needing to clear your browser's cache immediately, the Empty Cache Button extension provides an easy way to do so without restarting your browser (if the options are set to empty the cache on browser exit).
    Links: Download the Empty Cache Button extension (Mozilla Add-ons)
    Similar Articles Productive Geek Tips: Change SuperFetch to Only Cache System Boot Files in Vista, Troubleshoot Browsing Issues by Reloading the DNS Client Cache in Vista, Search for Install Packages from the Ubuntu Command Line, Quick Tip: Empty Internet Explorer 7 Cache when Browser is Closed, Remove the New Tab Button in Firefox

    Read the article

  • Install SQL Server 2008

    - by Jason Ulloa
    In this post I will try to explain the steps for installing SQL Server and configuring it afterwards.
    Step 1: Setup Support Rules. This is the first installation screen we see when we try to install SQL Server. Here, we only need to click Next.
    Step 2: Feature Selection. In my opinion this is the most important step of the SQL Server installation, because it lets us select all the components the instance will include. The key items are: Database Engine Services and the management tools. Everything else is a plus on top of the engine.
    Step 3: Instance Configuration. In this step we do not need to worry about anything. We simply click Next.
    Step 4: Disk Space Requirements. Again, there is nothing to do here. It is only an informative screen where SQL Server shows the current disk space and the space the installation will consume. Click Next.
    Step 5: Server Configuration. This is one of the most important steps, because here we tell SQL Server which account it will use to authenticate and start each of the services we selected at the beginning. Generally, when working locally, the NT AUTHORITY\SYSTEM account is the best option. If we select an account with insufficient permissions in this step, SQL Server will report an error. Click Next.
    Step 6: Database Engine Configuration. In this step we focus on the Account Provisioning tab, where we specify the account the database engine will use by default. The recommended approach is to click the "add current user" option, which adds the Windows user currently logged in. We can also select whether we want SQL Server authentication mode or Mixed Mode, which includes both SQL Server and Windows authentication. For our installation we will select only SQL Server authentication mode. Once we have added the user, we click Next.
    Step 7: Finish the configuration. After the previous steps, the remaining screens require nothing special. Just click Next and wait for the SQL Server installation to finish.
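    As a small, illustrative complement that is not part of the original walkthrough: the authentication mode chosen in Step 6 determines the kind of connection string a client application can use afterwards. The sketch below is a generic C# example; the server, database, and credential names are placeholders, and a SQL Server login such as the one shown only works if the instance allows SQL Server (mixed mode) authentication.
        using System;
        using System.Data.SqlClient;

        class ConnectionModes
        {
            static void Main()
            {
                // Windows (integrated) authentication: uses the caller's Windows account.
                var windowsAuth = "Server=localhost;Database=master;Integrated Security=True;";

                // SQL Server authentication: requires a SQL login and mixed mode
                // (placeholder credentials shown; swap this string in below to try it).
                var sqlAuth = "Server=localhost;Database=master;User Id=appUser;Password=appPassword;";

                using (var connection = new SqlConnection(windowsAuth))
                {
                    connection.Open();
                    Console.WriteLine("Connected, server version: " + connection.ServerVersion);
                }
            }
        }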

    Read the article

  • Solution 6 : Kill a Non-Clustered Process during Two-Node Cluster Failover

    - by StanleyGu
    Using Visual Studio 2008 and C#, I developed a Windows service A and deployed it to two nodes of a Windows Server 2008 failover cluster. The service A is part of the failover cluster service, which means that when failover occurs at node1, the cluster service will fail the Windows service A over from node 1 to node 2. One of the tasks implemented by the Windows service A is to start, monitor, or kill a process B. The process B is installed on the two nodes but is not part of the failover cluster service. When a failover occurs at node1, the cluster service does not fail the process B over from node 1 to node 2, and the process B continues running at node1. The requirement is: when failover occurs at node1, we want the process B running at node1 to be killed, but we do not want the process B to be part of the failover cluster service.
    The first idea that pops up immediately is to put some code in an event handler triggered by the failover in the Windows service A. The effect of failover on the Windows service A is similar to using Task Manager to kill the service's process, but there is no Windows service event that can be triggered by killing the service's process. The events related to terminating a Windows service are OnStop and OnShutdown, and killing the process of Windows service A triggers neither of them. The OnStop event can only be triggered by stopping the Windows service using the Services Control Manager or the Services Management Console. Apparently, the first idea is not feasible.
    The second idea that emerges is to put code into the OnStart event handler of the Windows service A. When failover occurs at node 1, the Windows service A is killed at node 1 and started at node 2. During startup, the Windows service A at node 2 kills the process B that is running at node 1. It is a workaround and it works very well. The C# implementation within the OnStart event handler is as follows:
    1. Capture the server names of the two nodes from App.config.
    2. Determine the server name of the remote node.
    3. Kill the process B running on the remote node.
    Check here for sample code.
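    The article links to its own sample code; what follows is only a rough, hypothetical C# sketch of those three steps, assuming WMI remoting is enabled between the two nodes and that the node names are stored under App.config keys called "Node1" and "Node2" (the key names and the helper class are invented for illustration).
        using System;
        using System.Configuration;
        using System.Management;

        class RemoteProcessKiller
        {
            // Called from the service's OnStart: kill process B on the other cluster node.
            public static void KillProcessOnRemoteNode(string processName)
            {
                // Step 1: read both node names from App.config (key names are assumed).
                string node1 = ConfigurationManager.AppSettings["Node1"];
                string node2 = ConfigurationManager.AppSettings["Node2"];

                // Step 2: the remote node is whichever one we are not running on.
                string localNode = Environment.MachineName;
                string remoteNode = string.Equals(localNode, node1, StringComparison.OrdinalIgnoreCase) ? node2 : node1;

                // Step 3: terminate the process on the remote node via WMI.
                var scope = new ManagementScope(@"\\" + remoteNode + @"\root\cimv2");
                scope.Connect();
                var query = new SelectQuery("SELECT * FROM Win32_Process WHERE Name = '" + processName + "'");
                using (var searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject process in searcher.Get())
                    {
                        process.InvokeMethod("Terminate", null);
                    }
                }
            }
        }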

    Read the article

  • MVVM Light Toolkit V3 SP1 for Windows Phone 7

    - by Laurent Bugnion
    He he, I start to sound like Microsoft… Anyway… I just released a service pack (SP1) for MVVM Light Toolkit V3.
    Why? Well, mostly because I worked a bit more with the Windows Phone 7 tools that were released at MIX10, and I noticed a few things that could be better in the Windows Phone 7 template. Also, I only found out at MIX that you can actually install custom project templates for Visual Studio Express. For some reason I thought it was not possible. The best way to solve these issues is through a service pack, which consists of a few zip files. Simply follow the instructions on the "Installing Manually" page. You can go ahead and overwrite the files that were installed with V3; all the file structure and names are exactly the same.
    What? So what do you get in this service pack that was not already in V3? (for more info about what's new in V3, check the What's New page).
    Project and Item templates for Visual Studio 10 Express (phone edition). Unzip these files in your "My Documents" folder, and you can now create a new MVVM Light application in the WinPhone7 version of Visual Studio 2010 Express.
    Signed assemblies: All the assemblies are now signed, which is a requirement in certain build configurations.
    XML documentation files: Thanks to Matt Casto for pinging me and reminding me that I had forgotten to include them (doh).
    New and improved Windows Phone 7 assemblies and templates: This one deserves its own section (see below).
    What was wrong with the old Silverlight 3 assemblies in Windows Phone 7 projects? It was kind of weird. Functionality-wise, it was working just right. However, if you noticed, the EventToCommand behavior was not visible in the Assets tab of Expression Blend, under Behaviors, where it should normally have been. The reason was that even though Windows Phone 7 is using Silverlight 3, the System.Windows.Interactivity that Blend was expecting is the version that is normally used in Silverlight 4. Yeah, I know, it's weird. This led me to create a specific version of these assemblies for the phone. The assemblies are located in C:\Program Files\Laurent Bugnion (GalaSoft)\Mvvm Light Toolkit\Binaries\WP7. There are 3 DLLs:
    GalaSoft.MvvmLight.WP7.dll with RelayCommand, Messenger and ViewModelBase
    GalaSoft.MvvmLight.Extras.WP7.dll with EventToCommand and DispatcherHelper
    System.Windows.Interactivity.dll, which is the same DLL installed in the Blend SDK, and which is needed for the EventToCommand behavior to work.
    Happy coding! That's all! Download and install the service pack according to the instructions on the Installation page, and create your first MVVM Light application for the phone (a blog post will follow later with more details).
    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
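    For readers new to the toolkit, here is a minimal, hypothetical sketch (not from the post) of how the types shipped in GalaSoft.MvvmLight.WP7.dll are typically wired together in a view model; the property and command names are invented for illustration, and the code assumes the V3 API surface (ViewModelBase.RaisePropertyChanged and the RelayCommand constructor that takes an Action).
        using System;
        using GalaSoft.MvvmLight;
        using GalaSoft.MvvmLight.Command;

        // A tiny Windows Phone 7 view model built on the MVVM Light V3 building blocks.
        public class MainViewModel : ViewModelBase
        {
            private string _status = "Ready";

            public string Status
            {
                get { return _status; }
                set
                {
                    _status = value;
                    RaisePropertyChanged("Status"); // notifies the bound view
                }
            }

            // Bound to a button in XAML, for example via the EventToCommand behavior.
            public RelayCommand RefreshCommand { get; private set; }

            public MainViewModel()
            {
                RefreshCommand = new RelayCommand(() => Status = "Refreshed at " + DateTime.Now);
            }
        }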

    Read the article

  • How to Assign a Static IP to an Ubuntu 10.04 Desktop Computer

    - by Mysticgeek
    If you have a home network with several computers, assigning them static IP addresses can make troubleshooting easier. Today we take a look at switching from DHCP to a static IP in Ubuntu.
    Assign a Static IP: Using static IPs prevents address conflicts between machines and can allow easier access to them. If you have a small home network and are satisfied with the machines getting their IP address automatically via DHCP, there won't be anything gained by using static addresses. Static IPs aren't necessarily for the average user, but if you're a geek who wants to know the address assigned to each machine, they can allow for faster troubleshooting.
    To change your Ubuntu machine to a static IP, go to System \ Preferences \ Network Connections. In our example, we're on a wired system, so click on the Wired tab, then select Auto eth0 and click on Edit. Select the IPv4 Settings tab, change Method to Manual, and click the Add button. Then type in the static IP address, subnet mask, DNS servers, and default gateway, and click Apply when you're finished. Make sure to hit Enter after typing in the default gateway, otherwise it will revert back to 0.0.0.0. You'll need to enter your admin password before the changes go into effect.
    To verify the changes have been made successfully, launch a Terminal session and type in ifconfig at the command prompt, or follow these directions. You also might want to ping the address from another machine to make sure everything is communicating.
    If you want to assign a static IP to your Windows machines, check out our article on how to assign a static IP on Windows systems (make sure to browse the comments as our readers have some good suggestions). Whether you have a small office or home network set up with a server and several machines, using a static IP on each device can help you manage them easily. Again, it isn't for everyone, as it really depends on how your network is set up and the way you use it.
    Similar Articles Productive Geek Tips: Change Ubuntu Desktop from DHCP to a Static IP Address, Allow Remote Control To Your Desktop On Ubuntu, Assign Custom Shortcut Keys on Ubuntu Linux, Keyboard Ninja: 21 Keyboard Shortcut Articles, Change Ubuntu Server from DHCP to a Static IP Address

    Read the article

  • Add Global Hotkeys to Windows Media Player

    - by DigitalGeekery
    Do you use Windows Media Player in the background while working in other applications? The WMP Keys plug-in for Media Player adds global keyboard shortcuts that allow you to control Media Player even when it isn't in focus. Windows Media Player has a slew of keyboard shortcuts that work only when the player is active, but these shortcuts stop working once WMP is no longer in focus or is minimized. WMP Keys adds the following default global hotkeys for Windows Media Player 10, 11, and 12:
    Ctrl+Alt+Home – Play / Pause
    Ctrl+Alt+Right – Next track
    Ctrl+Alt+Left – Previous track
    Ctrl+Alt+Up Arrow Key – Volume Up
    Ctrl+Alt+Down Arrow Key – Volume Down
    Ctrl+Alt+F – Fast Forward
    Ctrl+Alt+B – Fast Backward
    Ctrl+Alt+[1-5] – Rate 1-5 stars
    Note: Tapping Ctrl+Alt+F and Ctrl+Alt+B will skip ahead or back in 5-second intervals.
    Close out of Windows Media Player, then download and install WMP Keys (link below). After you've installed WMP Keys, you'll need to enable it. Select Organize and then Options… In the Options window, select the Plug-ins tab, click Background in the Category window, then check the box for Wmpkeys Plugin. Click OK to save and exit. You can also enable the plug-in by selecting Tools > Plug-ins and clicking Wmpkeys Plugin.
    You can view and edit the global hotkeys in the WMP Keys settings window. Select Tools > Plug-in properties and click Wmpkeys Plugin. Below you can see all the default WMP Keys shortcuts. To change any of the shortcuts, select the text box, then press the new keyboard shortcut. Click OK when finished.
    WMP Keys is a very simple little plug-in that makes using WMP while you're multitasking just a little bit easier and more efficient. Looking for more plugins for Windows Media Player? Check out our previous articles on adding new features with Media Player Plus, and displaying song lyrics with Lyrics Plugin.
    Download WMP Keys
    Similar Articles Productive Geek Tips: Built-in Quick Launch Hotkeys in Windows Vista, Fixing When Windows Media Player Library Won't Let You Add Files, Kantaris is a Unique Media Player Based on VLC, Install and Use the VLC Media Player on Ubuntu Linux, Assign Keyboard Media Keys to Work in Winamp

    Read the article

< Previous Page | 485 486 487 488 489 490 491 492 493 494 495 496  | Next Page >