Search Results

Search found 9273 results on 371 pages for 'complex strings'.


  • Oracle Delivers Latest Release of Oracle Enterprise Manager 12c

    - by Scott McNeil
    Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption News Summary IT organizations are adopting private clouds as a stepping-stone to business-driven, self-service IT. Successful implementations hinge on the ability to efficiently deploy and manage cloud services at enterprise scale. Having a complete cloud management solution integrated with an enterprise-class technology stack is a fundamental requirement for IT. Oracle Enterprise Manager 12c Release 4 meets that requirement by helping businesses become more agile and responsive, while reducing cost, complexity, and risk. News Facts Oracle Enterprise Manager 12c Release 4, available today, lets organizations rapidly adopt Oracle-based, enterprise-scale private clouds. New capabilities provide advanced technology stack management, secure database administration, and enterprise service governance, enabling Oracle customers and partners to maximize database and application performance and drive innovation using self-service IT platforms. The enhancements have been driven by customers and the growing Oracle Enterprise Manager Ecosystem, comprised of more than 750 Oracle PartnerNetwork (OPN) Specialized partners. Oracle and its partners and customers have built over 140 plug-ins and connectors for Oracle Enterprise Manager. Watch the video highlights. Automation for Broader Cloud Services Oracle Enterprise Manager 12c Release 4 allows for a rapid enterprise-wide adoption of database, middleware and infrastructure services in the private cloud, driven by an enhanced API-enabled service catalog. The release features “push button” style provisioning of complete environments such as SOA and Oracle Active Data Guard, and fast data cloning that enables rapid deployment and testing of enterprise applications. Out-of-the-box capabilities to detect data and configuration vulnerabilities provide enhanced cloud service governance along with greater operational control through a flexible and extensible showback mechanism. Enhanced Database Management A new performance warehouse enables predictive database diagnostics and trend analysis and helps identify database problems before they occur. New enterprise data-governance capabilities enhance security by helping systematically discover and protect sensitive data. Step-by-step orchestration of upgrades with the ability to rollback changes enables faster adoption of Oracle Database 12c. Expanded Fusion Middleware Management A new consolidated view of Oracle Fusion Middleware 12c deployments with a guided management capability lets administrators apply best management practices to diverse middleware environments and identify performance issues quickly. A Java VM Diagnostics as a Service feature allows governed access to diagnostics data for IT workers across multiple disciplines for accelerated DevOps resolutions of defects and performance optimization. New automated provisioning for SOA lets middleware administrators perform mass SOA provisioning with ease. Superior Enterprise-Grade Management Private roles and preferred credentials have been added to Oracle Enterprise Manager to provide additional fine-grained security for organizations with complex access control requirements. A new security console provides a single point of control for managing the security of Oracle Enterprise Manager environments. 
Support for the latest industry standard SNMP v3 protocol, including encryption, enables more secure heterogeneous management. “Smart monitoring” adapts to observed environmental changes and adds self-management capabilities to help Oracle Enterprise Manager run at peak performance, while demanding less IT supervision. Supporting Quotes “Lawrence Livermore National Laboratory has a strong tradition of technology breakthroughs and leadership. As a member of Oracle’s Customer Advisory Board for Oracle Enterprise Manager, we have consistently provided feedback and guidance in the areas of enterprise-scale cloud, self-diagnosability, and secure administration for the product,” said Tim Frazier, CIO, NIF and Photon Sciences, Lawrence Livermore National Laboratory. “We intend to take advantage of the Release 4 features that support enterprise-scale availability and fine-grained security capabilities for private cloud deployments.” “IDC's most recent CloudTrack survey shows that most enterprises plan to adopt hybrid cloud architectures over the next three years,” said Mary Johnston Turner, Research Vice President, Enterprise System Management Software, IDC. “These organizations plan to deploy a wide range of workloads into cloud environments including mission critical database and middleware services that require high levels of fault tolerance and disaster recovery. Such capabilities were traditionally custom configured for each application but cloud offers the possibility to incorporate such properties within the service definition, enabling organizations to adopt cloud without compromise. With the latest release of Oracle Enterprise Manager 12c, Oracle is providing customers with an out-of-the-box experience for delivering highly-resilient cloud services for databases and applications.” “Since its inception, Oracle has been leading the way in innovative, scalable and high performance solutions for the enterprise. With this release of Oracle Enterprise Manager, we are extending this leadership by providing enterprise-scale capabilities for planning, delivering, and managing private clouds. We call this ‘zero-to-cloud – accelerated.’ These enhancements help our customers to expedite their adoption of cloud computing and prepares them for the next generation of self-service IT,” said Prakash Ramamurthy, senior vice president of Systems and Cloud Management at Oracle. Supporting Resources Oracle Enterprise Manager 12c Video: Cerner Delivers High Performance Private Cloud Video: BIAS Achieves Outstanding Results with Private Cloud Press Release Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • Integration with Multiple Versions of BizTalk HL7 Accelerator Schemas

    - by Paul Petrov
    Microsoft BizTalk Accelerator for HL7 comes with multiple versions of the HL7 implementation. One of the typical integration tasks is to receive one format and transmit another. For example, system A works with HL7 v2.4 messages, system B with v2.3, and system C with v2.2. System A is exchanging messages with B and C. The logical solution is to create schemas in separate namespaces for each system and assign maps on the send ports. A schematic diagram of the messaging solution is shown below. Nothing about that is complex conceptually. At the implementation level, though, things can get nasty because of the elaborate nature of HL7 schemas and the sheer number of message types involved. If you try to implement the maps directly in BizTalk Map Editor, you quickly get buried under thousands of links between subfields of HL7 segments. Since the task is repetitive (HL7 segments are reused between message types), it's natural to take advantage of that modular structure and reduce the amount of work through reuse. Here's where it makes sense to switch from the visual map editor to plain old XSLT. The implementation is done in three steps. First, create XSL templates that map the segments of one version to another. This can be done using BizTalk Map Editor and then copying and modifying the generated XSL code to create one xsl:template per segment. Group all segments for a format mapping in one XSL file (we call it SegmentTemplates.xsl). Here's how the template for the PID segment (Patient Identification) looks:

        <xsl:template name="PID">
          <PID_PatientIdentification>
            <xsl:if test="PID_PatientIdentification/PID_1_SetIdPatientId">
              <PID_1_SetIdPid>
                <xsl:value-of select="PID_PatientIdentification/PID_1_SetIdPatientId/text()" />
              </PID_1_SetIdPid>
            </xsl:if>
            <xsl:for-each select="PID_PatientIdentification/PID_2_PatientIdExternalId">
              <PID_2_PatientId>
                <xsl:if test="CX_0_Id">
                  <CX_0_Id>
                    <xsl:value-of select="CX_0_Id/text()" />
                  </CX_0_Id>
                </xsl:if>
                <xsl:if test="CX_1_CheckDigit">
                  <CX_1_CheckDigitSt>
                    <xsl:value-of select="CX_1_CheckDigit/text()" />
                  </CX_1_CheckDigitSt>
                </xsl:if>
                <xsl:if test="CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed">
                  <CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed>
                    <xsl:value-of select="CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed/text()" />
                  </CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed>
                </xsl:if>
                . . . <!-- skipped for brevity -->

    This is the most tedious and time-consuming part. Templates need to be created only for those segments that are actually used in the message interchange. Once this is done, the rest goes much more easily. The next step is to create a message-type-specific XSL file that imports the segment templates file. Inside this file, simply call the segment templates in the appropriate places.
    For example, the beginning of the mapping XSL for the ADT_A01 message would look like this:

        <xsl:import href="SegmentTemplates_23_to_24.xslt" />
        <xsl:output omit-xml-declaration="yes" method="xml" version="1.0" />
        <xsl:template match="/">
          <xsl:apply-templates select="s0:ADT_A01_23_GLO_DEF" />
        </xsl:template>
        <xsl:template match="s0:ADT_A01_23_GLO_DEF">
          <ns0:ADT_A01_24_GLO_DEF>
            <xsl:call-template name="EVN" />
            <xsl:call-template name="PID" />
            <xsl:for-each select="PD1_PatientDemographic">
              <xsl:call-template name="PD1" />
            </xsl:for-each>
            <xsl:call-template name="PV1" />
            <xsl:for-each select="PV2_PatientVisitAdditionalInformation">
              <xsl:call-template name="PV2" />
            </xsl:for-each>

    This code simply calls the segment templates directly for required singular elements, and inside a for-each loop for optional or repeating elements. Lastly, create a BizTalk map (.btm) that references the message-type-specific XSL. It is essentially an empty map with Custom XSL Path set to the appropriate XSL file. In the end, you have one segment templates file that is referenced by many message-type-specific XSL files, which in turn are used by BizTalk maps. Once all the segment templates are created they are widely reusable, and the rest of the work is very simple and clean.

    Read the article

  • problem occur during installation of moses scripts

    - by lenny99
    we got error when compile moses-script. process of it as follows: minakshi@minakshi-Vostro-3500:~/Desktop/monu/moses/scripts$ make release # Compile the parts make all make[1]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts' # Building memscore may fail e.g. if boost is not available. # We ignore this because traditional scoring will still work and memscore isn't used by default. cd training/memscore ; \ ./configure && make \ || ( echo "WARNING: Building memscore failed."; \ echo 'training/memscore/memscore' >> ../../release-exclude ) checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking for g++... g++ checking whether the C++ compiler works... yes checking for C++ compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking for style of include used by make... GNU checking dependency style of g++... gcc3 checking for gcc... gcc checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking for boostlib >= 1.31.0... yes checking for cos in -lm... yes checking for gzopen in -lz... yes checking for cblas_dgemm in -lgslcblas... no checking for gsl_blas_dgemm in -lgsl... no checking how to run the C++ preprocessor... g++ -E checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking n_gram.h usability... no checking n_gram.h presence... no checking for n_gram.h... no checking for size_t... yes checking for ptrdiff_t... yes configure: creating ./config.status config.status: creating Makefile config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore' make all-am make[3]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore' make[3]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore' make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/memscore' touch release-exclude # No files excluded by default pwd=`pwd`; \ for subdir in cmert-0.5 phrase-extract symal mbr lexical-reordering; do \ make -C training/$subdir || exit 1; \ echo "### Compiler $subdir"; \ cd $pwd; \ done make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/cmert-0.5' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/cmert-0.5' ### Compiler cmert-0.5 make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/phrase-extract' make[2]: Nothing to be done for `all'. 
    make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/phrase-extract' ### Compiler phrase-extract make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/symal' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/symal' ### Compiler symal make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/mbr' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/mbr' ### Compiler mbr make[2]: Entering directory `/home/minakshi/Desktop/monu/moses/scripts/training/lexical-reordering' make[2]: Nothing to be done for `all'. make[2]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts/training/lexical-reordering' ### Compiler lexical-reordering ## All files that need compilation were compiled make[1]: Leaving directory `/home/minakshi/Desktop/monu/moses/scripts' /bin/sh: ./check-dependencies.pl: not found make: *** [release] Error 127 We don't know why this error occurs. The check-dependencies.pl file does exist in the scripts folder ...

    Read the article

  • How to Use Steam In-Home Streaming

    - by Chris Hoffman
    Steam’s In-Home Streaming is now available to everyone, allowing you to stream PC games from one PC to another PC on the same local network. Use your gaming PC to power your laptops and home theater system. This feature doesn’t allow you to stream games over the Internet, only over the same local network. Even if you tricked Steam, you probably wouldn’t get good streaming performance over the Internet.
    Why Stream? When you use Steam In-Home Streaming, one PC sends its video and audio to another PC. The other PC views the video and audio like it’s watching a movie, sending back mouse, keyboard, and controller input to the other PC. This allows you to have a fast gaming PC power your gaming experience on slower PCs. For example, you could play graphically demanding games on a laptop in another room of your house, even if that laptop has slower integrated graphics. You could connect a slower PC to your television and use your gaming PC without hauling it into a different room in your house. Streaming also enables cross-platform compatibility. You could have a Windows gaming PC and stream games to a Mac or Linux system. This will be Valve’s official solution for compatibility with old Windows-only games on the Linux (Steam OS) Steam Machines arriving later this year. NVIDIA offers their own game streaming solution, but it requires certain NVIDIA graphics hardware and can only stream to an NVIDIA Shield device.
    How to Get Started In-Home Streaming is simple to use and doesn’t require any complex configuration — or any configuration, really. First, log into the Steam program on a Windows PC. This should ideally be a powerful gaming PC with a powerful CPU and fast graphics hardware. Install the games you want to stream if you haven’t already — you’ll be streaming from your PC, not from Valve’s servers. (Valve will eventually allow you to stream games from Mac OS X, Linux, and Steam OS systems, but that feature isn’t yet available. You can still stream games to these other operating systems.) Next, log into Steam on another computer on the same network with the same Steam username. Both computers have to be on the same subnet of the same local network. You’ll see the games installed on your other PC in the Steam client’s library. Click the Stream button to start streaming a game from your other PC. The game will launch on your host PC, and it will send its audio and video to the PC in front of you. Your input on the client will be sent back to the server. Be sure to update Steam on both computers if you don’t see this feature. Use the Steam > Check for Updates option within Steam and install the latest update. Updating to the latest graphics drivers for your computer’s hardware is always a good idea, too.
    Improving Performance Here’s what Valve recommends for good streaming performance:
    - Host PC: A quad-core CPU for the computer running the game, minimum. The computer needs enough processor power to run the game, compress the video and audio, and send it over the network with low latency.
    - Streaming Client: A GPU that supports hardware-accelerated H.264 decoding on the client PC. This hardware is included on all recent laptops and PCs. If you have an older PC or netbook, it may not be able to decode the video stream quickly enough.
    - Network Hardware: A wired network connection is ideal. You may have success with wireless N or AC networks with good signals, but this isn’t guaranteed.
    - Game Settings: While streaming a game, visit the game’s settings screen and lower the resolution or turn off VSync to speed things up.
    - In-Home Streaming Settings: On the host PC, click Steam > Settings and select In-Home Streaming to view the In-Home Streaming settings. You can modify your streaming settings to improve performance and reduce latency. Feel free to experiment with the options here and see how they affect performance — they should be self-explanatory.
    Check Valve’s In-Home Streaming documentation for troubleshooting information. You can also try streaming non-Steam games. Click Games > Add a Non-Steam Game to My Library on your host PC and add a PC game you have installed elsewhere on your system. You can then try streaming it from your client PC. Valve says this “may work but is not officially supported.” Image Credit: Robert Couse-Baker on Flickr, Milestoned on Flickr

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.
    1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted.
    2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react.
    So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.
        RESTORE DATABASE WidgetProduction_virtual
        FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
        WITH
            MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
            MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
            NORECOVERY, STATS=1, REPLACE
        GO
        RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY
        GO

    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8Gb production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional used storage for the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
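    Once the virtual database is online, a nightly release test can run a quick sanity check before applying the upgrade scripts. Here is a minimal sketch of such a check; the dbo.Widget table and its Name column are hypothetical stand-ins for whatever objects exist in your own production schema, and nothing below is specific to SQL Virtual Restore itself:

        -- Confirm the virtual database accepts both reads and writes before the upgrade run
        USE WidgetProduction_virtual;
        GO
        SELECT COUNT(*) AS WidgetCount FROM dbo.Widget;   -- read check (dbo.Widget is a hypothetical table)
        BEGIN TRAN;
        UPDATE TOP (1) dbo.Widget SET Name = Name;        -- write check, immediately undone
        ROLLBACK TRAN;
        GO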

    Read the article

  • SQL SERVER – Number-Crunching with SQL Server – Exceed the Functionality of Excel

    - by Pinal Dave
    Imagine this. Your users have developed an Excel spreadsheet that extracts data from your SQL Server database, manipulates that data through the use of Excel formulas and, possibly, some VBA code, which is then used to calculate P&L, hedging requirements or even risk numbers. Management comes to you and tells you that they need to get rid of the spreadsheet and that the results of the spreadsheet calculations need to be persisted in the database. SQL Server has a very small set of functions for analyzing data. Excel has hundreds of functions for analyzing data, with many of them focused on specific financial and statistical calculations. Is it even remotely possible that you can use SQL Server to replace the complex calculations being done in a spreadsheet? Westclintech has developed a library of functions that match or exceed the functionality of Excel’s functions and contains many functions that are not available in Excel. Their XLeratorDB library of functions contains over 700 functions that can be incorporated into T-SQL statements. XLeratorDB takes advantage of the SQL CLR architecture introduced in SQL Server 2005. SQL CLR permits managed code to be compiled into the database and run alongside built-in SQL Server functions like COUNT or SUM. The Westclintech developers have taken advantage of this architecture to bring robust analytical functions to the database. In our hypothetical spreadsheet, let’s assume that our users are using the YIELD function and that the data are extracted from a table in our database called BONDS. Here’s what the spreadsheet might look like. We go to column G and see that it contains the following formula. Obviously, SQL Server does not offer a native YIELD function. However, with XLeratorDB we can replicate this calculation in SQL Server with the following statement:

        SELECT *,
            wct.YIELD(CAST(GETDATE() AS date),Maturity,Rate,Price,100,Frequency,Basis) AS YIELD
        FROM BONDS

    This produces the following result. This illustrates one of the best features of XLeratorDB; it is so easy to use. Since I knew that the spreadsheet was using the YIELD function, I could use the same function with the same calling structure to do the calculation in SQL Server. I didn’t need to know anything at all about the mechanics of calculating the yield on a bond. It was pretty close to cut and paste. In fact, that’s one way to construct the SQL. Just copy the function call from the cell in the spreadsheet, paste it into SSMS, and change the cell references to column names. I built the SQL for this query by starting with this.

        SELECT *
            ,YIELD(TODAY(),B2,C2,D2,100,E2,F2)
        FROM BONDS

    I then changed the cell references to column names.

        SELECT *
            --,YIELD(TODAY(),B2,C2,D2,100,E2,F2)
            ,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis)
        FROM BONDS

    Finally, I replicated the TODAY() function using GETDATE() and added the schema name to the function name.

        SELECT *
            --,YIELD(TODAY(),B2,C2,D2,100,E2,F2)
            --,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis)
            ,wct.YIELD(GETDATE(),Maturity,Rate,Price,100,Frequency,Basis)
        FROM BONDS

    Then I am able to execute the statement, returning the results seen above. The XLeratorDB libraries are heavy on financial, statistical, and mathematical functions. Where there is an analog to an Excel function, the XLeratorDB function uses the same naming conventions and calling structure as the Excel function, but there are also hundreds of additional functions for SQL Server that are not found in Excel.
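    Because the stated requirement is to persist the spreadsheet results in the database, the same wct.YIELD call can feed an ordinary INSERT ... SELECT. A minimal sketch follows; the dbo.BondYields target table is hypothetical, and only the wct.YIELD call itself comes from the example above:

        -- dbo.BondYields is a hypothetical table created to hold the persisted results
        INSERT INTO dbo.BondYields (Maturity, Rate, Price, Frequency, Basis, Yield)
        SELECT Maturity, Rate, Price, Frequency, Basis,
               wct.YIELD(CAST(GETDATE() AS date), Maturity, Rate, Price, 100, Frequency, Basis)
        FROM BONDS;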
    You can find the functions by opening Object Explorer in SQL Server Management Studio (SSMS) and expanding the Programmability folder under the database where the functions have been installed. The Functions folder expands to show four sub-folders: Table-valued Functions, Scalar-valued Functions, Aggregate Functions, and System Functions. You can expand any of the first three folders to see the XLeratorDB functions. Since the wct.YIELD function is a scalar function, we will open the Scalar-valued Functions folder, scroll down to the wct.YIELD function and click the plus sign (+) to display the input parameters. The functions are also IntelliSense-enabled, with the input parameters displayed directly in the query tab. The Westclintech website contains documentation for all the functions, including examples that can be copied directly into a query window and executed. There are also more than one hundred articles on the site which go into more detail about how some of the functions work and demonstrate some of the extensive business processes that can be done in SQL Server using XLeratorDB functions and some T-SQL. XLeratorDB is organized into libraries: finance, statistics, math, strings, engineering, and financial options. There is also a windowing library for SQL Server 2005, 2008, and 2012 which provides functions for calculating things like running and moving averages (which were introduced in SQL Server 2012), FIFO inventory calculations, financial ratios and more, without having to use triangular joins. To get started you can download the XLeratorDB 15-day free trial from the Westclintech web site. It is a fully-functioning, unrestricted version of the software. If you need more than 15 days to evaluate the software, you can simply download another 15-day free trial. XLeratorDB is an easy and cost-effective way to start adding sophisticated data analysis to your SQL Server database without having to know anything more than T-SQL. Get XLeratorDB Today and Now! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Excel

    Read the article

  • Online Password Security Tactics

    - by BuckWoody
    Recently two more large databases were attacked and compromised, one at the popular Gawker Media sites and the other at McDonald’s. Every time this kind of thing happens (which is FAR too often) it should remind the technical professional to ensure that they secure their systems correctly. If you write software that stores passwords, they should be heavily encrypted, and not human-readable in any storage (a minimal T-SQL sketch of what that can look like appears below). I advocate a different store for the login and password, so that if one is compromised, the other is not. I also advocate that you set a bit flag when a user changes their password, and send out a reminder to change passwords if that bit isn’t changed every three or six months. But this post is about the *other* side – what to do to secure your own passwords, especially those you use online, either in a cloud service or at a provider. While you’re not in control of these breaches, there are some things you can do to help protect yourself. Most of these are obvious, but they contain a few little twists that make the process easier.
    Use Complex Passwords
    This is easily stated, and probably one of the most unheeded pieces of advice. There are three main concepts here:
    - Don’t use a dictionary-based word
    - Use mixed case
    - Use punctuation, special characters and so on
    So this: password
    Isn’t nearly as safe as this: P@ssw03d
    Of course, this only helps if the site that stores your password encrypts it. Gawker does, so theoretically if you had the second password you’re in better shape, at least, than the first. Dictionary words are quickly broken, regardless of the encryption, so the more unusual characters you use, and the farther away from dictionary words you get, the better. Of course, this doesn’t help, not even a little, if the site stores the passwords in clear text, or the key to their encryption is broken. In that case…
    Use a Different Password at Every Site
    What? I have hundreds of sites! Are you kidding me? Nope – I’m not. If you use the same password at every site, when a site gets attacked, the attacker will store your name and password value for attacks at other sites. So the only safe thing to do is to use different names or passwords (or both) at each site. Of course, most sites use your e-mail as a username, so you’re kind of hosed there. So even though you have hundreds of sites you visit, you need to have at least a different password at each site. But it’s easier than you think – if you use an algorithm. What I’m describing is to pick a “root” password, and then modify it based on the site or purpose. That way, if one site is compromised, you can still use that root password for the other sites. Let’s take that second password: P@ssw03d. Now you can append, prepend or intersperse that password with other characters to make it unique to the site. That way you can easily remember the root password, but make it unique to the site. For instance, perhaps you read a lot of information on Gawker – how about these:
    P@ssw03dRead
    ReadP@ssw03d
    PR@esasdw03d
    If you have lots of sites, tracking even this can be difficult, so I recommend you use password software such as Password Safe or some other tool to keep a secure database of your passwords for each site. DO NOT store this on the web. DO NOT use an Office document (Microsoft or otherwise) that is “encrypted” – the encryption office automation packages use is very trivial, and easily broken. A quick web search for tools to do that should show you how bad a choice this is.
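    As promised above, here is a minimal sketch of storing only a salted hash rather than the password itself. The table name, column sizes and login value are hypothetical, HASHBYTES with SHA2_256 requires SQL Server 2012 or later, and a real system would use a dedicated password-hashing scheme (bcrypt, PBKDF2 or similar) rather than a bare SHA-2 hash; the point is simply that nothing human-readable is ever stored:

        -- Hypothetical storage: keep the login and the password hash apart from other user data
        CREATE TABLE dbo.UserCredential
        (
            LoginName    nvarchar(256)  NOT NULL PRIMARY KEY,
            PasswordSalt varbinary(16)  NOT NULL,
            PasswordHash varbinary(32)  NOT NULL,   -- SHA2_256 produces 32 bytes
            LastChanged  datetime       NOT NULL DEFAULT GETUTCDATE()
        );
        GO
        -- Store a salted hash instead of the clear-text password
        DECLARE @password nvarchar(128) = N'P@ssw03dRead';
        DECLARE @salt     varbinary(16) = CRYPT_GEN_RANDOM(16);
        INSERT INTO dbo.UserCredential (LoginName, PasswordSalt, PasswordHash)
        VALUES (N'someone@example.com', @salt,
                HASHBYTES('SHA2_256', @salt + CAST(@password AS varbinary(256))));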
    Change Your Password on a Schedule
    I know. It’s a real pain. And it doesn’t seem worth it… until your account gets hacked. A quick note here – whenever a site gets hacked (and I find out about it) I change the password at that site immediately (or quit doing business with them) and then change the root password on every site, as quickly as I can. If you follow the tip above, it’s not as hard. Just add another number, year, month, day, something like that into the mix. It’s not unlike making a Primary Key in an RDBMS.
    P@ssw03dRead10242010
    Change the site, and then update your password database. I do this about once a month, on the first or last day, during staff meetings. :)
    If you have other tips, post them here. We can all learn from each other on this.

    Read the article

  • Blazing fast performance with RadGridView for WPF 4.0 and Entity Framework 4.0

    Just before our upcoming release of Q1 2010 SP1 (early next week), I’ve decided to check how RadGridView for WPF will handle a complex Entity Framework 4.0 query with almost 2 million records:

        public class MyDataContext
        {
            IQueryable _Data;
            public IQueryable Data
            {
                get
                {
                    if (_Data == null)
                    {
                        var northwindEntities = new NorthwindEntities();
                        var queryable = from o in northwindEntities.Orders
                                        from od in northwindEntities.Order_Details
                                        select new
                                        {
                                            od.OrderID,
                                            od.ProductID,
                                            od.UnitPrice,
                                            od.Quantity,
                                            od.Discount,
                                            o.CustomerID,
                                            o.EmployeeID,
                                            o.OrderDate
                                        };
                        _Data = queryable.OrderBy(i => i.OrderID);
                    }
                    return _Data;
                }
            }
        }

    The grid is bound completely codelessly in XAML using RadDataPager with PageSize set to 50:

        <Window x:Class="WpfApplication1.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:telerik="http://schemas.telerik.com/2008/xaml/presentation"
                Title="MainWindow" mc...

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #006

    - by pinaldave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and have listed them here with additional notes below each one. Let me know which one of the following is your favorite article from memory lane.
    2006
    This was my very first year of blogging, so I was learning something new every day. As I have said many times, blogging was never the intention. I had not really understood what exactly I was getting into when I began blogging in 2006. I never knew that my life was going to change forever once I started blogging. When I look back at all these years, I am happy that we are here together.
    2007
    IT Outsourcing to India – Top 10 Reasons Companies Outsource
    Outsourcing is about trust, collaboration and success. Helping other countries in need has always been the course of mankind; outsourcing is nothing different than that. With information technology and process improvements increasing the complexity, costs and skills required to accomplish routine tasks as well as challenging complex tasks, companies are outsourcing such tasks to providers who have the expertise to perform them at lower costs, with greater value and quality outcome.
    UDF – Remove Duplicate Chars From String
    This was a very interesting function I wrote early in my career. I am still using this function when I have to remove duplicate chars from strings. I have yet to come across a scenario where it does not work, so I keep on using it to this day. Please leave a comment if there is any better solution to this problem.
    FIX : Error : 3702 Cannot drop database because it is currently in use
    This is a very generic error when the DROP DATABASE command is executed and the database is not dropped. The common mistake is that the user keeps a connection open to the database while trying to drop it. The database cannot be dropped if there is any other connection open to it. It is always a good idea to take the database into single user mode before dropping it (a minimal T-SQL sketch appears at the end of this excerpt). Here is the quick tutorial regarding how to bring the database into single user mode: Using T-SQL | Using SSMS.
    2008
    Install SQL Server 2008 – How to Upgrade to SQL Server 2008 – Installation Tutorial
    This was indeed one of the most popular articles for SQL Server 2008. Lots of people wanted to learn how to install SQL Server 2008, but they were facing various issues during installation. I built this tutorial, which became a reference point for many.
    Default Collation of SQL Server 2008
    What is the collation of SQL Server 2008 default installations? I often see this question confusing many experienced developers as well. Well, the answer is in the following image.
    Ahmedabad SQL Server User Group Meeting – November 2008
    User group meetings are fun. Nowadays I am going to user group meetings every week, but there was a time when I was just a beginner on this subject. The community bug bit me years ago when I started to present at the Ahmedabad and Gandhinagar SQL Server User Groups.
    2009
    Validate an XML document in TSQL using XSD
    My friend Jacob Sebastian wrote an excellent article on the subject of XML and XSD. Because of the ‘eXtensible’ nature of XML (eXtensible Markup Language), often there is a requirement to restrict and validate the content of an XML document to a pre-defined structure and values. XSD (XML Schema Definition Language) is the W3C recommended language for describing and validating XML documents.
    SQL Server implements XSD as XML Schema Collections.
    Star Join Query Optimization
    At present, when queries are sent to very large databases, millions of rows are returned. Users also have to endure extended query response times when such queries involve joins across multiple tables. ‘Star Join Query Optimization’ is a new feature of SQL Server 2008 Enterprise Edition. This mechanism uses bitmap filtering to improve the performance of some types of queries by the effective retrieval of rows from fact tables.
    2010
    These puzzles are very interesting and intriguing – there was lots of interest in this subject. If you have free time this weekend, you may want to try them out.
    SQL SERVER – Challenge – Puzzle – Usage of FAST Hint (Solution)
    SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal (Solution)
    SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists (Open)
    Additionally, I had great fun presenting the SQL Server Performance Tuning seminar at fantastic locations in Hyderabad.
    Installing AdventureWorks Database
    This has been the most popular request I have received on my blog. Here is the quick video about how one can install AdventureWorks.
    2011
    Effect of SET NOCOUNT on @@ROWCOUNT
    There was an interesting incident once while I was presenting a session. I wrote some code and suddenly 10 hands went up in the air. This was a bit of a surprise to me, as I did not know why they had all become alert. I assumed that there was something wrong with either the project, the screen or my display. However, the real reason was very interesting – I suggest you read the complete blog post to understand this interesting scenario.
    Error: Deleting Offline Database and Creating the Same Name
    This is very interesting because once a user deletes an offline database, the MDF and LDF files still exist, and if the user attempts to create a new database with the same name it will give an error. I found this very interesting and the blog explains the concept very quickly. Have you ever faced a similar situation?
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
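    For the Error 3702 item mentioned above, here is the minimal T-SQL sketch referenced in the note; the database name is a placeholder:

        -- Close other connections, then drop the database (MyDatabase is a placeholder name)
        ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        GO
        DROP DATABASE MyDatabase;
        GO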

    Read the article

  • Visual Studio Little Wonders: Box Selection

    - by James Michael Hare
    So this week I decided I’d do a Little Wonder of a different kind and focus on an underused IDE improvement: Visual Studio’s Box Selection capability. This is a handy feature that many people still don’t realize was made available in Visual Studio 2010 (and beyond). True, there have been other editors in the past with this capability, but now that it’s fully part of Visual Studio we can enjoy its goodness from within our own IDE. So, for those of you who don’t know what box selection is and what it allows you to do, read on!
    Sometimes, we want to select beyond the horizontal…
    The problem with traditional text selection in many editors is that it is horizontally oriented. Sure, you can select multiple rows, but if you do you will pull in the entire row (at least for the middle rows). Under the old selection scheme, if you wanted to select a portion of text from each row (a “box” of text) you were out of luck. Box selection rectifies this by allowing you to select a box of text bounded by a selection rectangle that you can grow horizontally or vertically. So let’s think of a situation where this comes in handy. Let’s say, for instance, that we are defining an enum in our code that we want to be able to translate into some string values (possibly to be stored in a database, output to screen, etc.). Perhaps such an enum would look like this:

        public enum OrderType
        {
            Buy,       // buy shares of a commodity
            Sell,      // sell shares of a commodity
            Exchange,  // exchange one commodity for another
            Cancel,    // cancel an order for a commodity
        }

    Now, let’s say we are in the process of creating a Dictionary<K,V> to translate our OrderType:

        var translator = new Dictionary<OrderType, string>
        {
            // do I really want to retype all this???
        };

    Yes, the example above is contrived so that we will pull in some garbage if we do a multi-line select. I could select the lines above using the traditional multi-line selection: And then paste them into the translator code, which would result in this:

        var translator = new Dictionary<OrderType, string>
        {
            Buy,       // buy shares of a commodity
            Sell,      // sell shares of a commodity
            Exchange,  // exchange one commodity for another
            Cancel,    // cancel an order for a commodity
        };

    But I have a lot of junk there. Sure, I can manually clear it out, or use some search and replace magic, but if this were hundreds of lines instead of just a few, that would quickly become cumbersome.
    The Box Selection
    Now that we have the ability to create box selections, we can select the box of text to delete! Most of us are familiar with the fact we can drag the mouse (or hold [Shift] and use the arrow keys) to create a selection that can span multiple rows: Box selection, however, actually allows us to select a box instead of the typical horizontal lines: Then we can press the [delete] key and the pesky comments are all gone! You can do this either by holding down [Alt] while you select with your mouse, or by holding down [Alt+Shift] and using the arrow keys on the keyboard to grow the box horizontally or vertically. So now we have:

        var translator = new Dictionary<OrderType, string>
        {
            Buy,
            Sell,
            Exchange,
            Cancel,
        };

    Which is closer, but we still need an opening curly, the string to translate to, and the closing curly and comma. Fortunately, again, this is easy with box selections due to the fact box selection can even work for a zero-width selection!
    That is, hold down [Alt] and either drag down with no width, or hold down [Alt+Shift] and arrow down, and you will define a selection range with no width, essentially a vertical line selection: Notice the faint selection line on the right? So why is this useful? Well, just like with any selected range, we can type and it will replace the selection. What does this mean for box selections? It means that we can insert the same text all the way down on each line! If we have the same selection above, and type a curly and a space, we’d get: Imagine doing this over hundreds of lines and think of what a time saver it could be! Now make a zero-width selection on the other side: And type a curly and a comma, and we’d get: So close! Now finally, imagine we’ve already defined these strings somewhere and want to paste them in:

        const private string BuyText = "Buy Shares";
        const private string SellText = "Sell Shares";
        const private string ExchangeText = "Exchange";
        const private string CancelText = "Cancel";

    We can, again, use our box selection to pull out the constant names: And clicking copy (or [CTRL+C]) and then selecting a range to paste into: And finally clicking paste (or [CTRL+V]) to get the final result:

        var translator = new Dictionary<OrderType, string>
        {
            { Buy, BuyText },
            { Sell, SellText },
            { Exchange, ExchangeText },
            { Cancel, CancelText },
        };

    Sure, this was a contrived example, but I’m sure you’ll agree that it adds myriad possibilities of new ways to copy and paste vertical selections, as well as inserting text across a vertical slice.
    Summary: While box selection has been around in other editors, we finally get to experience it in VS2010 and beyond. It is extremely handy for selecting columns of information for cutting, copying, and pasting. In addition, it allows you to create a zero-width vertical insertion point that can be used to enter the same text across multiple rows. Imagine the time you can save adding repetitive code across multiple lines! Try it, the more you use it, the more you’ll love it! Technorati Tags: C#,CSharp,.NET,Visual Studio,Little Wonders,Box Selection

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones ( Blog | Twitter) – “What the Business Says Is Not What the  Business Wants.” Steve posed the question, “What issues have you had in interacting with the business to get your job done?” My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants because the business doesn't have the words to describe what the business wants accurately enough for IT. Not only do technological-savvy barriers exist, but there are also linguistic barriers between the two worlds. So how do I cope? The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean about these two design options, let's start with a picture: When we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right. When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh. I prefer to take a right-to-left approach. Preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we’re back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it’s a painful project for everyone – not because of me, but because the approach just doesn’t get to what the business wants in the most effective way.) By using a right to left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explaining the decision-making process that relates to these reports. Sometimes I have them explain to me their business processes, or better yet show me their business processes in action because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above. My next step is to start moving leftward. I do this by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube. 
Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!) We can talk about dimensions, hierarchies, levels, members, measures, and so on with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build. So right to left design is not truly moving from the right to the left. But it starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that’s how I get past what the business says to what the business wants.
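    The mock-up step described above is usually nothing more elaborate than a small table with a handful of representative rows for the prototype report to query. A minimal sketch, with entirely hypothetical table and column names chosen for illustration:

        -- Hypothetical mock-up data for a prototype Reporting Services report
        CREATE TABLE dbo.PrototypeSales
        (
            SalesDate  date          NOT NULL,
            Region     nvarchar(50)  NOT NULL,
            Amount     decimal(18,2) NOT NULL
        );
        INSERT INTO dbo.PrototypeSales (SalesDate, Region, Amount)
        VALUES ('2011-01-15', N'North', 1250.00),
               ('2011-01-15', N'South',  980.50),
               ('2011-02-15', N'North', 1410.25);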

    Read the article

  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    There’s a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best Contact Centres are those with high first contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness – and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service. Today there are multiple channels through which customers can communicate with you: phone, web, chat and social, to name a few. But regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let’s take a brief look at some of the most useful tools available and see how they make a difference.
    Contextual Workspaces: The screen where agents do their job. Agents don’t want to be slowed down by busy screens, scrolling through endless tabs or links to find what they’re looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if/then/else logic to display only the information the agent needs for the issue at hand. Assigned at the Profile level, different levels of agent, from a novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures their job is done quickly and efficiently the first time round.
    Agent Scripting: Sometimes, agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents to capture the right level of information and guides the agent step by step, ensuring no mistakes, inconsistencies or missed opportunities.
    Guided Assistance: This is typically used to solve common troubleshooting issues, displaying a series of question and answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides - to quickly craft chat and email responses. What’s more, by publishing guides in answers on support pages, customers can resolve issues themselves, without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert.
    Desktop Workflow: Take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflow you design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes.
    This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you – so win, win, win! I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it’s important to understand why and how your customers contact you in the first place. Once you have this list of “whys” and “hows”, you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort.
    Five Top Tips to take away:
    - Start by working out “why” and “how” customers are contacting you.
    - Implement a clean and relevant agent desktop to support your agents.
    - If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction.
    - Enhance your Knowledgebase with Guides. Agents can access them proactively, and they can be published on your web pages for customers to help themselves.
    - Script any complex, critical or sensitive interactions to ensure consistency and accuracy.
    Desktop optimization is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your Contact Centre successful.
    Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the Oracle RightNow Customer Portal Design and 2-day Dynamic Agent Desktop Administration class. Here you’ll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers’ web experience.
    Useful resources:
    - Review the Best Practice Guide
    - Review the tune-up guide
    About the Author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience Acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class:
    - RightNow Customer Service Administration (3 days)
    - RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days)
    - RightNow Analytics (2 days)
    - RightNow Chat Cloud Service Administration (2 days)

    Read the article

  • SQLAuthority News – #TechEDIn – TechEd India 2012 – Things to Do and Explore for SQL Enthusiast

    - by pinaldave
TechEd India 2012 is just 48 hours away and I have been receiving lots of requests regarding how SQL enthusiasts can maximize the time they’ll be spending at TechEd India 2012. Trust me – TechEd is the biggest Tech Event in India and it is much larger in magnitude than we can imagine. There are plenty of tracks there and lots of things to do. Honestly, we would need to clone ourselves multiple times to completely cover the event. However, I am going to talk about SQL enthusiasts only right now. In this post, I’ll share a few things they can do in this big event. But before I start talking about specific things, there is one thing which is a must – the Keynote. There are amazing Keynotes planned every single day at TechEd India 2012. One should not miss them at all. Social Media I am a big believer in social media. I am everywhere - Twitter, Facebook, LinkedIn and GPlus. I suggest you follow the tag #TechEdIn as well as contribute to the healthy conversation going on right now. You may want to follow a few of the SQL Server enthusiasts who are also attending events like TechEd India. This way, you will know where they are and you can contribute along with them. For a good start, you can follow all the speakers who are presenting at the event. I have linked all the speakers’ names with their respective Twitter accounts. Networking Do not stop meeting new people. Introduce yourself. Catch the speakers after their sessions. Meet other SQL experts and discuss SQL as well as life outside SQL. The best way to start a conversation is to talk about something new. Here are a few lines I usually use when I have to break the ice: SQL Server 2012 has just been released and I have installed it. How many SQL Server sessions are you going to attend? I am going to attend _________ I am a big fan of SQL Server. Sessions Agenda Day 1 T-SQL Rediscovered with SQL Server 2012 - Jacob Sebastian Catapult your data with SQL Server 2012 integration services - Praveen Srivatsa Processing Big Data with SQL Server 2012 and Hadoop - Stephan Forte SQL Server Misconceptions and Resolution – A Practical Perspective – Pinal Dave and Vinod Kumar Securing with ContainedDB in SQL Server 2012 - Pranab Majumdar Agenda Day 2 Hands-on-Lab – Exploring Power View with SQL Server 2012 – Ravi S. Maniam Hands-on-Lab - SQL Server 2012 – AlwaysOn Availability Groups - Amit Ganguli Agenda Day 3 Peeling SQL Server like an Onion: Internals Debunked - Vinod Kumar Speed Up! – Parallel Processes and Unparalleled Performance - Pinal Dave Keeping Your Database Available – ‘AlwaysOn’ - Balmukund Lakhani Lesser Known Facts of SQL Server Backup and Restore - Amit Banerjee Top five reasons why you want SQL Server 2012 BI - Praveen Srivatsa Product Booth and Event Partners There will be a dedicated SQL Server booth at the event. I suggest you stop by there and talk with the SQL Server experts. Additionally there will be booths of various event partners. Stop by their booths and see if they have a product which can help your career. I know that Pluralsight has recently released my course on their online learning site and if that interests you, you can talk about the subject with them. Bring Your Camera Make a list of the people you want to meet. Follow them on Twitter or send them an email and find out their location. Introduce yourself, meet them and have your conversation. Do not forget to take a photo with them and later on, share the photo on social media. 
It would be nice to send an email to everyone with the high-resolution images attached if you have their email addresses. After-hours parties After-hours parties are not always about eating and meeting friends; sometimes, they are very informative. Last time I ended up meeting an SQL expert, and we ended up talking for long hours on various aspects of SQL Server. After 4 hours, we figured out that he lives in the same apartment complex as I do and, having struck up an excellent friendship, he has since become a family friend. So, my advice is that you start to seek out who is meeting where in the evening and see if you can get invited to the parties. Make new friends but never lose mutual respect by doing something silly. Meet Me I will be at the event for three days straight. I will be around the SQL tracks. Please stop by and introduce yourself. I would like to meet you and talk to you. Meeting folks from the Community is very important as we all speak the same language at the end of the day – SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • MySQL Connect 9 Days Away – Optimizer Sessions

    - by Bertrand Matthelié
Following my previous blog post focusing on InnoDB talks at MySQL Connect, let us review today the sessions focusing on the MySQL Optimizer: Saturday, 11.30 am, Room Golden Gate 6: MySQL Optimizer Overview—Olav Sanstå, Oracle The goal of the MySQL optimizer is to take a SQL query as input and produce an optimal execution plan for the query. This session presents an overview of the main phases of the MySQL optimizer and the primary optimizations done to the query. These optimizations are based on a combination of logical transformations and cost-based decisions. Examples of optimization strategies the presentation covers are the main query transformations, the join optimizer, the data access selection strategies, and the range optimizer. For the cost-based optimizations, an overview of the cost model and the data used for doing the cost estimations is included. Saturday, 1.00 pm, Room Golden Gate 6: Overview of New Optimizer Features in MySQL 5.6—Manyi Lu, Oracle Many optimizer features have been added into MySQL 5.6. This session provides an introduction to these great features. Multirange read, index condition pushdown, and batched key access will yield huge performance improvements on large data volumes. Structured explain, explain for update/delete/insert, and optimizer tracing will help users analyze and speed up queries. And last but not least, the session covers subquery optimizations in Release 5.6. Saturday, 7.00 pm, Room Golden Gate 4: BoF: Query Optimizations: What Is New and What Is Coming? This BoF presents common techniques for query optimization, covers what is new in MySQL 5.6, and provides a discussion forum in which attendees can tell the MySQL optimizer team which optimizations they would like to see in the future. Sunday, 1.15 pm, Room Golden Gate 8: Query Performance Comparison of MySQL 5.5 and MySQL 5.6—Øystein Grøvlen, Oracle MySQL Release 5.6 contains several improvements in the query optimizer that create improved performance for complex queries. This presentation looks at how MySQL 5.6 improves the performance of many of the queries in the DBT-3 benchmark. Based on the observed improvements, the presentation discusses what makes the specific queries perform better in Release 5.6. It describes the relevant new optimization techniques and gives examples of the types of queries that will benefit from these techniques. Sunday, 4.15 pm, Room Golden Gate 4: Powerful EXPLAIN in MySQL 5.6—Evgeny Potemkin, Oracle The EXPLAIN command of MySQL has long been a very useful tool for understanding how MySQL will execute a query. Release 5.6 of the MySQL database offers several new additions that give more-detailed information about the query plan and make it easier to understand at the same time. 
This presentation gives an overview of new EXPLAIN features: structured EXPLAIN in JSON format, EXPLAIN for INSERT/UPDATE/DELETE, and optimizer tracing. Examples in the session give insights into how you can take advantage of the new features. They show how these features supplement and relate to each other and to classical EXPLAIN and how and why the MySQL server chooses a particular query plan. You can check out the full program here as well as in the September edition of the MySQL newsletter. Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!
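To give a flavour of the structured EXPLAIN and optimizer tracing features covered in these sessions, here is a minimal sketch you could run against a MySQL 5.6 server; the orders table and its columns are made-up names for illustration:

-- Structured EXPLAIN in JSON format (new in MySQL 5.6)
EXPLAIN FORMAT=JSON
SELECT * FROM orders WHERE customer_id = 42 AND created_at > '2012-01-01'\G

-- EXPLAIN for data-changing statements (also new in 5.6)
EXPLAIN UPDATE orders SET status = 'shipped' WHERE order_id = 1001;

-- Optimizer tracing: capture the decisions behind the chosen plan
SET optimizer_trace = 'enabled=on';
SELECT COUNT(*) FROM orders WHERE created_at > '2012-01-01';
SELECT trace FROM information_schema.OPTIMIZER_TRACE\G
SET optimizer_trace = 'enabled=off';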

    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
    The MySQL Cluster engineering team recently ran a live webinar, available now on-demand demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers. ClusterJ and Cluster JPA ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including - Persistent classes - Relationships - Joins in queries - Lazy loading - Table and index creation from object model By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified with faster development cycles resulting in accelerated time to market for new services. MySQL Cluster offers multiple NoSQL APIs alongside Java: - Memcached for a persistent, high performance, write-scalable Key/Value store, - HTTP/REST via an Apache module - C++ via the NDB API for the lowest absolute latency. Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster’s distributed, shared-nothing architecture with auto-sharding and real time performance makes it a great fit for workloads requiring high volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight. OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A. Q & A Q. Why would I use Connector/J vs. ClusterJ? A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability Q. Can I mix different APIs (ie ClusterJ, Connector/J) in our application for different query types? A. Yes. You can mix and match all of the API types, SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and new data is instantly visible to all of the others. Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes? A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput. Q. 
Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing? A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC. Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction? A. ClusterJ will send identical PK lookups to the same data node. Q. How is distributed query processing handled by MySQL Cluster? A. If the data is split between data nodes then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query. Q. Can I use Foreign Keys with MySQL Cluster? A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release. Summary The NoSQL Java APIs are packaged with MySQL Cluster, available for download here, so feel free to take them for a spin today! Key Resources MySQL Cluster on-line demo  MySQL ClusterJ and JPA On-demand webinar  MySQL ClusterJ and JPA documentation MySQL ClusterJ and JPA whitepaper and tutorial
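To make the ClusterJ programming model described above a little more concrete, here is a minimal, hypothetical sketch; the employee table, its columns and the connection settings are assumptions for illustration, not taken from the webinar:

import java.util.Properties;
import com.mysql.clusterj.ClusterJHelper;
import com.mysql.clusterj.Session;
import com.mysql.clusterj.SessionFactory;
import com.mysql.clusterj.annotation.PersistenceCapable;
import com.mysql.clusterj.annotation.PrimaryKey;

// Domain interface mapped to an assumed Cluster table
// (normally defined in its own file as a public interface).
@PersistenceCapable(table = "employee")
interface Employee {
    @PrimaryKey
    int getId();
    void setId(int id);

    String getName();
    void setName(String name);
}

public class ClusterJExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("com.mysql.clusterj.connectstring", "localhost:1186"); // mgmd host:port (assumed)
        props.put("com.mysql.clusterj.database", "test");

        SessionFactory factory = ClusterJHelper.getSessionFactory(props);
        Session session = factory.getSession();

        // Insert: no SQL and no MySQL Server hop - straight to the data nodes
        Employee e = session.newInstance(Employee.class);
        e.setId(1);
        e.setName("Alice");
        session.persist(e);

        // Primary key lookup
        Employee found = session.find(Employee.class, 1);
        System.out.println(found.getName());

        session.close();
    }
}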

    Read the article

  • SPARC T4-4 Delivers World Record First Result on PeopleSoft Combined Benchmark

    - by Brian
    Oracle's SPARC T4-4 servers running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved World Record 18,000 concurrent users while executing a PeopleSoft Payroll batch job of 500,000 employees in 43.32 minutes and maintaining online users response time at < 2 seconds. This world record is the first to run online and batch workloads concurrently. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running PeopleSoft HCM 9.1 application server and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization on the SPARC T4-2 server in the web tier was 17%, on the SPARC T4-4 server in the application tier it was 59%, and on the SPARC T4-4 server in the database tier was 35% (online and batch) leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database files management with I/O performance equivalent to raw devices. This is the first three tier mixed workload (online and batch) PeopleSoft benchmark also processing PeopleSoft payroll batch workload. Performance Landscape PeopleSoft HR Self-Service and Payroll Benchmark Systems Users Ave Response Search (sec) Ave Response Save (sec) Batch Time (min) Streams SPARC T4-2 (web) SPARC T4-4 (app) SPARC T4-2 (db) 18,000 0.944 0.503 43.32 64 Configuration Summary Application Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz 512 GB memory 5 x 300 GB SAS internal disks 1 x 100 GB and 2 x 300 GB internal SSDs 2 x 10 Gbe HBA Oracle Solaris 11 11/11 PeopleTools 8.52 PeopleSoft HCM 9.1 Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031 Java Platform, Standard Edition Development Kit 6 Update 32 Database Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz 256 GB memory 3 x 300 GB SAS internal disks Oracle Solaris 11 11/11 Oracle Database 11g Release 2 Web Tier Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz 256 GB memory 2 x 300 GB SAS internal disks 1 x 100 GB internal SSD Oracle Solaris 11 11/11 PeopleTools 8.52 Oracle WebLogic Server 10.3.4 Java Platform, Standard Edition Development Kit 6 Update 32 Storage Configuration: 1 x Sun Server X2-4 as a COMSTAR head for data 4 x Intel Xeon X7550, 2.0 GHz 128 GB memory 1 x Sun Storage F5100 Flash Array (80 flash modules) 1 x Sun Storage F5100 Flash Array (40 flash modules) 1 x Sun Fire X4275 as a COMSTAR head for redo logs 12 x 2 TB SAS disks with Niwot Raid controller Benchmark Description This benchmark combines PeopleSoft HCM 9.1 HR Self Service online and PeopleSoft Payroll batch workloads to run on a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HRSS benchmark kit is a Oracle standard benchmark kit run by all platform vendors to measure the performance. It's an OLTP benchmark where DB SQLs are moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators. 
The benchmark consist of 14 scenarios which emulate users performing typical HCM transactions such as viewing paycheck, promoting and hiring employees, updating employee profile and other typical HCM application transactions. All these transactions are well-defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. This benchmark metric is the weighted average response search/save time for all the transactions. The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents large batch runs typical of a ERP environment during a mass update. The benchmark measures five application business process run times for a database representing large organization. They are Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes. The benchmark metrics are taken for each respective benchmark while running simultaneously on the same database back-end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state. Key Points and Best Practices Two Oracle PeopleSoft Domain sets with 200 application servers each on a SPARC T4-4 server were hosted in 2 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (total 120 threads). The default set (1 core from first and third processor socket, total 16 threads) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency by using the physical memory closest to the processors and offload I/O interrupt handling to default set threads, freeing up cpu resources for Application Servers threads and balancing application workload across 240 threads. See Also Oracle PeopleSoft Benchmark White Papers oracle.com SPARC T4-2 Server oracle.com OTN SPARC T4-4 Server oracle.com OTN PeopleSoft Enterprise Human Capital Management oracle.com OTN PeopleSoft Enterprise Human Capital Management (Payroll) oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Disclosure Statement Oracle's PeopleSoft HR and Payroll combined benchmark, www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 09/30/2012.

    Read the article

  • BIP and Mapviewer Mash Up I

    - by Tim Dexter
I was out in Yellowstone last week soaking up various wildlife and a bit too much rain ... good to be back until the 95F heat yesterday. Taking a little break from the Excel templates; the dev folks are planning an Excel patch in the next week or so that will add a mass of new functionality. At the risk of completely misleading you I'm going to hang back a while. What I have written so far holds true and will continue to do so. This week, I have been mostly eating 'mapviewer' ... answers on a post card please, TV show and character. I had a request to show how BIP can call mapviewer and render a dynamic map in an output. So I hit the books and colleagues for some answers. Mapviewer is Oracle's geographic information system, hereafter known as GIS. I use it a lot in our BIEE demos where the interaction with the maps is very impressive. Need a map of California and its congressional districts? I have contacts; Jerry and David with their little black box of maps. Once in my possession I can build highly interactive, clickable maps that allow the user to drill into more information using a very friendly interface driving BIEE content and navigation. But what about maps in BIP output? Bryan Wise, who has written some articles on this blog, did some work a while back with the PL/SQL API interface. The extract for the report called a function that in turn called the mapviewer server, passing a set of mapping requirements; it then returned a URL to a cached copy of that map. Easy to then have BIP render that image. That's still very doable. You need to install a couple of packages and then load the mapviewer java APIs into the database. Then you can write your function against the APIs. A little involved? Maybe, but the database is doing all the heavy lifting for you. I thought I would investigate another method for getting the maps back into BIP. There is a URL interface you can call; this involves building an XML message to be passed to the mapviewer server. It's pretty straightforward to use on the mapviewer side. On the BIP side things are a little more tricksy. After some unexpected messing about I finally got the ubiquitous Hello World map to render using the URL method. Not the most exciting map in the world, lots of ocean and a rather long URL to get it to render. http://127.0.0.1:9704/mapviewer/omserver?xml_request=%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E Notice all of the encoding in the URL string to handle the spaces, quotes, etc. All necessary to get BIP to make the call to the mapviewer server correctly without truncating the URL if it hits a real space rather than a %20. With that in mind, constructing the URL was pretty simple. I'm not going to get into the content of the URL too much, for that you need to bone up on the mapviewer XML API. Check out the home page here and the documentation here. To make the template portable I used the standard CURRENT_SERVER_URL parameter from the BIP server and declared that in my template. <?param@begin:CURRENT_SERVER_URL;'myserver'?> Ignore the 'myserver'; that was just a dummy value for testing. At runtime it will resolve to: 'http://yourserver:port/xmlpserver' Not quite what we need as mapviewer has its own server path; in my case I needed 'mapviewer/omserver?xml_request=' as the fixed path to the mapviewer request URL. 
A little concatenation and substringing later I came up with <?param@begin:mURL;concat(substring($CURRENT_SERVER_URL,1,22),'mapviewer/omserver?xml_request=')?> That's the basic URL that I can then build on. To get the Hello World map I need to add the following: <map_request title="Hello World" datasource="cagis" format="GIF_STREAM"/> Those angle brackets were the source of my headache; BIP's XSLT engine was attempting to process them rather than just pass them. Hok Min to the rescue ... again. I owe him lunch when I get out to HQ again! To solve the problem, I needed to escape all the characters and white space and then use native XSL to assign the string to a parameter. <xsl:param xdofo:ctx="begin" name="pXML">%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E</xsl:param> I did not need to assign it to a parameter, but I felt that if I were going to do anything more serious than Hello World, like plotting points of interest on the map, I would need to dynamically build the URL, so using a set of parameters or variables that I then concatenated would be easier. Now I had the initial server string and the request; all I then did was combine the two using a concat: concat($mURL,$pXML) Embedding that into an image tag: <fo:external-graphic src="url({concat($mURL,$pXML)})"/> and I was done. Notice the curly braces to get the concat evaluated prior to the image call. As you will see next time, building the XML message to go onto the URL can get quite complex but I have used it with some data. Ultimately, it would be easier to build an extension to BIP to handle the data to be plotted; it would then build the XML message, call mapviewer and return a URL to the map image for BIP to render. More on that next time ...

    Read the article

  • Merck Serono Gains Deep Understanding of Product Portfolio Value-Drivers, Risks, and Sales Expectations Through Forecasting Solution

    - by Melissa Centurio Lopes
Merck Serono S.A. is the biopharmaceutical division of Merck KGaA. It offers leading brands in 150 countries to help patients with cancer, multiple sclerosis, infertility, endocrine and metabolic disorders, as well as cardiovascular diseases. Challenges: Establish a better decision-making framework for its complex development portfolio of pharmaceutical products, where single-point estimates or expected averages of portfolio values, portfolio risks, and sales forecasts are insufficient and can be misleading Enable the company to be aware at all times of the range of possible outcomes of technical and market risks and uncertainties, such as the technical uncertainty of whether a product will produce the desired clinical outcomes, or the market-related uncertainty of whether a product will be outperformed by its competitors Solutions to Overcome the Challenges: Used Oracle Crystal Ball to devise a Monte-Carlo-based approach to better analyze and define the values and risks of the company’s development portfolio, laying the groundwork for optimized decision-making Enabled a better understanding of the range of potential values and risks to improve portfolio planning Enabled detailed analysis of the likelihood of favorable or unfavorable outcomes, such as the likelihood of whether Merck Serono can meet its sales targets planned for the next ten years with its existing product portfolio Gained the ability to take into account correlative risks, synergies and project interactions, enabling Merck Serono to better forecast what the company may achieve—for example, that there is a 70% probability of a particular sales target being met Established Monte-Carlo-based analysis using Oracle Crystal Ball as a useful element in decision-making at the board level, as the approach provides a better analysis of values and risks associated with the company’s product portfolio “Oracle Crystal Ball enables us to make Monte Carlo simulations of the potential value and sales of our development portfolio. It is a very powerful tool for gaining a thorough understanding and improved awareness of value drivers, uncertainties, and risks, along with associated probabilities.” – Riccardo Lampariello, Associate Director, Merck Serono S.A. Why Oracle “We chose Oracle Crystal Ball to enable us to perform Monte Carlo analysis, which gives us a deeper understanding and improved awareness of the value drivers, uncertainties and risks of our portfolio of development projects,” said Kimber Hardy, head of valuation and analysis, Merck Serono S.A. 
Click here to read the full version of the customer success story

    Read the article

  • 5 minutes WIF: Make your ASP.NET application use test-STS

    - by DigiMortal
Windows Identity Foundation (WIF) provides us with a simple dummy STS application we can use to develop our system with no actual STS in place. In this posting I will show you how to add STS support to your existing application and how to generate a dummy application that plays the role of your real STS. Word of caution! Although it is relatively easy to build your own STS using WIF tools, I don’t recommend you build one. Identity providers must be highly secure and stable in every way, and this makes development of your own STS a very complex task. If at all possible, use a known STS solution. I assume you have WIF and the WIF SDK installed on your development machine. If you don’t, then here are the links to the download pages: Windows Identity Foundation Windows Identity Foundation SDK Adding STS support to your web application Suppose you have a web application and you want to externalize authentication so your application is able to detect users, send unauthenticated users to a login page and otherwise work exactly as it did before. WIF tools provide you with all you need. 1. Click on your web application project and select “Add STS reference…” from the context menu to start adding or updating STS settings for the web application. 2. Insert your application URI in the application settings window. Note that the web.config file is already selected for you. I inserted the URI that corresponds to my web application address under IIS Express. This URI must exist (later) because otherwise you cannot use the dummy STS service. 3. Select “Create a new STS project in the current solution” and click the Next button. 4. The summary screen gives you information about how your site will use the STS. You can run this wizard again whenever you have to modify STS parameters. Click Finish. If everything goes as expected, a new web site will be added to your solution named YourWebAppName_STS. Dummy STS application The image on the right shows the dummy STS web site. Yes, it is created as a web site project, not as a web application. But it still works nicely and you don’t have to make any modifications there. It just works, but it is a dummy one. Why dummy STS? Some points about the dummy STS web site: The dummy STS is not a template for your own custom STS identity provider. The dummy STS is a very good and simple replacement for a real STS, so you have a more flexible development environment and you don’t have to authenticate yourself against a real service. Of course, you can modify the dummy STS web site to mimic some behavior of your real STS. Pages in dummy STS The dummy STS has two pages – Login.aspx and Default.aspx. Default.aspx is the page that handles requests to the STS service. Login.aspx is the page where authentication takes place. The dummy STS authenticates users using forms-based authentication (FBA). You can insert whatever username you like and the dummy STS still works. You can take a look at the code behind these pages to get some idea of how this dummy service is built. But again – this service is there to simplify your life as a developer. Authenticating users using dummy STS If you are using the development web server that ships with Visual Studio 2010, I suggest you switch over to IIS or IIS Express and make some more configuration changes as described in my previous posting, Making WIF local STS to work with your ASP.NET application. When you are done with these little modifications you are ready to run your application and see how authentication works. If everything is okay then you are redirected to the dummy STS login page when running your web application. 
Adam Carter is provided as the username by default. If you click the submit button you are authenticated and redirected to the application page. In my case it looks like this. Conclusion As you saw, it is very easy to set up your own dummy STS web site for testing purposes. You coded nothing. You just ran a wizard, inserted some data, modified the configuration a little bit and you were done. Later, when your application goes to production, you can run this STS configuration utility again and it generates the correct settings for your real STS service automatically.
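For reference, the settings the wizard writes to web.config look roughly like the following. The URIs below are placeholders for my IIS Express addresses, and the real file contains additional elements (for example the issuer name registry), so treat this as an illustrative sketch rather than a complete configuration:

<microsoft.identityModel>
  <service>
    <audienceUris>
      <add value="http://localhost:12345/" />
    </audienceUris>
    <federatedAuthentication>
      <!-- issuer points to the dummy STS site the wizard generated (placeholder URL) -->
      <wsFederation passiveRedirectEnabled="true"
                    issuer="http://localhost:12346/MyWebApp_STS/"
                    realm="http://localhost:12345/"
                    requireHttps="false" />
      <cookieHandler requireSsl="false" />
    </federatedAuthentication>
  </service>
</microsoft.identityModel>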

    Read the article

  • 7 Reasons for Abandonment in eCommerce and the need for Contextual Support by JP Saunders

    - by Tuula Fai
    Shopper confidence, or more accurately the lack thereof, is the bane of the online retailer. There are a number of questions that influence whether a shopper completes a transaction, and all of those attributes revolve around knowledge. What products are available? What products are on offer? What would be the cost of the transaction? What are my options for delivery? In general, most online businesses do a good job of answering basic questions around the products as the shopper engages in the online journey, navigating the product catalog and working through the checkout process. The needs that are harder to address for the shopper are those that are less concerned with product specifics and more concerned with deciding whether the transaction met their needs and delivered value. A recent study by the Baymard Institute [1] finds that more than 60% of ecommerce site visitors will abandon their shopping cart. The study also identifies seven reasons for abandonment out of the commerce process [2]. Most of those reasons come down to poor usability within the commerce experience. Distractions. External distractions within the shopper’s external environment (TV, Children, Pets, etc.) or distractions on the eCommerce page can drive shopper abandonment. Ideally, the selection and check-out process should be straightforward. One common distraction is to drive the shopper away from the task at hand through pop-ups or re-directs. The shopper engaging with support information in the checkout process should not be directed away from the page to consume support. Though confidence may improve, the distraction also means abandonment may increase. Poor Usability. When the experience gets more complicated, buyer’s remorse can set in. While knowledge drives confidence, a lack of understanding erodes it. Therefore it is important that the commerce process is streamlined. In some cases, the number of clicks to complete a purchase is lengthy and unavoidable. In these situations, it is vital to ensure that the complexity of your experience can be explained with contextual support to avoid abandonment. If you can illustrate the solution to a complex action while the user is engaged in that action and address customer frustrations with your checkout process before they arise, you can decrease abandonment. Fraud. The perception of potential fraud can be enough to deter a buyer. Does your site look credible? Can shoppers trust your brand? Providing answers on the security of your experience and the levels of protection applied to profile information may play as big a role in ensuring the sale, as does the support you provide on the product offerings and purchasing process. Does it fit? If it is a clothing item or oversized furniture item, another common form of abandonment is for the shopper to question whether the item can be worn by the intended user. Providing information on the sizing applied to clothing, physical dimensions, and limitations on delivery/returns of oversized items will also assist the sale. A photo alone of the item will help, as it answers some of those questions, but won’t assuage all customer concerns about sizing and fit. Sometimes the customer doesn’t want to buy. Prospective buyers might be browsing through your catalog to kill time, or just might not have the money to purchase the item! You are unlikely to provide any information in contextual support to increase the likelihood to buy if the shopper already has no intentions of doing so. The customer will still likely abandon. 
Ensuring that any questions are proactively answered as they browse through your site can only increase their likelihood to return and buy at a future date. Can’t Buy. Errors or complexity at checkout can be another major cause of abandonment. Good contextual support is unlikely to help with severe errors caused by technical issues on your site, but it will have a big impact on customers struggling with complexity in the checkout process and needing a question answered prior to completing the sale. Embedded support within the checkout process to patiently explain how to complete a task will help increase conversion rates. Additional Costs. Tax, shipping and other costs or duties can dramatically increase the cost of the purchase and when unexpected, can increase abandonment, particularly if they can’t be adequately explained. Again, a lack of knowledge erodes confidence in the purchase, and cost concerns in particular, erode the perception of your brand’s trustworthiness. Again, providing information on what costs are additive and why they are being levied can decrease the likelihood that the customer will abandon out of the experience. Knowledge drives confidence and confidence drives conversion. If you’d like to understand best practices in providing contextual customer support in eCommerce to provide your shoppers with confidence, download the Oracle Cloud Service and Oracle Commerce - Contextual Support in Commerce White Paper. This white paper discusses the process of adding customer support, including a suggested process for finding where knowledge has the most influence on your shoppers and practical step-by-step illustrations on how contextual self-service can be added to your online commerce experience. Resources: [1] http://baymard.com/checkout-usability [2] http://baymard.com/blog/cart-abandonment

    Read the article

  • PeopleSoft at Alliance 2012 Executive Forum

    - by John Webb
Guest Posting From Rebekah Jackson This week I joined over 4,800 Higher Ed and Public Sector customers and partners in Nashville at our annual Alliance conference. I got lost easily in the hallways of the sprawling Gaylord Opryland Hotel. I carried the resort map with me, and I would still stand for several minutes at a very confusing junction, studying the map and the signage on the walls. Hallways led off in many directions, some with elevators going down here and stairs going up there. When I took a wrong turn I would instantly feel stuck, lose my bearings, and occasionally even have to send out a call for help. It strikes me that the theme for the Executive Forum this year outlines a less tangible but equally disorienting set of challenges that our higher education customers’ CIOs are facing: Making Decisions at the Intersection of Business Value, Strategic Investment, and Enterprise Technology. The forces acting upon higher education institutions today are not neat, straightforward decision points, where one can glance to the right, glance to the left, and then quickly choose the best course of action. The operational, technological, and strategic factors that must be considered are complex, interrelated, messy…and the stakes are high. Michael Horn, co-author of “Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns”, set the tone for the day. He introduced the model of disruptive innovation, which grew out of the research he and his colleagues have done on ‘Why Successful Organizations Fail’. Highly simplified, the pattern he shared is that things start out decentralized, take a leap to extreme centralization, and then experience progressive decentralization. Using computers as an example, we started with a slide rule, then developed the computer which centralized in the form of mainframes, and gradually decentralized to mini-computers, desktop computers, laptops, and now mobile devices. According to Michael, you have more computing power in your cell phone than existed on the planet 60 years ago, or was on the first rocket that went to the moon. Applying this pattern to Higher Education means the introduction of expensive and prestigious private universities, followed by the advent of state schools, then by community colleges, and now online education. Michael shared statistics that indicate 50% of students will be taking at least one online course by 2014…and by some measures, that’s already the case today. The implication is that technology moves from being the backbone of the campus, the IT department’s domain, and pushes into the academic core of the institution. Innovative programs are underway at many schools like Bellevue and BYU Idaho, joined by startups and disruptive new players like the Khan Academy. This presents both threat and opportunity for higher education institutions, and means that IT decisions cannot afford to be disconnected from the institution’s strategic plan. Subsequent sessions explored this theme. Theo Bosnak, from Attain, discussed the model they use for assessing the complete picture of an institution’s financial health. Compounding the issue are the dramatic trends occurring in technology and the vendors that provide it. Ovum analyst Nicole Engelbert shared her insights next and suggested that incremental changes are no longer an option; instead, fundamental changes are affecting the landscape of enterprise technology in higher ed.    
Nicole closed with her recommendation that institutions focus on the trends in higher education with an eye towards the strategic requirements and business value first. Technology then is the enabler.   The last presentation of the day was from Tom Fisher, Sr. Vice President of Cloud Services at Oracle. Tom runs the delivery arm of the Cloud Services group, and shared his thoughts candidly about his experiences with cloud deployments as well as key issues around managing costs and security in cloud deployments. Okay, we’ve covered a lot of ground at this point, from financials planning, business strategy, and cloud computing, with the possibility that half of the institutions in the US might not be around in their current form 10 years from now. Did I forget to mention that was raised in the morning session? Seems a little hard to believe, and yet Michael Horn made a compelling point. Apparently 100 years ago, 8 of the top 10 education institutions in the world were German. Today, the leading German school is ranked somewhere in the 40’s or 50’s. What will the landscape be 100 years from now? Will there be an institution from China, India, or Brazil in the top 10? As Nicole suggested, maybe US parents will be sending their children to schools overseas much sooner, faced with the ever-increasing costs of a US based education. Will corporations begin to view skill-based certification from an online provider as a viable alternative to a 4 year degree from an accredited institution, fundamentally altering the education industry as we know it?

    Read the article

  • WIF, ADFS 2 and WCF&ndash;Part 6: Chaining multiple Token Services

    - by Your DisplayName here!
    See the previous posts first. So far we looked at the (simpler) scenario where a client acquires a token from an identity provider and uses that for authentication against a relying party WCF service. Another common scenario is, that the client first requests a token from an identity provider, and then uses this token to request a new token from a Resource STS or a partner’s federation gateway. This sounds complicated, but is actually very easy to achieve using WIF’s WS-Trust client support. The sequence is like this: Request a token from an identity provider. You use some “bootstrap” credential for that like Windows integrated, UserName or a client certificate. The realm used for this request is the identifier of the Resource STS/federation gateway. Use the resulting token to request a new token from the Resource STS/federation gateway. The realm for this request would be the ultimate service you want to talk to. Use this resulting token to authenticate against the ultimate service. Step 1 is very much the same as the code I have shown in the last post. In the following snippet, I use a client certificate to get a token from my STS: private static SecurityToken GetIdPToken() {     var factory = new WSTrustChannelFactory(         new CertificateWSTrustBinding(SecurityMode.TransportWithMessageCredential,         idpEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;       factory.Credentials.ClientCertificate.SetCertificate(         StoreLocation.CurrentUser,         StoreName.My,         X509FindType.FindBySubjectDistinguishedName,         "CN=Client");       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(rstsRealm),         KeyType = KeyTypes.Symmetric     };       var channel = factory.CreateChannel();     return channel.Issue(rst); } To use a token to request another token is slightly different. First the IssuedTokenWSTrustBinding is used and second the channel factory extension methods are used to send the identity provider token to the Resource STS: private static SecurityToken GetRSTSToken(SecurityToken idpToken) {     var binding = new IssuedTokenWSTrustBinding();     binding.SecurityMode = SecurityMode.TransportWithMessageCredential;       var factory = new WSTrustChannelFactory(         binding,         rstsEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;     factory.Credentials.SupportInteractive = false;       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(svcRealm),         KeyType = KeyTypes.Symmetric     };       factory.ConfigureChannelFactory();     var channel = factory.CreateChannelWithIssuedToken(idpToken);     return channel.Issue(rst); } For this particular case I chose an ADFS endpoint for issued token authentication (see part 1 for more background). Calling the service now works exactly like I described in my last post. You may now wonder if the same thing can be also achieved using configuration only – absolutely. But there are some gotchas. First of all the configuration files becomes quite complex. As we discussed in part 4, the bindings must be nested for WCF to unwind the token call-stack. But in this case svcutil cannot resolve the first hop since it cannot use metadata to inspect the identity provider. This binding must be supplied manually. The other issue is around the value for the realm/appliesTo when requesting a token for the R-STS. 
Using the manual approach you have full control over that parameter and you can simply use the R-STS issuer URI. Using the configuration approach, the exact address of the R-STS endpoint will be used. This means that you may have to register multiple R-STS endpoints in the identity provider. Another issue you will run into is that ADFS only accepts its configured issuer URI as a known realm by default. You’d have to manually add more audience URIs for the specific endpoints using the ADFS PowerShell cmdlets. I prefer the “manual” approach. That’s it. Hope this is useful information.
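As a footnote, here is a rough sketch of step 3 – using the token issued by the Resource STS to call the actual service. The binding choice, the IService contract and the svcEndpoint address are assumptions for illustration, not taken from the samples above:

var binding = new WS2007FederationHttpBinding(
    WSFederationHttpSecurityMode.TransportWithMessageCredential);

var factory = new ChannelFactory<IService>(
    binding,
    new EndpointAddress(svcEndpoint));

factory.ConfigureChannelFactory();               // enable the WIF channel extensions
factory.Credentials.SupportInteractive = false;

var idpToken  = GetIdPToken();                   // step 1: identity provider
var rstsToken = GetRSTSToken(idpToken);          // step 2: Resource STS / federation gateway

var proxy  = factory.CreateChannelWithIssuedToken(rstsToken);   // step 3
var result = proxy.GetClaims();                  // hypothetical service operation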

    Read the article

  • Introducing jLight &ndash; Talking to the DOM using Silverlight and jQuery.

    - by Timmy Kokke
Introduction With the recent news about Silverlight on the Windows Phone and all the great Out-Of-Browser features in the upcoming Silverlight 4 you almost forget Silverlight is a browser plugin. It most often runs in a web browser, and often as a control. In many cases you need to communicate with the browser to get information about textboxes, events or details about the browser itself. To do this you can use JavaScript from Silverlight. Although Silverlight works the same on every browser, JavaScript does not and it won’t be long before problems arise. To overcome differences between browsers I like to use jQuery. The only downside of doing this is that there’s a lot more code needed than you would normally use when you write jQuery in JavaScript. Lately, I had to catch changes in the browser scrollbar and react to the new position. I also had to move the scrollbar when the user dragged around in the Silverlight application. With jQuery it was peanuts to get and set the right attributes, but I found that I had to write a lot of code on the Silverlight side. With a little refactoring I separated out the plumbing into a new class and could call only a few methods on that to get the same thing done. The idea for jLight was born. jLight vs. jQuery The main purpose of jLight is to take the ease of use of jQuery and bring it into Silverlight for handling DOM interaction. For example, to change the text color of a DIV to red, in jQuery you would write: jQuery("div").css("color","red"); In jLight the same thing looks like so: jQuery.Select("div").Css("color","red"); Another example. To change the offset of the last SPAN you could write this in jQuery: jQuery("span:last").offset({left : 10, top : 100}); In jLight this would do the same: jQuery.Select("span:last").Offset(new {left = 10, top = 100 }); Callbacks Nothing too special so far. To get the same thing done using the “normal” HtmlPage.Window.Eval, it wouldn’t require too much effort. But to wire up a handler for events from the browser it’s a whole different story. Normally you need to register ScriptMembers, ScriptableTypes or write some code in JavaScript. jLight takes care of the plumbing and provides you with a simple interface in the same way jQuery would. If you would like to handle the scroll event of the BODY of your html page, you’ll have to bind the event using jQuery and have a function call back to a registered function in Silverlight. In the example below I assume there’s a method “SomeMethod” and it is registered as a ScriptableObject as “RegisteredFromSilverlight” from Silverlight. jQuery("body:first").scroll(function() { var sl = document.getElementById("SilverlightControl"); sl.content.RegisteredFromSilverlight.SomeMethod($(this)); }); Using jLight in Silverlight the code would be even simpler. The registration of RegisteredFromSilverlight as a ScriptableObject can be omitted. Besides that, you don’t have to write any JavaScript or evaluate strings with JavaScript. jQuery.Select("body:first").scroll(SomeMethod); Lambdas Using a lambda in Silverlight can make it even simpler. Each is the jQuery equivalent of foreach in C#. It calls a function for every element found by jQuery. In this example all INPUT elements of the text type are selected. The FromObject method is used to create a jQueryObject from an object containing a ScriptObject. The Val method from jQuery is used to get the value of the INPUT elements. 
jQuery.Select("input:text").Each((element, index) => { textBox1.Text += jQueryObject.FromObject(element).Val(); return null; }); Ajax One thing jQuery is often used for is making Ajax calls. Making calls to external services can be done from Silverlight, but not as easily as with jQuery. As an example I would like to show how jLight does this. Below is the entire code behind. It searches for my name on Twitter and shows the result. This example can be found in the source of the project. The GetJson method passes a Silverlight JsonValue to a callback. This callback instantiates Twit objects and adds them to a ListBox called TwitList. public partial class DemoPage2 : UserControl { public DemoPage2() { InitializeComponent(); jQuery.Load(); } private void CallButton_Click(object sender, RoutedEventArgs e) { jQuery.GetJson("http://search.twitter.com/search.json?lang=en&q=sorskoot", Done); } private void Done(JsonValue arg) { var tweets = new List<Twit>(); foreach (JsonObject result in arg["results"]) { tweets.Add(new Twit() { Text = (string)result["text"], Image = (string)result["profile_image_url"], User = (string)result["from_user"] } ); } TwitList.ItemsSource = tweets; } } public class Twit { public string User { get; set; } public string Image { get; set; } public string Text { get; set; } } Conclusion Although jLight is still in development it can be used already. There isn’t much documentation yet, but if you know jQuery jLight isn’t very hard to use. If you would like to try it, please let me know what you think and report any problems you run into. jLight can be found at: http://jlight.codeplex.com

    Read the article

  • Explaining Explain Plan Notes for Auto DOP

    - by jean-pierre.dijcks
    I've recently gotten some questions around "why do I not see a parallel plan" while Auto DOP is on (I think)...? It is probably worthwhile to quickly go over some of the ways to find out what Auto DOP was thinking. In general, there is no need to go tracing sessions and look under the hood. The thing to start with is to do an explain plan on your statement and to look at the parameter settings on the system. Parameter Settings to Look At First and foremost, make sure that parallel_degree_policy = AUTO. If you have that parameter set to LIMITED you will not have queuing and we will only do the auto magic if your objects are set to default parallel (so no degree specified). Next you want to look at the value of parallel_degree_limit. It is typically set to CPU, which in default settings equates to the Default DOP of the system. If you are testing Auto DOP itself and the impact it has on performance you may want to leave it at this CPU setting. If you are running concurrent statements you may want to give this some more thoughts. See here for more information. In general, do stick with either CPU or with a specific number. For now avoid the IO setting as I've seen some mixed results with that... In 11.2.0.2 you should also check that IO Calibrate has been run. Best to simply do a: SQL> select * from V$IO_CALIBRATION_STATUS; STATUS        CALIBRATION_TIME ------------- ---------------------------------------------------------------- READY         04-JAN-11 10.04.13.104 AM You should see that your IO Calibrate is READY and therefore Auto DOP is ready. In any case, if you did not run the IO Calibrate step you will get the following note in the explain plan: Note -----    - automatic DOP: skipped because of IO calibrate statistics are missing One more note on calibrate_io, if you do not have asynchronous IO enabled you will see:  ERROR at line 1: ORA-56708: Could not find any datafiles with asynchronous i/o capability ORA-06512: at "SYS.DBMS_RMIN", line 463 ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296 ORA-06512: at line 7 While this is changed in some fixes to the calibrate procedure, you should really consider switching asynchronous IO on for your data warehouse. Explain Plan Explanation To see the notes that are shown and explained here (and the above little snippet ) you can use a simple explain plan mechanism. There should  be no need to add +parallel etc. explain plan for <statement> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY()); Auto DOP The note structure displaying why Auto DOP did not work (with the exception noted above on IO Calibrate) is like this: Automatic degree of parallelism is disabled: <reason> These are the reason codes: Parameter -  parallel_degree_policy = manual which will not allow Auto DOP to kick in  Hint - One of the following hints are used NOPARALLEL, PARALLEL(1), PARALLEL(MANUAL) Outline - A SQL outline of an older version (before 11.2) is used SQL property restriction - The statement type does not allow for parallel processing Rule-based mode - Instead of the Cost Based Optimizer the system is using the RBO Recursive SQL statement - The statement type does not allow for parallel processing pq disabled/pdml disabled/pddl disabled - For some reason (alter session?) parallelism is disabled Limited mode but no parallel objects referenced - your parallel_degree_policy = LIMITED and no objects in the statement are decorated with the default PARALLEL degree. 
    Parallel Degree Limited

    When Auto DOP does its work, you may see the cap you imposed with parallel_degree_limit show up in the note section of the explain plan:

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit

    This is an obvious indication that you are being capped for this statement. One quite interesting case happens when you are being capped at DOP = 1. First off, you get a serial plan, and the note changes slightly in that it does not indicate you are being capped (we hope to update the note at some point to be more specific). Right now it looks like this:

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 1

    Dynamic Sampling

    With 11.2.0.2 you will start seeing another interesting change in parallel plans, and since we are talking about the note section here, I figured we would throw this in for good measure. If we deem the parallel (!) statement complex enough, we will apply dynamic sampling to your query. This happens as long as you did not change the default for dynamic sampling on the system. The note looks like this:

        Note
        -----
           - dynamic sampling used for this statement (level=5)
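    As a recap of the checks above, the relevant settings can be reviewed in one pass before reading the Note section. The sketch below queries V$PARAMETER for the three parameters discussed in this post and then runs an example statement through EXPLAIN PLAN; the example query uses the SH sample schema's SALES table, so substitute your own statement:

        -- The init parameters discussed above
        SELECT name, value
        FROM   v$parameter
        WHERE  name IN ('parallel_degree_policy',
                        'parallel_degree_limit',
                        'optimizer_dynamic_sampling');

        -- Generate the plan, then scan its Note section for the Auto DOP
        -- and dynamic sampling messages
        EXPLAIN PLAN FOR
        SELECT cust_id, SUM(amount_sold)
        FROM   sales
        GROUP  BY cust_id;

        SELECT plan_table_output
        FROM   TABLE(DBMS_XPLAN.DISPLAY());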

    Read the article

  • Package Manager Console For More Than Managing Packages

    - by Steve Michelotti
    Like most developers, I prefer not to pick up the mouse if I don't have to. I use the Executor launcher for almost everything, so it's extremely rare for me to ever click the "Start" button in Windows. I also use shortcut keys when I can so I don't have to pick up the mouse. By now most people know that the Package Manager Console that comes with NuGet is PowerShell embedded inside of Visual Studio. It is based on its PowerConsole predecessor, which was the first (that I'm aware of) to embed PowerShell inside of Visual Studio and give access to the Visual Studio automation DTE object. It does this through an inherent $dte variable that is automatically available and ready for use; this variable is also available inside of the NuGet Package Manager Console.

    Adding a new class file to a Visual Studio project is one of those mundane tasks that should be easier. First I have to pick up the mouse. Then I have to right-click where I want the file to go and select "Add -> New Item..." or "Add -> Class...". If you know the Ctrl+Shift+A shortcut, you can avoid the mouse for adding a new item, but you have to manually assign a shortcut for adding a new class. Either way, a dialog then pops up just so I can enter the name of the class I want. Since this is one of the most common tasks developers do, I figured there has to be an easier way, one that avoids picking up the mouse and popping up dialogs. This is where your embedded PowerShell prompt in Visual Studio comes in.

    The first thing you should do is assign a keyboard shortcut so that you can get a PowerShell prompt (i.e., the Package Manager Console) quickly without ever picking up the mouse. I assign "Ctrl+P, Ctrl+M" because "P + M" stands for "Package Manager", so it is easy to remember. At that point I can type this command to add a new class:

        PM> $dte.ItemOperations.AddNewItem("Code\Class", "Foo.cs")

    which results in the class being added. That satisfies my original goal of not having to pick up the mouse and not having the "Add New Item" dialog pop up. However, having to remember that $dte method call is not very user-friendly at all. The best thing to do is to make this a reusable function that always loads when Visual Studio starts up. There is a $profile variable that you can use to figure out where that location is on your machine:

        PM> $profile
        C:\Users\steve.michelotti\Documents\WindowsPowerShell\NuGet_profile.ps1

    If the NuGet_profile.ps1 file does not already exist, you can just create it yourself and place it in that directory. Now you can put a function inside like this:

        function addClass($className)
        {
            if ($className.EndsWith(".cs") -eq $false) {
                $className = $className + ".cs"
            }

            $dte.ItemOperations.AddNewItem("Code\Class", $className)
        }

    Since it's in the NuGet_profile.ps1 file, this function will automatically be available every time Visual Studio starts. Now I can simply do this:

        PM> addClass Foo

    At this point, we have a *very* nice developer experience. All I did to add a new class was "Ctrl+P, Ctrl+M", then "addClass Foo". No mouse, no pop-up dialogs, no complex commands to remember. In fact, PowerShell gives you auto-completion as well: if I type "addc" followed by [TAB], IntelliSense pops up and my custom function appears in the list. I can then type the next letter "c" and [TAB] to auto-complete the command.
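    The same pattern extends to other item templates. As a quick sketch (the "Code\Interface" template path is the usual one for C# projects, but template names can vary by Visual Studio version and project language, so treat it as an assumption), the profile can carry a small family of add* helpers:

        function addInterface($interfaceName)
        {
            if ($interfaceName.EndsWith(".cs") -eq $false) {
                $interfaceName = $interfaceName + ".cs"
            }

            # Same DTE call as addClass, just pointed at the Interface item template
            $dte.ItemOperations.AddNewItem("Code\Interface", $interfaceName)
        }

    With that in place, "addInterface IFoo" behaves just like "addClass Foo".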
    And if that's still too many keystrokes for you, you can create your own PowerShell alias for your function like this:

        PM> Set-Alias addc addClass
        PM> addc Foo

    While all this is very useful, I did run into some issues which prompted me to customize things even further. This command adds the new class file to the currently active directory, and depending on your context, that may not be what you want. For example, by convention all view model objects go in the "Models" folder in an MVC project. So if the current document is in the Controllers folder, the class will be added there, which is not what you want; you want it to always go in the "Models" folder when you are adding a new model to an MVC project. For this situation, I added a new function called "addModel" which looks like this:

         1: function addModel($className)
         2: {
         3:     if ($className.EndsWith(".cs") -eq $false) {
         4:         $className = $className + ".cs"
         5:     }
         6:
         7:     $modelsDir = $dte.ActiveSolutionProjects[0].UniqueName.Replace(".csproj", "") + "\Models"
         8:     $dte.Windows.Item([EnvDTE.Constants]::vsWindowKindSolutionExplorer).Activate()
         9:     $dte.ActiveWindow.Object.GetItem($modelsDir).Select([EnvDTE.vsUISelectionType]::vsUISelectionTypeSelect)
        10:     $dte.ItemOperations.AddNewItem("Code\Class", $className)
        11: }

    First I figure out the path to the Models directory on line #7. Then I activate the Solution Explorer window on line #8. Finally I make sure the Models directory is selected so that my context is correct when I add the new class, and it is added to the Models directory as desired.

    These are just a couple of examples of things you can do with the PowerShell prompt that the Package Manager Console gives you. As developers we spend so much time in Visual Studio, so why not customize it to work the way you want to work? The next time you're not happy about the way Visual Studio makes you do a particular task, automate it! The sky is the limit.
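    One small hardening of addModel is worth considering as a closing sketch (not from the original post, and assuming GetItem raises an error when the path is missing, which may vary by Visual Studio version): wrapping the folder-selection step in try/catch keeps the helper usable in projects that have no Models folder by simply falling back to the current location. The addModelSafe name below is just an example:

        function addModelSafe($className)
        {
            if ($className.EndsWith(".cs") -eq $false) {
                $className = $className + ".cs"
            }

            try {
                # Try to select the Models folder in Solution Explorer first
                $modelsDir = $dte.ActiveSolutionProjects[0].UniqueName.Replace(".csproj", "") + "\Models"
                $dte.Windows.Item([EnvDTE.Constants]::vsWindowKindSolutionExplorer).Activate()
                $dte.ActiveWindow.Object.GetItem($modelsDir).Select([EnvDTE.vsUISelectionType]::vsUISelectionTypeSelect)
            }
            catch {
                # No Models folder (or not a C# MVC project): fall back to the current location
                Write-Warning "Models folder not found; adding $className at the current location."
            }

            $dte.ItemOperations.AddNewItem("Code\Class", $className)
        }

    An alias keeps it short at the prompt, for example: Set-Alias addm addModelSafe.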

    Read the article

< Previous Page | 327 328 329 330 331 332 333 334 335 336 337 338  | Next Page >