Search Results

Search found 14657 results on 587 pages for 'solaris studio'.

Page 10/587 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 proje

    - by CodeSniper
    I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template, which was originally built for Visual Studio 2008. The Problem: "Error Compiling transformation: Metadata file 'dotless.Core' could not be found." In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS Template, this was a showstopper for making it work in Visual Studio 2010. On a side note: the T4CSS template is a sweet little wrapper to allow you to use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool. If you haven't tried DotLessCSS yet, go check it out now! In short, it is a tool that allows you to templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS, which enables rapid changes and a lot of developer flexibility as you evolve your CSS and UI. Back to our regularly scheduled program… Anyhow, this post isn't about DotLessCss, it's about the T4 Templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010, there were quite a few changes to the T4 Template Engine; most were excellent changes, but this one bit me with T4CSS: “Project assemblies are no longer used to resolve template assembly directives.” In VS2008, if you wanted to reference a custom assembly in your T4 Template (.tt file) you would simply right-click on your project, choose Add Reference and select that assembly. Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references: <#@ assembly name="dotless.Core.dll" #> This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what they changed in VS2010. They now basically sandbox the T4 Engine to keep your T4 assemblies separate from your project assemblies. This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project. Who broke the build? Oh, Microsoft did! In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 – thus it's a breaking change. So, how do we make this work in VS 2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 Templates: GAC your assemblies and use a Namespace Reference or Fully Qualified Type Name; use a hard-coded Fully Qualified UNC path; copy the assembly to the Visual Studio "Public Assemblies Folder" and use a Namespace Reference or Fully Qualified Type Name; use or define a Windows Environment Variable to build a Fully Qualified UNC path; or use a Visual Studio Macro to build a Fully Qualified UNC path. Options #1 and #2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, then you would have to adopt one of these approaches. Yakkety Yak, use the GAC! Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain. But, if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as: <#@ assembly name="dotless.Core" #> Hard Coding ain't that hard!
    The other option of using hard-coded paths in Option #2 is pretty impractical in most situations since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment. However, if you want to go that route, simply use the following assembly directive style: <#@ assembly name="C:\Code\Lib\dotless.Core.dll" #> Let's go Public! Option #3, the Visual Studio Public Assemblies Folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio. Think of it like a VS-only GAC. This is likely the best place for something like dotLessCSS and is my preferred solution. However, you will need to either use an installer or a pre-build action to copy the assembly to the right folder location. Normally this is located at: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies Once you have copied your assembly there, you use the type name or namespace syntax again: <#@ assembly name="dotless.Core" #> Save the Environment! Option #4, using a Windows Environment Variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo code, frameworks, and products where you don't have control over the local system. The syntax for including an environment variable in your assembly directive looks like the following, just as you would expect: <#@ assembly name="%mypath%\dotless.Core.dll" #> “mypath” is a Windows environment variable you set up that points to some fully qualified UNC path on your system. In the right situation this can be a great solution, such as one where you use an MSI installer for deployment, or where you have a pre-existing environment variable you can re-use. OMG Macros! Finally, Option #5 is a very nice option if you want to keep your T4 template’s assembly reference local and relative to the project or solution without muddying up your dev environment or GAC with extra deployments. An example looks like this: <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #> In this example, I’m using the “SolutionDir” VS macro so I can reference an assembly in a “/lib” folder at the root of the solution. This is just one of the many macros you can use. If you are familiar with creating Pre/Post-build Event scripts, you can use that dialog to look at all of the different VS macros available. This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build. However, it's still not compatible with Visual Studio 2008, so if you have a T4 Template you want to use with both, then you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually. I’m not sure if T4 Templates support any form of compiler switches like “#if (VS2010)” statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS 2008. Conclusion: As you can see, we went from 3 options with Visual Studio 2008 to 5 options (plus one problem) with Visual Studio 2010. As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our newfound power. Hopefully this all made sense and was helpful to you. If nothing else, I’ll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010.
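    As a quick reference, here is a rough sketch (not from the original post) of what the directives at the top of a VS 2010-friendly T4CSS-style template could look like when using the Option #5 macro approach; the /lib folder and the exact dotless.Core.dll location are assumptions for illustration:

        <#@ template debug="false" hostspecific="true" language="C#" #>
        <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>
        <#@ import namespace="dotless.Core" #>
        <#@ output extension=".css" #>
        <# /* Assumes dotless.Core.dll has been copied to a lib folder at the solution root. */ #>

    The same template header works with the GAC or Public Assemblies approaches; only the assembly name value changes.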
Happy T4 templating, and “May the fourth be with you!”

    Read the article

  • Announcement: Oracle Solaris 11.1

    - by uwes
    On October 3rd at Oracle OpenWorld, John Fowler announced Oracle Solaris 11.1. This first update to Oracle Solaris 11 increases uptime for the Oracle Database: 8x faster database shutdown and start-up; helps DBAs find and resolve I/O issues, increasing performance; and 1.2x Oracle RAC throughput. Oracle Solaris 11.1 also drives up network utilization by extending network virtualization to include Edge Virtual Bridging and Data Center Bridging, which help manage network bandwidth for high-priority services and applications. Learn more and share these valuable tools with your customers to encourage them to deploy Oracle Solaris 11.1: read the press release, the Oracle Solaris 11.1 Data Sheet (PDF), What's New in Solaris 11.1, and the Oracle Solaris 11.1 FAQs. Also join the online web event "Oracle Solaris 11 Innovations for your Data Center" on November 7, 2012.

    Read the article

  • New Study Guide: "Oracle Solaris 11 System Administration"

    - by Harold Green
    A new helpful resource for Solaris 11 exam preparation has just been released. "Oracle Solaris 11 System Administration" by author and educator Bill Calkins covers effective installation and administration of an Oracle Solaris 11 system. In addition to being a valuable, comprehensive study guide, the book also serves as a complete reference guide for the everyday tasks of an Oracle Solaris System Administrator. This book can be a valuable addition to your preparation for the Oracle Solaris 11 Advanced System Administration (1Z1-822) certification exam. This exam, combined with the Oracle Certified Associate, Oracle Solaris 11 System Administrator (OCA) certification and a training requirement, will earn you the Oracle Certified Professional, Oracle Solaris 11 System Administrator (OCP) certification. This valuable credential is designed for Oracle Solaris System Administrators with a strong foundation in the Oracle Solaris 11 Operating System as well as a fundamental understanding of the UNIX operating system, commands and utilities. The certification covers core elements such as configuring network interfaces and managing swap configurations, crash dumps, and core files. The 822 exam is currently in beta at the greatly discounted rate of $50 USD, but the beta period will soon be closing (likely the end of this month, June 2013), so take advantage of the opportunity to be one of the first to hold this new certification. Bill Calkins also recently posted some tips for taking Oracle Solaris 11 certification exams.

    Read the article

  • Solaris 11 SRU / Update relationship explained, and blackout period on delivery of new bug fixes eliminated

    - by user12244672
    Relationship between SRUs and Update releases
    As you may know, Support Repository Updates (SRUs) for Oracle Solaris 11 are released monthly and are available to customers with an appropriate support contract. SRUs primarily deliver bug fixes. They may also deliver low-risk feature enhancements. Solaris Updates are typically released once or twice a year, containing support for new hardware, new software feature enhancements, and all bug fixes available at the time the Update content was finalized. They also contain a significant number of new bug fixes, for issues found internally in Oracle and complex customer bug fixes which require significant "soak" time to ensure their efficacy prior to release.
    Changes to SRU and Update Naming Conventions
    We're changing the naming convention of Update releases from a date-based format such as Oracle Solaris 10 8/11 to a simpler "dot" version numbering, e.g. Oracle Solaris 11.1. Oracle Solaris 11 11/11 (i.e. the initial Oracle Solaris 11 release) may be referred to as 11.0. SRUs will simply be named as "dot.dot" releases, e.g. Oracle Solaris 11.1.1 for SRU1 after Oracle Solaris 11.1. Many Oracle products and infrastructure tools such as BugDB and MOS are tailored towards this "dot.dot" style of release naming, so these name changes align Oracle Solaris with those conventions.
    No Blackout Periods on Bug Fix Releases
    The Oracle Solaris 11 release process has been enhanced to eliminate blackout periods on the delivery of new bug fixes to customers. Previously, Oracle Solaris Updates were a superset of all preceding bug fix deliveries. This made for a very simple update message - that which releases later is always a superset of that which was delivered previously. However, it had a downside. Once the contents of an Update release were frozen prior to release, the release of new bug fixes for customer issues was also frozen to maintain the Update's superset relationship. Since the amount of change allowed into the final internal builds of an Update release is reduced to mitigate risk, this throttling back also impacted the release of new bug fixes to customers. This meant that there was effectively a 6 to 9 week hiatus on the release of new bug fixes prior to the release of each Update. That wasn't good for customers awaiting critical bug fixes. We've eliminated this hiatus on the delivery of new bug fixes in Oracle Solaris 11 by allowing new bug fixes to continue to be released in SRUs even after the contents of the next Update release have been frozen. The release of SRUs will remain contiguous, with the first SRU released after the Update release effectively being a superset of both the Update release and all preceding SRUs*. That is, later SRUs are supersets of the content of previous SRUs. Therefore, the progression path from the final SRUs prior to the Update release is to the first SRU after the Update release, rather than to the Update release itself. The timeline / logical sequence of releases can be shown as follows:

    Updates:  11.0                                     11.1                              11.2    etc.
                  \                                        \                                 \
    SRUs:          11.0.1, 11.0.2, ..., 11.0.12, 11.0.13,   11.1.1, 11.1.2, ..., 11.1.x,      11.2.1, etc.

    For example, for systems with Oracle Solaris 11 11/11 SRU12.4 or later installed, the recommended update path is to Oracle Solaris 11.1.1 (i.e. SRU1 after Solaris 11.1) or later, rather than to the Solaris 11.1 release itself. This will ensure no bug fixes are "lost" during the update. If for any reason you do wish to update from SRU12.4 or later to the 11.1 release itself - for example to update a test system - the instructions to do so are in the SRU12.4 README, https://updates.oracle.com/Orion/Services/download?type=readme&aru=15564533 For systems with Oracle Solaris 11 11/11 SRU11.4 or earlier installed, customers can update to either the 11.1 release or any 11.1 SRU, as both will be supersets of their current version. Please do read the README of the SRU you are updating to, as it will contain important installation instructions which will save you time and effort.
    *Nerdy details: SRUs only contain the latest change delta relative to the Update on which they are based. Their dependencies will, however, effectively pull in the Update content. Customers maintaining a local Repo (e.g. behind their firewall) need to add both the 11.1 content and the relevant SRU content to their Repo, to enable the SRU's dependencies to be resolved. Both will be available from the standard Support Repo and from MOS. This is no different to existing SRUs for Oracle Solaris 11.0, whereby you may often get away with using just the SRU content to update, but the original 11.0 content may be needed in the Repo to resolve dependencies.
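    In pkg(1) terms the recommended path is simply a matter of which 'entire' incorporation you update to. A minimal sketch (a hedged illustration, not from the original post; the exact versions on offer depend on your support repository):

        # pkg list -af entire       --> shows the Update/SRU versions your configured repository offers
        # pkg update --accept       --> updates to the newest 'entire' available, i.e. the first SRU after the Update
        # reboot                    --> a new Boot Environment is created and activated for you

    See the "How to upgrade your Solaris 11 with the latest available SRUs" entry further down this page for a fuller walk-through.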

    Read the article

  • Solaris 11 Customer Maintenance Lifecycle

    - by user12244672
    Hi Folks, Welcome to my new blog, http://blogs.oracle.com/Solaris11Life , which is all about the Customer Maintenance Lifecycle for Image Packaging System (IPS) based Solaris releases, such as Solaris 11. It'll include policies, best practices, clarifications, and lots of other stuff which I hope you'll find useful as you get up to speed with Solaris 11 and IPS.   Let's start with a version of my Solaris 11 Customer Maintenance Lifecycle presentation which I gave at this year's Oracle Open World and at the recent Deutsche Oracle Anwendergruppe (DOAG - German Oracle Users Group) conference in Nürnberg. Some of you may be familiar with my Patch Corner blog, http://blogs.oracle.com/patch , which fulfilled a similar purpose for System V [five] Release 4 (SVR4) based Solaris releases, such as Solaris 10 and below. Since maintaining a Solaris 11 system is quite different to maintaining a Solaris 10 system, I thought it prudent to start this 2nd parallel blog for Solaris 11. Actually, I have an ulterior motive for starting this separate blog.  Since IPS is a single tier packaging architecture, it doesn't have any patches, only package updates.  I've therefore banned the word "patch" in Solaris 11 and introduced a swear box to which my colleagues must contribute a quarter [$0.25] every time they use the word "patch" in a public forum.  From their Oracle Open World presentations, John Fowler owes 50 cents, Liane Preza owes $1.25, and Bart Smaalders owes 75 cents.  Since I'm stinging my colleagues in what could be a lucrative enterprise, I couldn't very well discuss IPS best practices on a blog called "Patch Corner" with a URI of http://blogs.oracle.com/patch.  I simply couldn't afford all those contributions to the "patch" swear box. :) Feel free to let me know what topics you'd like covered - just post a comment in the comment box on the blog. Best Wishes, Gerry.

    Read the article

  • LDoms with Solaris 11

    - by Orgad Kimchi
    Oracle VM Server for SPARC (LDoms) release 2.2 came out on May 24. You can get the software, see the release notes, reference manual, and admin guide here on the Oracle VM for SPARC page. Oracle VM Server for SPARC enables you to create multiple virtual systems on a single physical system. Each virtual system is called a logical domain and runs its own instance of Oracle Solaris 10 or Oracle Solaris 11. The version of the Oracle Solaris OS software that runs on a guest domain is independent of the Oracle Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 10 OS in the primary domain, you can still run the Oracle Solaris 11 OS in a guest domain, and if you run the Oracle Solaris 11 OS in the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain. In addition, starting with the Oracle VM Server for SPARC 2.2 release you can migrate a guest domain even if the source and target machines have different processor types. For example, you can migrate a guest domain from a system with an UltraSPARC T2+ or SPARC T3 CPU to a system with a SPARC T4 CPU. The guest domain on the source and target systems must run Solaris 11 in order to enable cross-CPU migration. In addition, you need to change the cpu-arch property value on the source system. For more information about Oracle VM Server for SPARC (LDoms) with Solaris 11 and Cross CPU Migration refer to the following white paper
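    As a rough sketch only (the domain name ldg1 and the target host name are made up for illustration; check the admin guide and white paper for the exact procedure), enabling cross-CPU migration and then migrating the guest looks something like this, with the property changed while the guest domain is stopped:

        # ldm stop-domain ldg1
        # ldm set-domain cpu-arch=generic ldg1    --> lets the domain run on different SPARC CPU types
        # ldm start-domain ldg1
        # ldm migrate-domain ldg1 target-host     --> migrate the guest to the other machine

    Treat this only as an outline; the white paper mentioned above covers the exact requirements and options.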

    Read the article

  • Oracle Solaris 11.1 Now Available; Learn More About It at November 7th Webcast

    - by Larry Wake
    Oracle Solaris 11.1 is now available for download -- as detailed earlier, this update to Oracle Solaris 11 provides new enhancements for enterprise cloud computing. Security, network, and provisioning advances, in addition to significant new performance features, make an already great release even better. For more information, you can't do better than the upcoming launch event webcast, featuring a live Q&A with Solaris engineering experts and three sessions covering what's new with Oracle Solaris 11.1 and Oracle Solaris Cluster. It's on Wednesday, November 7, at 8 AM PT; register today.

    Read the article

  • Visual Studio 2010 Extension Manager (and the new VS 2010 PowerCommands Extension)

    - by ScottGu
    This is the twenty-third in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. Today’s blog post covers some of the extensibility improvements made in VS 2010 – as well as a cool new "PowerCommands for Visual Studio 2010” extension that Microsoft just released (and which can be downloaded and used for free). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Extensibility in VS 2010 VS 2010 provides a much richer extensibility model than previous releases.  Anyone can build extensions that add, customize, and light-up the Visual Studio 2010 IDE, Code Editors, Project System and associated Designers. VS 2010 Extensions can be created using the new MEF (Managed Extensibility Framework) which is built-into .NET 4.  You can learn more about how to create VS 2010 extensions from this this blog post from the Visual Studio Team Blog. VS 2010 Extension Manager Developers building extensions can distribute them on their own (via their own web-sites or by selling them).  Visual Studio 2010 also now includes a built-in “Extension Manager” within the IDE that makes it much easier for developers to find, download, and enable extensions online.  You can launch the “Extension Manager” by selecting the Tools->Extension Manager menu option: This loads an “Extension Manager” dialog which accesses an “online gallery” at Microsoft, and then populates a list of available extensions that you can optionally download and enable within your copy of Visual Studio: There are already hundreds of cool extensions populated within the online gallery.  You can browse them by category (use the tree-view on the top-left to filter them).  Clicking “download” on any of the extensions will download, install, and enable it. PowerCommands for Visual Studio 2010 This weekend Microsoft released the free PowerCommands for Visual Studio 2010 extension to the online gallery.  You can learn more about it here, and download and install it via the “Extension Manager” above (search for PowerCommands to find it). The PowerCommands download adds dozens of useful commands to Visual Studio 2010.  Below is a screen-shot of just a few of the useful commands that it adds to the Solution Explorer context menus: Below is a list of all the commands included with this weekend’s PowerCommands for Visual Studio 2010 release: Enable/Disable PowerCommands in Options dialog This feature allows you to select which commands to enable in the Visual Studio IDE. Point to the Tools menu, then click Options. Expand the PowerCommands options, then click Commands. Check the commands you would like to enable. Note: All power commands are initially defaulted Enabled. Format document on save / Remove and Sort Usings on save The Format document on save option formats the tabs, spaces, and so on of the document being saved. It is equivalent to pointing to the Edit menu, clicking Advanced, and then clicking Format Document. The Remove and sort usings option removes unused using statements and sorts the remaining using statements in the document being saved. Note: The Remove and sort usings option is only available for C# documents. Format document on save and Remove and sort usings both are initially defaulted OFF. Clear All Panes This command clears all output panes. It can be executed from the button on the toolbar of the Output window. Copy Path This command copies the full path of the currently selected item to the clipboard. 
It can be executed by right-clicking one of these nodes in the Solution Explorer: The solution node; A project node; Any project item node; Any folder. Email CodeSnippet To email the lines of text you select in the code editor, right-click anywhere in the editor and then click Email CodeSnippet. Insert Guid Attribute This command adds a Guid attribute to a selected class. From the code editor, right-click anywhere within the class definition, then click Insert Guid Attribute. Show All Files This command shows the hidden files in all projects displayed in the Solution Explorer when the solution node is selected. It enhances the Show All Files button, which normally shows only the hidden files in the selected project node. Undo Close This command reopens a closed document , returning the cursor to its last position. To reopen the most recently closed document, point to the Edit menu, then click Undo Close. Alternately, you can use the CtrlShiftZ shortcut. To reopen any other recently closed document, point to the View menu, click Other Windows, and then click Undo Close Window. The Undo Close window appears, typically next to the Output window. Double-click any document in the list to reopen it. Collapse Projects This command collapses a project or projects in the Solution Explorer starting from the root selected node. Collapsing a project can increase the readability of the solution. This command can be executed from three different places: solution, solution folders and project nodes respectively. Copy Class This command copies a selected class entire content to the clipboard, renaming the class. This command is normally followed by a Paste Class command, which renames the class to avoid a compilation error. It can be executed from a single project item or a project item with dependent sub items. Paste Class This command pastes a class entire content from the clipboard, renaming the class to avoid a compilation error. This command is normally preceded by a Copy Class command. It can be executed from a project or folder node. Copy References This command copies a reference or set of references to the clipboard. It can be executed from the references node, a single reference node or set of reference nodes. Paste References This command pastes a reference or set of references from the clipboard. It can be executed from different places depending on the type of project. For CSharp projects it can be executed from the references node. For Visual Basic and Website projects it can be executed from the project node. Copy As Project Reference This command copies a project as a project reference to the clipboard. It can be executed from a project node. Edit Project File This command opens the MSBuild project file for a selected project inside Visual Studio. It combines the existing Unload Project and Edit Project commands. Open Containing Folder This command opens a Windows Explorer window pointing to the physical path of a selected item. It can be executed from a project item node Open Command Prompt This command opens a Visual Studio command prompt pointing to the physical path of a selected item. It can be executed from four different places: solution, project, folder and project item nodes respectively. Unload Projects This command unloads all projects in a solution. This can be useful in MSBuild scenarios when multiple projects are being edited. This command can be executed from the solution node. Reload Projects This command reloads all unloaded projects in a solution. 
It can be executed from the solution node. Remove and Sort Usings This command removes and sort using statements for all classes given a project. It is useful, for example, in removing or organizing the using statements generated by a wizard. This command can be executed from a solution node or a single project node. Extract Constant This command creates a constant definition statement for a selected text. Extracting a constant effectively names a literal value, which can improve readability. This command can be executed from the code editor by right-clicking selected text. Clear Recent File List This command clears the Visual Studio recent file list. The Clear Recent File List command brings up a Clear File dialog which allows any or all recent files to be selected. Clear Recent Project List This command clears the Visual Studio recent project list. The Clear Recent Project List command brings up a Clear File dialog which allows any or all recent projects to be selected. Transform Templates This command executes a custom tool with associated text templates items. It can be executed from a DSL project node or a DSL folder node. Close All This command closes all documents. It can be executed from a document tab. How to temporarily disable extensions Extensions provide a great way to make Visual Studio even more powerful, and can help improve your overall productivity.  One thing to keep in mind, though, is that extensions run within the Visual Studio process (DevEnv.exe) and so a bug within an extension can impact both the stability and performance of Visual Studio.  If you ever run into a situation where things seem slower than they should, or if you crash repeatedly, please temporarily disable any installed extensions and see if that fixes the problem.  You can do this for extensions that were installed via the online gallery by re-running the extension manager (using the Tools->Extension Manager menu option) and by selecting the “Installed Extensions” node on the top-left of the dialog – and then by clicking “Disable” on any of the extensions within your installed list: Hope this helps, Scott
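    As an aside to the MEF extensibility point mentioned at the top of this post: a bare-bones sketch of what an MEF-based VS 2010 editor extension component can look like (the class name and the empty body are made up for illustration):

        using System.ComponentModel.Composition;
        using Microsoft.VisualStudio.Text.Editor;
        using Microsoft.VisualStudio.Utilities;

        // Exported via MEF; VS 2010 discovers it and calls it for every editor view it creates.
        [Export(typeof(IWpfTextViewCreationListener))]
        [ContentType("text")]
        [TextViewRole(PredefinedTextViewRoles.Editable)]
        internal sealed class SampleViewCreationListener : IWpfTextViewCreationListener
        {
            public void TextViewCreated(IWpfTextView textView)
            {
                // Light up the editor here, e.g. attach adornments or listeners to textView.
            }
        }

    Packaged in a VSIX, a component like this shows up in the Extension Manager just like the PowerCommands download described above.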

    Read the article

  • Difference in VISUAL STUDIO VERSIONS

    - by karthik ram
    I would like to understand some fundamental VS version differences. What is the difference between Visual Studio 2008 and Visual Studio 2008 Team System? Does Team System 2008 include VS2008, TFS 2008 & Team Explorer 2008? Furthermore, what is the difference between Team System 2008 and Team Suite 2008? I believe there are no more Team Suites or Team Systems in 2010? Does Visual Studio Ultimate 2010 include all Team System components, like TFS 2010 & Team Explorer 2010, so in effect it is like an upgrade of Visual Studio 2008 Team System? Or do TFS 2010 & Team Explorer 2010 need to be installed separately?

    Read the article

  • Stuck with Visual Studio 2010 Beta 2 errors

    - by Clueless
    A ways back I installed Microsoft Visual Studio beta 2. Today I installed Visual Studio 2010 from Dreamspark (the one with the activation key baked in). This resulted in an install with no place to put an activation key (I don't have one anyways), but I still get the following error when I try to start it: Your Microsoft Visual Studio evaluation period has expired. You will need to upgrade Microsoft Visual Studio to the latest release. This didn't go away with a complete uninstall and reinstall, and I have no idea how to fix it. A quick internet search reveals that this is the error message from Beta 2 after the date at which the RTM version went live. I have a feeling that the message is due to some hanging registry entries (I went through the manual uninstall instructions here). Does anyone know how to find and eliminate all vestiges of Beta 2 or something?

    Read the article

  • Solaris 10 invalid ARP requests from 0.0.0.0? Link up/down every hour or 2

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):
    [mymacaddress]/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]
    It's being logged every hour. I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is if I do netstat -a for SCTP:

    SCTP: Local Address      Remote Address     Swind  Send-Q   Rwind  Recv-Q  StrsI/O     State
          0.0.0.0            0.0.0.0                0       0  102400       0    32/32    CLOSED

    But I'm not really sure what that means. It doesn't seem like I can disable SCTP. There are some tunable SCTP parameters but it's not something I'm familiar with. Do I have to add changes to /etc/system? It looks like sctp_heartbeat_interval might be what I need to change? If it makes any difference, I have a few Solaris zones running on this server, each with their own IP address on a virtual interface (eth0:0, eth0:1, etc.). Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?
    Update: This was happening more frequently but now it seems to be happening roughly every hour or every two hours. It's not consistent. I tried setting the link speed and duplex to match the switch port and that seemed to make it stop happening for a few hours, but then it started again.

    Read the article

  • How to upgrade your Solaris 11 with the latest available SRUs ?

    - by jim
    1) Follow the instructions (*) in the document "Updating the Software on Your Oracle Solaris 11 System"
    * see and apply "How to Configure the Oracle Solaris support Repository", then run:
        # pkg update --accept
    Reboot if necessary (see "why" in STEP 3).
    Note: it is possible that your "pkg" package is not up to date:
        # pkg update
        WARNING: pkg(5) appears to be out of date, and should be updated before running update.
        Please update pkg(5) using 'pfexec pkg install pkg:/package/pkg' and then retry the update.
    -> So just upgrade it:
        # pkg install pkg:/package/pkg
    Reboot if necessary (see "why" in STEP 3).
    By the way, note that I am using the DEV tree:
        # pkg publisher -P
        PUBLISHER          TYPE     STATUS   URI
        solaris            origin   online   https://pkg.oracle.com/solaris/dev/

    2) Now our system is ready to jump to the latest SRUs, so check what is available:
        # pkg list -af entire
        NAME (PUBLISHER)   VERSION                    IFO
        entire             0.5.11-0.175.1.0.0.24.2    ---   --> latest version available in the repository!
        entire             0.5.11-0.175.1.0.0.24.0    ---
        entire             0.5.11-0.175.1.0.0.23.0    ---
        entire             0.5.11-0.175.1.0.0.22.1    ---
        entire             0.5.11-0.175.1.0.0.21.0    ---
        entire             0.5.11-0.175.1.0.0.20.0    ---
        entire             0.5.11-0.175.1.0.0.19.0    ---
        entire             0.5.11-0.175.1.0.0.18.0    ---
        entire             0.5.11-0.175.1.0.0.17.0    ---
        entire             0.5.11-0.175.1.0.0.16.0    ---
        entire             0.5.11-0.175.1.0.0.15.1    ---
        entire             0.5.11-0.175.1.0.0.14.0    ---
        entire             0.5.11-0.175.1.0.0.13.0    ---
        entire             0.5.11-0.175.1.0.0.12.0    ---
        entire             0.5.11-0.175.1.0.0.11.0    ---
        entire             0.5.11-0.175.1.0.0.10.0    ---
        entire             0.5.11-0.175.0.11.0.4.1    i--   --> this is the level your OS is at!

    3) Apply the latest SRU (don't forget the --accept parameter):
        # pkg update --accept entire@0.5.11-0.175.1.0.0.24.2
        ------------------------------------------------------------
        Package: pkg://solaris/consolidation/osnet/osnet-incorporation@0.5.11,5.11-0.175.1.0.0.24.2:20120919T184141Z
        License: usr/src/pkg/license_files/lic_OTN
        Oracle Technology Network Developer License Agreement
        Oracle Solaris, Oracle Solaris Cluster and Oracle Solaris Express
        EXPORT CONTROLS
        Selecting the "Accept License Agreement" button is a confirmation of your agreement that you comply,
        now and during the trial term (if applicable), with each of the following statements:
        - You are not a citizen, national, or resident of, and are not under control of, the government of Cuba,
          Iran, Sudan, North Korea, Syria, or any country to which the United States has prohibited export.
        <....>
        Packages to remove:       10
        Packages to install:      38
        Packages to update:      443
        Mediators to change:       2
        Create boot environment: Yes   --> a reboot is required and a new BE will be created for you!
        Create backup boot environment: No

        DOWNLOAD                            PKGS         FILES     XFER (MB)
        Completed                        491/491   23113/23113   504.2/504.2
        PHASE                            ACTIONS
        Removal Phase                11/35910359   (10359/10359)
        Install Phase                15530/15530
        Update Phase                 15543/15543
        PHASE                              ITEMS
        Package State Update Phase       932/932
        Package Cache Update Phase       452/452
        Image State Update Phase             2/2

    A clone of solaris-1 exists and has been updated and activated. On the next boot the Boot Environment solaris-2 will be mounted on '/'. Reboot when ready to switch to this updated BE.

    4) Reboot your system
        # reboot

    5) Resources
    Solaris 11 Express and Solaris 11 Support Repositories Explained [ID 1021281.1]
    Oracle Solaris 11 Release Notes
    Updating the Software on Your Oracle Solaris 11 System
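    One extra tip for the update step above (a hedged aside; the BE names solaris-1/solaris-2 will differ on your system): before rebooting you can double-check which boot environment will come up, using beadm:

        # beadm list                   --> the BE flagged 'R' in the Active column is the one active on reboot
        # beadm activate solaris-2     --> only needed if you want to switch the active-on-reboot BE yourself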

    Read the article

  • Oracle Solaris 11 Best Platform for Oracle Database 12c!

    - by uwes
    Sharpen your knowledge about Oracle Solaris 11 and Oracle Database 12c. Oracle Solaris Product Management has developed a host of content supporting the value of Oracle Database 12c on Oracle Solaris and Oracle Solaris on SPARC.
    OTN web pages: Oracle Solaris 11 and SPARC; Oracle Solaris 11 Best Platform for Oracle Database.
    Collateral: updated datasheet "Oracle Solaris Optimizations for the Oracle Stack"; article "How Oracle Solaris Makes Oracle Database Fast"; screencast "Analyzing Oracle Database I/O Outliers"; blogs: Oracle Solaris Blog, OTN Garage Blog.

    Read the article

  • Using progress dialog in Visual Studio extensions

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2014/05/23/using-progress-dialog-in-visual-studio-extensions.aspx
    As a Visual Studio extension developer you are required to keep the aesthetics of Visual Studio intact when you integrate your extension with Visual Studio. Your extension looks odd when you try to use Windows controls and dialogs in your extensions. The Visual Studio SDK exposes many interfaces so that your extension looks as integrated with Visual Studio as possible. When your extension is performing a long-running task, you have many options to notify the user of progress. One such option is the Visual Studio status bar; I have previously blogged about displaying progress through the Visual Studio status bar. In this blog post I am going to highlight another way, using the IVsThreadedWaitDialog2 interface. One thing to note: as the IVsThreadedWaitDialog2 name suggests, it is a dialog, hence the user cannot perform any action while the dialog is being shown. Visual Studio still appears responsive to the user, even while the task is being performed. Visual Studio itself makes heavy use of this interface. One example is when you are loading a solution (.sln) with a lot of projects: Visual Studio displays a dialog implemented by this interface. So the first step is to get an instance of the IVsThreadedWaitDialog2 interface using the IServiceProvider interface: var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory)) as IVsThreadedWaitDialogFactory; IVsThreadedWaitDialog2 dialog = null; if (dialogFactory != null) { dialogFactory.CreateInstance(out dialog); } If you have the package initialized properly, the out object dialog will not be null and will contain an instance of the IVsThreadedWaitDialog2 interface. Once you have the instance, you call its methods to manage the dialog. I will cover 3 methods in this blog post: StartWaitDialog, EndWaitDialog and HasCanceled. You show the progress dialog as below: if (dialog != null && dialog.StartWaitDialog( "Threaded Wait Dialog", "VS is Busy", "Progress text", null, "Waiting status bar text", 0, false, true) == VSConstants.S_OK) { Thread.Sleep(4000); } As you can see from the method syntax, it is very similar to a standard Windows message box. If you pass true as the 7th parameter to the StartWaitDialog method, you will also see a cancel button allowing the user to cancel the running task. You can react when the user cancels the task as below: bool isCancelled; dialog.HasCanceled(out isCancelled); if (isCancelled) { MessageBox.Show("Cancelled"); } Finally, you can close the dialog when the running task completes: int usercancel; dialog.EndWaitDialog(out usercancel); To help you quickly experience the above code, I have created a sample. It is available for download from GitHub. The sample creates a tool window with two buttons to demo the above explained scenarios. The tool window can be accessed by clicking View –> Other Windows -> ProgressDialogDemo Window
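    Putting the snippets above together, a minimal sketch of the whole flow could look like the following (the caption/message strings, the 4-second sleep standing in for real work, and the _serviceProvider field are illustrative assumptions, not code from the sample):

        // Sketch: wrap a long-running task in a threaded wait dialog.
        var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory))
                            as IVsThreadedWaitDialogFactory;
        IVsThreadedWaitDialog2 dialog = null;
        if (dialogFactory != null)
        {
            dialogFactory.CreateInstance(out dialog);
        }

        if (dialog != null &&
            dialog.StartWaitDialog("Threaded Wait Dialog", "VS is Busy", "Progress text",
                                   null, "Waiting status bar text",
                                   0, true, true) == VSConstants.S_OK)   // true as 7th argument shows a Cancel button
        {
            Thread.Sleep(4000);                    // stand-in for the real long-running work

            bool isCancelled;
            dialog.HasCanceled(out isCancelled);   // did the user press Cancel?

            int userCancel;
            dialog.EndWaitDialog(out userCancel);  // always close the dialog when the work is done

            if (isCancelled)
            {
                MessageBox.Show("Cancelled");
            }
        }

    This simply mirrors the post's own calls in one place; see the GitHub sample mentioned above for the complete, working tool window.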

    Read the article

  • Cluster Node Recovery Using Second Node in Solaris Cluster

    - by Onur Bingul
    Assumptions:
    Node 0a is the cluster node that has crashed and cannot boot anymore.
    Node 0b is the node in the cluster and in production with services active.
    Both nodes have their boot disk mirrored via SDS/SVM.
    We have several options to clone the boot disk from node 0b:
    - make a copy over the network using the ufsdump command piped to ufsrestore
    - make a copy by inserting the disk locally on node 0b and creating a third mirror with SDS
    - make a copy by inserting the disk locally on node 0b and using the dd command
    In this procedure we are going to use the dd command (from my experience this is the best option). Bear in mind that in the examples provided we work on Sun Fire V240 systems, which have SCSI internal disks. In the case of Fibre Channel (FC) internal disks you must pay attention to the unique identifier, or World Wide Name (WWN), associated with each FC disk (in this case take a look at infodoc #40133 in order to recreate the device tree correctly).

    Procedure:
    On node 0b the boot disk is c1t0d0 (c1t1d0 is the mirror) and this is the VTOC:

        * Partition  Tag  Flags      Sector      Count     Sector  Mount Directory
              0       2    00             0    2106432    2106431
              1       3    01       2106432   74630784   76737215
              2       5    00             0  143349312  143349311
              4       7    00      76737216   50340672  127077887
              5       4    00     127077888   14683968  141761855
              6       0    00     141761856    1058304  142820159
              7       0    00     142820160     529152  143349311

    We will insert the new disk on node 0b and it will be seen as c1t2d0.

    1) On node 0b make a copy via dd from disk c1t0d0s2 to disk c1t2d0s2:
        # dd if=/dev/rdsk/c1t0d0s2 of=/dev/rdsk/c1t2d0s2 bs=8192k
    A copy of a 72GB disk will take approximately 45 minutes.
    Note: as an alternative, to make an identical copy of root over the network follow Document ID: 47498, Title: Sun[TM] Cluster 3.0: How to Rebuild a node with Veritas Volume Manager.

    2) Perform an fsck on the c1t2d0 data slices:
        1. fsck -o f /dev/rdsk/c1t2d0s0 (root)
        2. fsck -o f /dev/rdsk/c1t2d0s4 (/var)
        3. fsck -o f /dev/rdsk/c1t2d0s5 (/usr)
        4. fsck -o f /dev/rdsk/c1t2d0s6 (/globaldevices)

    3) Mount the root file system in order to edit the following files and change the node name:
        # mount /dev/dsk/c1t2d0s0 /mnt
    Change the hostname from 0b to 0a:
        # cd /mnt/etc
        # vi hosts
        # vi hostname.bge0
        # vi hostname.bge2
        # vi nodename

    4) Change /mnt/etc/vfstab from the actual:
        /dev/md/dsk/d201   -                   swap            -      no   -
        /dev/md/dsk/d200   /dev/md/rdsk/d200   /               ufs    1    no   -
        /dev/md/dsk/d205   /dev/md/rdsk/d205   /usr            ufs    1    no   logging
        /dev/md/dsk/d204   /dev/md/rdsk/d204   /var            ufs    1    no   logging
        #/dev/md/dsk/d206  /dev/md/rdsk/d206   /globaldevices  ufs    2    yes  logging
        swap               -                   /tmp            tmpfs  -    yes  -
        /dev/md/dsk/d206   /dev/md/rdsk/d206   /global/.devices/node@2  ufs  2  no  global
    to this (unencapsulate the disk from SDS/SVM):
        /dev/dsk/c1t0d0s1  -                    swap            -      no   -
        /dev/dsk/c1t0d0s0  /dev/rdsk/c1t0d0s0   /               ufs    1    no   -
        /dev/dsk/c1t0d0s5  /dev/rdsk/c1t0d0s5   /usr            ufs    1    no   logging
        /dev/dsk/c1t0d0s4  /dev/rdsk/c1t0d0s4   /var            ufs    1    no   logging
        #/dev/md/dsk/d206  /dev/md/rdsk/d206    /globaldevices  ufs    2    yes  logging
        swap               -                    /tmp            tmpfs  -    yes  -
        /dev/dsk/c1t0d0s6  /dev/rdsk/c1t0d0s6   /global/.devices/node@1  ufs  2  no  global
    It is important that the global device partition (slice 6) in the new vfstab points to the physical partition of the disk (in our case slice 6). Be careful with the name you use for the new disk. In this case we define it as c1t0d0 because we will insert it as target 0 in node 0a. But this could be different based on the configuration you are working on.

    5) Remove the following entry from /mnt/etc/system (part of the unencapsulation procedure):
        rootdev:/pseudo/md@0:0,200,blk

    6) Correct the link shared -> ../../global/.devices/node@2/dev/md/shared so that it points to the nodeid of node 0a (in our case nodeid 1):
        # cd /mnt/dev/md
    How it is now (node 0b has nodeid 2):
        lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@2/dev/md/shared
        # rm shared
        # ln -s ../../global/.devices/node@1/dev/md/shared shared
    How it is going to be (with nodeid 1 for node 0a):
        lrwxrwxrwx   1 root     root          42 Mar 10  2005 shared -> ../../global/.devices/node@1/dev/md/shared

    7) Change the nodeid (in our case from 2 to 1):
        # cd /mnt/etc/cluster
        # vi nodeid

    8) Change the file /mnt/etc/path_to_inst in order to reflect the correct nodeid for node 0a:
        # cd /mnt/etc
        # vi path_to_inst
    Change entries from node@2 to node@1 with the vi command ":%s/node@2/node@1/g"

    9) Write the bootblock to the disk... just in case:
        # /usr/sbin/installboot /usr/platform/sun4u/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0
    Now the disk is ready to be inserted in node 0a in order to boot up the node.

    10) Boot up node 0a with the command "boot -sx"... this is because we need to make some changes in the ccr files in order to recreate the did environment.

    11) Modify the cluster ccr:
        # cd /etc/cluster/ccr
        # rm did_instances
        # rm did_instances.bak
        # vi directory - remove the did_instances line.
        # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/directory
        # grep ccr_gennum /etc/cluster/ccr/directory
        ccr_gennum -1
        # /usr/cluster/lib/sc/ccradm -i /etc/cluster/ccr/infrastructure
        # grep ccr_gennum /etc/cluster/ccr/infrastructure
        ccr_gennum -1

    12) Bring node 0a down again to the ok prompt and then issue the command "boot -r".
    Now the node will join the cluster, and with the scstat and metaset commands you can verify functionality. The next step is to encapsulate the boot disk in SDS/SVM and create the mirrors. In our case node 0b has metadevice names starting from d200. For this reason on node 0a we need to create metadevices starting from d100. This is just an example, you can have different names. The important thing to remember is that the boot disk metadevices have different names on each node.

    13) Remove the metadevices pointing to the boot and mirror disks (inherited from node 0b):
        # metaclear -r -f d200
        # metaclear -r -f d201
        # metaclear -r -f d204
        # metaclear -r -f d205
        # metaclear -r -f d206
    Verify with metastat that no metadevices are set for the boot and mirror disks.

    14) Encapsulate the boot disk:
        # metainit -f d110 1 1 c1t0d0s0
        # metainit d100 -m d110
        # metaroot d100

    15) Reboot node 0a.

    16) Create all the metadevices for the remaining slices on the boot disk:
        # metainit -f d111 1 1 c1t0d0s1
        # metainit d101 -m d111
        # metainit -f d114 1 1 c1t0d0s4
        # metainit d104 -m d114
        # metainit -f d115 1 1 c1t0d0s5
        # metainit d105 -m d115
        # metainit -f d116 1 1 c1t0d0s6
        # metainit d106 -m d116

    17) Edit the vfstab in order to specify the metadevices created:
    old:
        /dev/dsk/c1t0d0s1  -                    swap            -      no   -
        /dev/md/dsk/d100   /dev/md/rdsk/d100    /               ufs    1    no   -
        /dev/dsk/c1t0d0s5  /dev/rdsk/c1t0d0s5   /usr            ufs    1    no   logging
        /dev/dsk/c1t0d0s4  /dev/rdsk/c1t0d0s4   /var            ufs    1    no   logging
        #/dev/md/dsk/d206  /dev/md/rdsk/d206    /globaldevices  ufs    2    yes  logging
        swap               -                    /tmp            tmpfs  -    yes  -
        /dev/dsk/c1t0d0s6  /dev/rdsk/c1t0d0s6   /global/.devices/node@1  ufs  2  no  global
    new:
        /dev/md/dsk/d101   -                    swap            -      no   -
        /dev/md/dsk/d100   /dev/md/rdsk/d100    /               ufs    1    no   -
        /dev/md/dsk/d105   /dev/md/rdsk/d105    /usr            ufs    1    no   logging
        /dev/md/dsk/d104   /dev/md/rdsk/d104    /var            ufs    1    no   logging
        #/dev/md/dsk/d106  /dev/md/rdsk/d106    /globaldevices  ufs    2    yes  logging
        swap               -                    /tmp            tmpfs  -    yes  -
        /dev/md/dsk/d106   /dev/md/rdsk/d106    /global/.devices/node@1  ufs  2  no  global

    18) Reboot node 0a in order to check the new SDS/SVM boot configuration.

    19) Label the mirror disk c1t1d0 with the VTOC of boot disk c1t0d0:
        # prtvtoc /dev/dsk/c1t0d0s2 > /var/tmp/VTOC_c1t0d0
        # fmthard -s /var/tmp/VTOC_c1t0d0 /dev/rdsk/c1t1d0s2

    20) Put DB replicas on slice 7 of disk c1t1d0:
        # metadb -a -c 3 /dev/dsk/c1t1d0s7

    21) Create the metadevices for mirror disk c1t1d0 and attach the new mirror sides:
        # metainit d120 1 1 c1t1d0s0
        # metattach d100 d120
        # metainit d121 1 1 c1t1d0s1
        # metattach d101 d121
        # metainit d124 1 1 c1t1d0s4
        # metattach d104 d124
        # metainit d125 1 1 c1t1d0s5
        # metattach d105 d125
        # metainit d126 1 1 c1t1d0s6
        # metattach d106 d126

    Read the article

  • Sample Browser for Visual Studio 2012!

    - by pluginbaby
    Remember the "All-In-One Code Framework", a set of cool code samples available on CodePlex? Well, the same team along with MSDN just released a Sample Browser Visual Studio Extension for Visual Studio 2012 and Visual Studio 2010. The Sample Browser Visual Studio Extension allows developers to search and download 3500+ code samples from within Visual Studio 2012 and Visual Studio 2010. If you are into Windows 8 dev like me, you’ll be happy to know that it already offers samples for WinRT with XAML and C#. Installation: http://visualstudiogallery.msdn.microsoft.com/4934b087-e6cc-44dd-b992-a71f00a2a6df

    Read the article

  • Solaris 11

    - by user9154181
    Oracle has a strict policy about not discussing product features until they appear in shipping product. Now that Solaris 11 is publicly available, it is time to catch up. I will shortly be posting articles on a variety of new developments in the Solaris linkers and related bits:
    64-bit Archives - After 40+ years of Unix, the archive file format has run out of room. The ar and link-editor (ld) commands have been enhanced to allow archives to grow past their previous 32-bit limits.
    Guidance - The link-editor is now willing and able to tell you how to alter your link lines in order to build better objects.
    Stub Objects - This is one of the bigger projects I've undertaken since joining the Solaris group. Stub objects are shared objects, built entirely from mapfiles, that supply the same linking interface as the real object, while containing no code or data. You can link to them, but cannot use them at runtime. It was pretty simple to add this ability to the link-editor, but the changes to the OSnet in order to apply them to building Solaris were massive. I discuss how we came to invent stub objects, how we apply them to build the OSnet in a more parallel and scalable manner, and the follow-on opportunities that have emerged from the new stub proto area we created to hold them.
    The elffile Utility - A new standard Solaris utility, elffile is a variant of the file utility, focused exclusively on linker-related files. elffile is of particular value for examining archives, as it allows you to find out what is inside them without having to first extract the archive members into temporary files.
    This release has been a long time coming. I joined the Solaris group in late 2005, and this will be my first FCS. From a user perspective, Solaris 11 is probably the biggest change to Solaris since Solaris 2.0. Solaris 11 polishes the ground-breaking features from Solaris 10 (DTrace, FMA, ZFS, Zones), and uses them to add a powerful new packaging system, numerous other enhancements and features, along with a huge modernization effort. I'm excited to see it go out into the world. I hope you enjoy using it as much as we did creating it. Software is never done. On to the next one...
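    As a small illustration of the last point (the archive name is made up; the exact report depends on what the archive contains), the new utility can be pointed straight at an archive:

        $ elffile /tmp/libdemo.a      --> reports what kind of ELF objects the archive members are,
                                          with no need to extract them into temporary files first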

    Read the article

  • Basic Puppet installation with Solaris 11.2 beta

    - by user13366125
    At the recent announcement we talked a lot about the Puppet integration. But how do you set it up? I want to show this in this blog entry. The example I'm using is also useful in practice. Due to the extremely low overhead of zones I'm frequently seeing really large numbers of zones on a single system. Changing /etc/hosts or changing an SMF service property on 3 systems is not that hard. Doing it on a system with 500 zones is ... let's say it diplomatically ... a job you give to someone you want to punish. Puppet can help in this case by managing the configuration and easing its distribution. You describe the changes you want to make in a file, or set of files, called a manifest in the Puppet world, and then roll them out to your servers, no matter if they are virtual or physical. A warning first: Puppet is a really, really vast topic. This article is really basic and doesn't go more than toe-deep into the possibilities and capabilities of Puppet. It doesn't try to explain Puppet ... just how you get it up and running and do basic tests. There are many good books on Puppet. Please read one of them, and the concepts and the example will get much clearer immediately. (more)
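    To make the /etc/hosts scenario concrete, here is a minimal sketch of the kind of manifest the post is talking about, using Puppet's built-in host resource (the host name and address below are invented for illustration):

        # hosts.pp - make sure every node (or zone) knows about the backup server
        host { 'backupserver':
          ensure => present,
          ip     => '192.168.10.25',
        }

    Applied locally with 'puppet apply hosts.pp', or served from a puppet master so that all 500 zones pick it up on their next agent run.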

    Read the article

  • Less known Solaris features: pwait

    - by user13366125
    This is a nifty small tool that I'm using quite often in scripts that stop something and then do some tasks afterwards, when I don't want to hassle around with the contract file system. It's not a cool feature, but it's useful and relatively little known. An example: as I wrote long ago, you should never use kill -9, because often the normal kill is intercepted by the application and it starts to do some clean-up tasks first before really stopping the process. So just because kill has returned, it doesn't imply that the process is away. How do you wait for a process to disappear? (more)
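    A tiny sketch of the pattern (myapp is an invented name for whatever you are stopping):

        pid=$(pgrep -x myapp)     # find the process
        kill $pid                 # ask it to shut down (no -9!)
        pwait $pid                # block right here until the process has really exited
        echo "myapp is gone, safe to run the follow-up tasks"

    pwait simply returns once the given process IDs no longer exist, which is exactly the guarantee you want before the script moves on to its follow-up tasks.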

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >