Search Results

Search found 46104 results on 1845 pages for 'run dialog'.


  • Silverlight 4 Tools for VS 2010 and WCF RIA Services Released

    - by ScottGu
    The final release of the Silverlight 4 Tools for Visual Studio 2010 and WCF RIA Services is now available for download.  Download and Install If you already have Visual Studio 2010 installed (or the free Visual Web Developer 2010 Express), then you can install both the Silverlight 4 Tooling Support as well as WCF RIA Services support by downloading and running this setup package (note: please make sure to uninstall the preview release of the Silverlight 4 Tools for VS 2010 if you have previously installed that).  The Silverlight 4 Tools for VS 2010 package extends the Silverlight support built into Visual Studio 2010 and enables support for Silverlight 4 applications as well.  It also installs WCF RIA Services application templates and libraries: Today’s release includes the English edition of the Silverlight 4 Tooling – localized versions will be available next month for other Visual Studio languages as well. Silverlight Tooling Support Visual Studio 2010 includes rich tooling support for building Silverlight and WPF applications. It includes a WYSIWYG designer surface that enables you to easily use controls to construct UI – including the ability to take advantage of layout containers, and apply styles and resources: The VS 2010 designer enables you to leverage the rich data binding support within Silverlight and WPF, and easily wire-up bindings on controls.  The Data Sources window within Silverlight projects can be used to reference POCO objects (plain old CLR objects), WCF Services, WCF RIA Services client proxies or SharePoint Lists.  For example, let’s assume we add a “Person” class like below to our project: We could then add it to the Data Source window which will cause it to show up like below in the IDE: We can optionally customize the default UI control types that are associated for each property on the object.  For example, below we’ll default the BirthDate property to be represented by a “DatePicker” control: And then when we drag/drop the Person type from the Data Sources onto the design-surface it will automatically create UI controls that are bound to the properties of our Person class: VS 2010 allows you to optionally customize each UI binding further by selecting a control, and then right-click on any of its properties within the property-grid and pull up the “Apply Bindings” dialog: This will bring up a floating data-binding dialog that enables you to easily configure things like the binding path on the data source object, specify a format convertor, specify string-format settings, specify how validation errors should be handled, etc: In addition to providing WYSIWYG designer support for WPF and Silverlight applications, VS 2010 also provides rich XAML intellisense and code editing support – enabling a rich source editing environment. Silverlight 4 Tool Enhancements Today’s Silverlight 4 Tooling Release for VS 2010 includes a bunch of nice new features.  These include: Support for Silverlight Out of Browser Applications and Elevated Trust Applications You can open up a Silverlight application’s project properties window and click the “Enable Running Application Out of Browser” checkbox to enable you to install an offline, out of browser, version of your Silverlight 4 application.  
You can then customize a number of “out of browser” settings of your application within Visual Studio: Notice above how you can now indicate that you want to run with elevated trust, with hardware graphics acceleration, as well as customize things like the Window style of the application (allowing you to build a nice polished window style for consumer applications). Support for Implicit Styles and “Go to Value Definition” Support: Silverlight 4 now allows you to define “implicit styles” for your applications.  This allows you to style controls by type (for example: have a default look for all buttons) and avoid you having to explicitly reference styles from each control.  In addition to honoring implicit styles on the designer-surface, VS 2010 also now allows you to right click on any control (or on one of it properties) and choose the “Go to Value Definition…” context menu to jump to the XAML where the style is defined, and from there you can easily navigate onward to any referenced resources.  This makes it much easier to figure out questions like “why is my button red?”: Style Intellisense VS 2010 enables you to easily modify styles you already have in XAML, and now you get intellisense for properties and their values within a style based on the TargetType of the specified control.  For example, below we have a style being set for controls of type “Button” (this is indicated by the “TargetType” property).  Notice how intellisense now automatically shows us properties for the Button control (even within the <Setter> element): Great Video - Watch the Silverlight Designer Features in Action You can see all of the above Silverlight 4 Tools for Visual Studio 2010 features (and some more cool ones I haven’t mentioned) demonstrated in action within this 20 minute Silverlight.TV video on Channel 9: WCF RIA Services Today we also shipped the V1 release of WCF RIA Services.  It is included and automatically installed as part of the Silverlight 4 Tools for Visual Studio 2010 setup. WCF RIA Services makes it much easier to build business applications with Silverlight.  It simplifies the traditional n-tier application pattern by bringing together the ASP.NET and Silverlight platforms using the power of WCF for communication.  WCF RIA Services provides a pattern to write application logic that runs on the mid-tier and controls access to data for queries, changes and custom operations. It also provides end-to-end support for common tasks such as data validation, authentication and authorization based on roles by integrating with Silverlight components on the client and ASP.NET on the mid-tier. Put simply – it makes it much easier to query data stored on a server from a client machine, optionally manipulate/modify the data on the client, and then save it back to the server.  It supports a validation architecture that helps ensure that your data is kept secure and business rules are applied consistently on both the client and middle-tiers. WCF RIA Services uses WCF for communication between the client and the server  It supports both an optimized .NET to .NET binary serialization format, as well as a set of open extensions to the ATOM format known as ODATA and an optional JavaScript Object Notation (JSON) format that can be used by any client. You can hear Nikhil and Dinesh talk a little about WCF RIA Services in this 13 minutes Channel 9 video. 
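
To make the implicit-style support described above a little more concrete, here is a minimal sketch of an application-level implicit style; the property values are purely illustrative:

    <Application.Resources>
        <!-- No x:Key, so Silverlight 4 applies this style to every Button by TargetType -->
        <Style TargetType="Button">
            <Setter Property="Background" Value="Red" />
            <Setter Property="FontSize" Value="14" />
        </Style>
    </Application.Resources>

With a resource like this in place, the “Go to Value Definition…” command on any Button jumps straight to this block – which is exactly how you answer the “why is my button red?” question above.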
Putting it all Together – the Silverlight 4 Training Kit
Check out the Silverlight 4 Training Kit to learn more about how to build business applications with Silverlight 4, Visual Studio 2010 and WCF RIA Services. The training kit includes 8 modules, 25 videos, and several hands-on labs that explain Silverlight 4 and WCF RIA Services concepts and walk you through building an end-to-end application with them. The training kit is available for free and is a great way to get started.
Summary
I’m really excited about today’s releases – they complete the Silverlight development story and deliver a great end-to-end runtime + tooling story for building applications. All of the above features are available for use both in VS 2010 as well as the free Visual Web Developer 2010 Express Edition – making it really easy to get started building great solutions.
Hope this helps,
Scott
P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
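
Going back to the WCF RIA Services overview above, a minimal domain service for the Person class might look like the sketch below. It assumes the RIA Services V1 namespaces and uses a hypothetical in-memory list purely for illustration; a real application would typically expose an Entity Framework or LINQ to SQL model instead:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;
    using System.Linq;
    using System.ServiceModel.DomainServices.Hosting;
    using System.ServiceModel.DomainServices.Server;

    public class Person
    {
        [Key]                                   // RIA Services entities need a key member
        public int Id { get; set; }
        public string Name { get; set; }
        public DateTime BirthDate { get; set; }
    }

    [EnableClientAccess]                        // generate the Silverlight client proxy
    public class PersonDomainService : DomainService
    {
        // Hypothetical in-memory data source, for illustration only
        private static readonly List<Person> People = new List<Person>
        {
            new Person { Id = 1, Name = "Example Person", BirthDate = new DateTime(1980, 1, 1) }
        };

        // Query method: the Silverlight client can call this and compose filters/sorts on it
        public IQueryable<Person> GetPeople()
        {
            return People.AsQueryable();
        }
    }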


  • Converting a Visual Studio 2003 Web Project to a Visual Studio 2008 Web Application Project

    - by navaneeth
    This walkthrough describes how to convert a Visual Studio .NET 2002 or Visual Studio .NET 2003 Web project to a Visual Studio 2008 Web application project. The Visual Studio 2008 Web application project model is like the Visual Studio 2005 Web application project model. Therefore, the conversion processes are similar. For more information about Web application projects, see ASP.NET Web Application Projects. You can also convert from a Visual Studio .NET Web project to a Visual Studio 2008 Web site project. However, conversion to a Web application project is the approach that is supported, and gives you the convenience of tools to help with the conversion. For example, when you convert to a Visual Studio 2008 Web application project, you can use the Visual Studio Conversion Wizard to automate part of the process. For information about how to convert a Visual Studio .NET Web project to a Visual Studio 2008 Web site, see Common Web Project Conversion Issues and Solutions. There are two parts involved in converting a Visual Studio 2002 or 2003 Web project to a Visual Studio 2008 Web application project. The parts are as follows: Converting the project. You can use the Visual Studio Conversion Wizard for the initial conversion of the project and Web.config files. You can later use the Convert To Web Application command to update the project's files and structure. Upgrading the .NET Framework version of the project. You must upgrade the project's .NET Framework version to either .NET Framework 2.0 SP1 or to .NET Framework 3.5. This .NET Framework version upgrade is required because Visual Studio 2008 cannot target earlier versions of the .NET Framework. You can perform this upgrade during the project conversion, by using the Conversion Wizard. Alternatively, you can upgrade the .NET Framework version after you convert the project.   NoteYou can change a project's .NET Framework version manually. To do so, in Visual Studio open the property pages for the project, click the Application tab, and then select a new version from the Target Framework list. This walkthrough illustrates the following tasks: Opening the Visual Studio .NET project in Visual Studio 2008 and creating a backup of the project files. Upgrading the .NET Framework version that the project targets. Converting the project file and the Web.config file. Converting ASP.NET code files. Testing the converted project. Prerequisites    To complete this walkthrough, you will need: Visual Studio 2008. A Web site project that was created in Visual Studio .NET version 2002 or 2003 that compiles and runs without errors. Converting the Project and Upgrading the .NET Framework Version    To begin, you open the project in Visual Studio 2008, which starts the conversion. It offers you an opportunity to back up the project before converting it. NoteIt is strongly recommended that you back up the project. The conversion works on the original project files, which cannot be recovered if the conversion is not successful.To convert the project and back up the files In Visual Studio 2008, in the File menu, click Open and then click Project. The Open Project dialog box is displayed. Browse to the folder that contains the project or solution file for the Visual Studio .NET project, select the file, and then click Open. NoteMake sure that you open the project by using the Open Project command. 
If you use the Open Web Site command, the project will be converted to the Web site project format.The Conversion Wizard opens and prompts you to create a backup before converting the project. To create the backup, click Yes. Click Browse, select the folder in which the backup should be created, and then click Next. Click Finish. The backup starts. NoteThere might be significant delays as the Conversion Wizard copies files, with no updates or progress indicated. Wait until the process finishes before you continue.When the conversion finishes, the wizard prompts you to upgrade the targeted version of the .NET Framework for the project. To upgrade to the .NET Framework 3.5, click Yes. To upgrade the project to target the .NET Framework 2.0 SP1, click No. It is recommended that you leave the check box selected that asks whether you want to upgrade all Webs in the solution. If you upgrade to .NET Framework 3.5, the project's Web.config file is modified at the same time as the project file. When the upgrade and conversion have finished, a message is displayed that indicates that you have completed the first step in converting your project. Click OK. The wizard displays status information about the conversion. Click Close. Testing the Converted Project    After the conversion has finished, you can test the project to make sure that it runs. This will also help you identify code in the project that must be updated. To verify that the project runs If you know about changes that are required for the code to run with the new version of the .NET Framework, make those changes. In the Build menu, click Build. Any missing references or other compilation issues in the project are displayed in the Error List window. The most likely issues are missing assembly references or issues with dynamically generated types. In Solution Explorer, right-click the Web page that will be used to launch the application, and then click Set as Start Page. On the Debug menu, click Start Debugging. If debugging is not enabled, the Debugging Not Enabled dialog box is displayed. Select the option to add a Web.config file that has debugging enabled, and then click OK. Verify that the converted project runs as expected. Do not continue with the conversion process until all build and run-time errors are resolved. Converting ASP.NET Code Files    ASP.NET Web page files and user-control files in Visual Studio 2008 that use the code-behind model have an associated designer file. The files that you just converted will have an associated code-behind file, but no designer file. Therefore, the next step is to generate designer files. NoteOnly ASP.NET Web pages and user controls that have their code in a separate code file require a separate designer file. For pages that have inline code and no associated code file, no designer file will be generated.To convert ASP.NET code files In Solution Explorer, right-click the project node, and then click Convert To Web Application. The files are converted. Verify that the converted code files have a code file and a designer file. Build and run the project to verify the results of the conversion.
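
For reference, the Target Framework setting mentioned in the note above corresponds to a single MSBuild property in the converted project file. A minimal sketch of the relevant fragment (shown here targeting the .NET Framework 3.5; everything else in the .csproj/.vbproj is omitted):

    <PropertyGroup>
      <!-- Written by the Application tab's Target Framework drop-down -->
      <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
    </PropertyGroup>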


  • OBIEE 11.1.1 - OBIEE 11g Full Sample App on VMware Player 4

    - by user809526
    The Full Sample App is designed to run on Virtual Box. Let's describe how to run it on VMware Player 4. Open Virtualization Format Tool http://communities.vmware.com/community/vmtn/server/vsphere/automationtools/ovf VMware Player Documentation https://www.vmware.com/support/pubs/player_pubs.html Full Sample App Deployment Guide sampleapp107-vbimage-deployguide-453583.pdf INSTALL VMplayer 4.0.0 as root LINUX # sh VMware-Player-4.0.0-471780.x86_64.bundle (A new VM is not needed and can be deleted later after that installation is completed. "I will install OS later" - blank hard disk Guest: linux, Red Hat Enterprise Linux 5-64bits => rename to RHEL target: eg /a/root/vmware/ Max disk size: 5 GB (will be deleted) Disk: Single file Dummy RHEL.vmk, RHEL.vmdk is generated. "Delete VM from Disk" in VM Player.) Copy Full Sample App files to target /a/root/vmware/ WARNING: Select a target eg /a/root/vmware/ with lots of free space, 95 GB. Check checksums (md5sum). Please do it! ff85c7eacf7fb8c382e98da875e879e1  Sampleapp_v107_GA-disk1.vmdk 973258cb3c7d64ab03ae853278cf2233  Sampleapp_v107_GA-disk2.vmdk e576be16e36d810479736bfb15d050f5  Sampleapp_v107_GA-disk3.vmdk 3455df77279e53e07d5fee6712f1597d  Sampleapp_v107_GA-disk4.vmdk OVF FILE   Sampleapp_v107_GA.ovf CONVERSION $ cd /a/root/vmware/ LINUX $ /usr/bin/ovftool -tt=ovf --compress=1 -dm=monolithicSparse Sampleapp_v107_GA.ovf .  [dot] Opening OVF source: Sampleapp_v107_GA.ovf Warning: No manifest file Opening OVF target: . Writing OVF package: Sampleapp_v107_GA/Sampleapp_v107_GA.ovf Disk Transfer Completed                   Completed successfully WINDOWS CYGWIN $ /cygdrive/c/VMwarePlayer/OVFTool/ovftool.exe -tt=ovf --compress=1 -dm=monolithicSparse Sampleapp_v107_GA.ovf .  [dot] Opening OVF source: Sampleapp_v107_GA.ovf Warning: No manifest file Opening OVF target: . Writing OVF package: Sampleapp_v107_GA\Sampleapp_v107_GA.ovf Disk Transfer Completed Completed successfully /a/root/vmware$ du -sk 49095328    .   [50 GB already occupied] IMPORT - First start of VM Player 4: /usr/bin/vmplayer "Open a Virtual Machine" Browse to /a/root/vmware/Sampleapp_v107_GA/Sampleapp_v107_GA.ovf [the new generated .ovf] "Import Virtual Machine" dialog Name: Sampleapp_v107_GA Location: /a/root/vmware/Sampleapp_v107_GA/storage [was /home/tdubois/vmware/Sampleapp_v107_GA] "Import" "The import failed because /a/root/vmware/Sampleapp_v107_GA/Sampleapp_v107_GA.ovf did not pass OVF specification conformance or virtual hardware compliance checks. Click Retry to relax OVF specification..." "Retry" ; Long import /a/root/vmware/Sampleapp_v107_GA/storage/Sampleapp_v107_GA.vmx and new .vmdk files are created. /a/root/vmware$ du -sk 95551384    .   [95 GB occupied] Full Sample App GUEST SETUP "Edit VM settings" min 3GB, 2+ processors, network bridged. For OBIEE + Essbase testing use 8 GB RAM hardware. At first time lauch of Full Sample App, leave OEL booting for several minutes undisturbed. Problem with X display server may occur [/usr/bin/Xorg ; man Xorg]. "Failed to start the X server.... Would you like to view the X server output to diagnose the problem?" "No" [tab key] "Would you like to try to configure the X server? Note that you will need the root password for this." "Yes" [oracle] X Display Settings 800x600 saved in /etc/X11/xorg.conf "Trying to restart the X server" Login as root/oracle in guest OEL. In guest OEL, Virtual Machine > Install VMware Tools... 
Extract archive VMwareTools-8.8.0-471268.tar.gz all files in writable local directory eg /root In Terminal run Perl script # cd /root/vmware-tools-distrib ; ./vmware-install.pl [keep all default answers] Set keyboard layout System > Preferences > Keyboard > Layouts Restart X server eg System > Log Out root... , relogin Modify X resolution System > Preferences > Screen Resolution Full Sample App OEL login: oracle/oracle ; root/oracle [default US keyboard layout] Credentials are described in the 'sampleapp107-vbimage-deployguide-453583.pdf' The large files in /a/root/vmware/ /a/root/vmware/Sampleapp_v107_GA/ may be removed. FAILURE REMARK: Adding the 4 original Sampleapp_v107_GA-disks[1234].vmdk to VM Player does NOT work as described below. "Edit VM settings" "Remove" "Hard Disk" "Edit VM settings" "Add" "Hard Disk" "Next" "Use an existing virtual disk" "Browse" "Finish" "Keep existing format" "Ok" for each 4 disks settings one by one. Start VM Player 4. "You do not have write access to a partition" Allow all Sampleapp_v107 OEL linux launches. OEL stalls silently after 'Checking filesystems'.
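
To script the "check checksums" step near the top of this procedure on Linux, something like the following works (a sketch; the .md5 file name is arbitrary, and the hashes are the ones listed above):

    $ cd /a/root/vmware/
    $ cat > sampleapp_v107.md5 << 'EOF'
    ff85c7eacf7fb8c382e98da875e879e1  Sampleapp_v107_GA-disk1.vmdk
    973258cb3c7d64ab03ae853278cf2233  Sampleapp_v107_GA-disk2.vmdk
    e576be16e36d810479736bfb15d050f5  Sampleapp_v107_GA-disk3.vmdk
    3455df77279e53e07d5fee6712f1597d  Sampleapp_v107_GA-disk4.vmdk
    EOF
    $ md5sum -c sampleapp_v107.md5    # each disk file should report: OK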


  • Oracle Announces New Oracle Exastack Program for ISV Partners

    - by pfolgado
    Oracle Exastack Program Enables ISV Partners to Leverage a Scalable, Integrated Infrastructure to Deliver Their Applications Tuned and Optimized for High-Performance News Facts Enabling Independent Software Vendors (ISVs) and other members of Oracle Partner Network (OPN) to rapidly build and deliver faster, more reliable applications to end customers, Oracle today introduced Oracle Exastack Ready, available now, and Oracle Exastack Optimized, available in fall 2011 through OPN. The Oracle Exastack Program focuses on helping ISVs run their solutions on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud -- integrated systems in which the software and hardware are engineered to work together. These products provide partners with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments. Leveraging the new Oracle Exastack Program in which applications can qualify as Oracle Exastack Ready or Oracle Exastack Optimized, partners can use available OPN resources to optimize their applications to run faster and more reliably -- providing increased performance to their end users. By deploying their applications on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud, ISVs can reduce the cost, time and support complexities typically associated with building and maintaining a disparate application infrastructure -- enabling them to focus more on their core competencies, accelerating innovation and delivering superior value to customers. After qualifying their applications as Oracle Exastack Ready, partners can note to customers that their applications run on and support Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud component products including Oracle Solaris, Oracle Linux, Oracle Database and Oracle WebLogic Server. Customers can be confident when choosing a partner's Oracle Exastack Optimized application, knowing it has been tuned by the OPN member on Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud with a goal of delivering optimum speed, scalability and reliability. Partners participating in the Oracle Exastack Program can also leverage their Oracle Exastack Ready and Oracle Exastack Optimized applications to advance to Platinum or Diamond level in OPN. Oracle Exastack Programs Provide ISVs a Reliable, High-Performance Application Infrastructure With the Oracle Exastack Program ISVs have several options to qualify and tune their applications with Oracle Exastack, including: Oracle Exastack Ready: Oracle Exastack Ready provides qualifying partners with specific branding and promotional benefits based on their adoption of Oracle products. If a partner application supports the latest major release of one of these products, the partner may use the corresponding logo with their product marketing materials: Oracle Solaris Ready, Oracle Linux Ready, Oracle Database Ready, and Oracle WebLogic Ready. Oracle Exastack Ready is available to OPN members at the Gold level or above. Additionally, OPN members participating in the program can leverage their Oracle Exastack Ready applications toward advancement to the Platinum or Diamond levels in the OPN Specialized program and toward achieving Oracle Exastack Optimized status. 
Oracle Exastack Optimized: When available, for OPN members at the Gold level or above, Oracle Exastack Optimized will provide direct access to Oracle technical resources and dedicated Oracle Exastack lab environments so OPN members can test and tune their applications to deliver optimal performance and scalability on Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud. Oracle Exastack Optimized will provide OPN members with specific branding and promotional benefits including the use of the Oracle Exastack Optimized logo. OPN members participating in the program will also be able to leverage their Oracle Exastack Optimized applications toward advancement to Platinum or Diamond level in the OPN Specialized program. Oracle Exastack Labs and ISV Enablement: Dedicated Oracle Exastack lab environments and related technical enablement resources (including Guided Learning Paths and Boot Camps) will be available through OPN for OPN members to further their knowledge of Oracle Exastack offerings, and qualify their applications for Oracle Exastack Optimized or Oracle Exastack Ready. Oracle Exastack labs will be available to qualifying OPN members at the Gold level or above. Partners are eligible to participate in the Oracle Exastack Ready program immediately, which will help them meet the requirements to attain Oracle Exastack Optimized status in the future. Guidelines for Oracle Exastack Optimized, as well as Oracle Exastack Labs will be available in fall 2011. Supporting Quotes "In order to effectively differentiate their software applications in the marketplace, ISVs need to rapidly deliver new capabilities and performance improvements," said Judson Althoff, Oracle senior vice president of Worldwide Alliances and Channels and Embedded Sales. "With Oracle Exastack, ISVs have the ability to optimize and deploy their applications with a complete, integrated and cloud-ready infrastructure that will help them accelerate innovation, unlock new features and functionality, and deliver superior value to customers." "We view performance as absolutely critical and a key differentiator," said Tom Stock, SVP of Product Management, GoldenSource. "As a leading provider of enterprise data management solutions for securities and investment management firms, with Oracle Exadata Database Machine, we see an opportunity to notably improve data processing performance -- providing high quality 'golden copy' data in a reduced timeframe. Achieving Oracle Exastack Optimized status will be a stamp of approval that our solution will provide the performance and scalability that our customers demand." "As a leading provider of Revenue Intelligence solutions for telecommunications, media and entertainment service providers, our customers continually demand more readily accessible, enriched and pre-analyzed information to minimize their financial risks and maximize their margins," said Alon Aginsky, President and CEO of cVidya Networks. "Oracle Exastack enables our solutions to deliver the power, infrastructure, and innovation required to transform our customers' business operations and stay ahead of the game." Supporting Resources Oracle PartnerNetwork (OPN) Oracle Exastack Oracle Exastack Datasheet Judson Althoff blog Connect with the Oracle Partner community at OPN on Facebook, OPN on LinkedIn, OPN on YouTube, or OPN on Twitter


  • Install Ubuntu Netbook Edition with Wubi Installer

    - by Matthew Guay
    Ubuntu is one of the most popular versions of Linux, and their Netbook Remix edition is especially attractive for netbook owners.  Here we’ll look at how you can easily try out Ubuntu on your netbook without a CD/DVD drive. Netbooks, along with the growing number of thin, full powered laptops, lack a CD/DVD drive.  Installing software isn’t much of a problem since most programs, whether free or for-pay, are available for download.  Operating systems, however, are usually installed from a disk.  You can easily install Windows 7 from a flash drive with our tutorial, but installing Ubuntu from a USB flash drive is more complicated.  However, using Wubi, a Windows installer for Ubuntu, you can easily install it directly on your netbook and even uninstall it with only a few clicks. Getting Started Download and run the Wubi installer for Ubuntu (link below).  In the installer, select the drive you where you wish to install Ubuntu, the size of the installation (this is the amount dedicated to Ubuntu; under 20Gb should be fine), language, username, and desired password.  Also, from the Desktop environment menu, select Ubuntu Netbook to install the netbook edition.  Click Install when your settings are correct. Wubi will automatically download the selected version of Ubuntu and install it on your computer. Windows Firewall may ask if you want to unblock Wubi; select your network and click Allow access. The download will take around an hour on broadband, depending on your internet connection speed.  Once the download is completed, it will automatically install to your computer.  If you’d prefer to have everything downloaded before you start the install, download the ISO of Ubuntu Netbook edition (link below) and save it in the same folder as Wubi. Then, when you run Wubi, select the netbook edition as before and click Install.  Wubi will verify that your download is valid, and will then proceed to install from the downloaded ISO.  This install will only take about 10 minutes. Once the install is finished you will be asked to reboot your computer.  Save anything else you’re working on, and then reboot to finish setting up Ubuntu on your netbook. When your computer reboots, select Ubuntu at the boot screen.  Wubi leaves the default OS as Windows 7, so if you don’t select anything it will boot into Windows 7 after a few seconds. Ubuntu will automatically finish the install when you boot into it the first time.  This took about 12 minutes in our test. When the setup is finished, your netbook will reboot one more time.  Remember again to select Ubuntu at the boot screen.  You’ll then see a second boot screen; press your Enter key to select the default.   Ubuntu only took less than a minute to boot in our test.  When you see the login screen, select your name and enter your password you setup in Wubi.  Now you’re ready to start exploring Ubuntu Netbook Remix. Using Ubuntu Netbook Remix Ubuntu Netbook Remix offers a simple, full-screen interface to take the best advantage of netbooks’ small screens.  Pre-installed applications are displayed in the application launcher, and are organized by category.  Click once to open an application. The first screen on the application launcher shows your favorite programs.  If you’d like to add another application to the favorites pane, click the plus sign beside its icon. Your files from Windows are still accessible from Ubuntu Netbook Remix.  
From the home screen, select Files & Folders on the left menu, and then click the icon that says something like 100GB Filesystem under the Volumes section. Now you’ll be able to see all of your files from Windows.  Your user files such as documents, music, and pictures should be located in Documents and Settings in a folder with your user name. You can also easily install a variety of free applications via the Software Installer. Connecting to the internet is also easy, as Ubuntu Netbook Remix automatically recognized the WiFi adaptor on our test netbook, a Samsung N150.  To connect to a wireless network, click the wireless icon on the top right of the screen and select the network’s name from the list. And, if you’d like to customize your screen, right-click on the application launcher and select Change desktop background. Choose a background picture you’d like. Now you’ll see it through your application launcher.  Nice! Most applications are opened full-screen.  You can close them by clicking the x on the right of the program’s name. You can also switch to other applications from their icons on the top left.  Open the home screen by clicking the Ubuntu logo in the far left. Changing Boot Options By default, Wubi will leave Windows as the default operating system, and will give you 10 seconds at boot to choose to boot into Ubuntu.  To change this, boot into Windows and enter Advanced system settings in your start menu search. In this dialog, click Settings under Startup and Recovery. From this dialog, you can select the default operating system and the time to display list of operating systems.  You can enter a lower number to make the boot screen appear for less time. And if you’d rather make Ubuntu the default operating system, select it from the drop-down list.   Uninstalling Ubuntu Netbook Remix If you decide you don’t want to keep Ubuntu Netbook Remix on your computer, you can uninstall it just like you uninstall any normal application.  Boot your computer into Windows, open Control Panel, click Uninstall a Program, and enter ubuntu in the search box.  Select it, and click Uninstall. Click Uninstall at the prompt.  Ubuntu uninstalls very quickly, and removes the entry from the bootloader as well, so your computer is just like it was before you installed it.   Conclusion Ubuntu Netbook Remix offers an attractive Linux interface for netbooks.  We enjoyed trying it out, and found it much more user-friendly than most Linux distros.  And with the Wubi installer, you can install it risk-free and try it out on your netbook.  Or, if you’d like to try out another alternate netbook operating system, check out our article on Jolicloud, another new OS for netbooks. 
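
If you prefer the command line, the boot options covered above (default OS and menu timeout) can also be changed with Windows' built-in bcdedit tool from an elevated Command Prompt. This is a sketch, not part of the original guide, and the entry identifier is a placeholder you must read from the /enum output:

    REM List the boot entries; note the identifier of the Ubuntu entry that Wubi added
    bcdedit /enum

    REM Show the operating system list for 5 seconds instead of the default
    bcdedit /timeout 5

    REM Make an entry the default (replace the braces value with the identifier from /enum)
    bcdedit /default {your-ubuntu-entry-id}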
Links: Download Wubi Installer for Windows; Download Ubuntu Netbook Edition


  • Enable Automatic Code First Migrations On SQL Database in Azure Web Sites

    - by Steve Michelotti
    Now that Azure supports .NET Framework 4.5, you can use all the latest and greatest available features. A common scenario is to be able to use Entity Framework Code First Migrations with a SQL Database in Azure. Prior to Code First Migrations, Entity Framework provided database initializers. While convenient for demos and prototypes, database initializers weren’t useful for much beyond that because, if you delete and re-create your entire database when the schema changes, you lose all of your operational data. This is the void that Migrations are meant to fill. For example, if you add a column to your model, Migrations will alter the database to add the column rather than blowing away the entire database and re-creating it from scratch. Azure is becoming increasingly easier to use – especially with features like Azure Web Sites. Being able to use Entity Framework Migrations in Azure makes deployment easier than ever. In this blog post, I’ll walk through enabling Automatic Code First Migrations on Azure. I’ll use the Simple Membership provider for my example. First, we’ll create a new Azure Web site called “migrationstest” including creating a new SQL Database along with it:   Next we’ll go to the web site and download the publish profile:   In the meantime, we’ve created a new MVC 4 website in Visual Studio 2012 using the “Internet Application” template. This template is automatically configured to use the Simple Membership provider. We’ll do our initial Publish to Azure by right-clicking our project and selecting “Publish…”. From the “Publish Web” dialog, we’ll import the publish profile that we downloaded in the previous step:   Once the site is published, we’ll just click the “Register” link from the default site. Since the AccountController is decorated with the [InitializeSimpleMembership] attribute, the initializer will be called and the initial database is created.   We can verify this by connecting to our SQL Database on Azure with SQL Management Studio (after making sure that our local IP address is added to the list of Allowed IP Addresses in Azure): One interesting note is that these tables got created with the default Entity Framework initializer – which is to create the database if it doesn’t already exist. However, our database did already exist! This is because there is a new feature of Entity Framework 5 where Code First will add tables to an existing database as long as the target database doesn’t contain any of the tables from the model. At this point, it’s time to enable Migrations. We’ll open the Package Manger Console and execute the command: PM> Enable-Migrations -EnableAutomaticMigrations This will enable automatic migrations for our project. Because we used the "-EnableAutomaticMigrations” switch, it will create our Configuration class with a constructor that sets the AutomaticMigrationsEnabled property set to true: 1: public Configuration() 2: { 3: AutomaticMigrationsEnabled = true; 4: } We’ll now add our initial migration: PM> Add-Migration Initial This will create a migration class call “Initial” that contains the entire model. But we need to remove all of this code because our database already exists so we are just left with empty Up() and Down() methods. 
1: public partial class Initial : DbMigration 2: { 3: public override void Up() 4: { 5: } 6: 7: public override void Down() 8: { 9: } 10: } If we don’t remove this code, we’ll get an exception the first time we attempt to run migrations that tells us: “There is already an object named 'UserProfile' in the database”. This blog post by Julie Lerman fully describes this scenario (i.e., enabling migrations on an existing database). Our next step is to add the Entity Framework initializer that will automatically use Migrations to update the database to the latest version. We will add these 2 lines of code to the Application_Start of the Global.asax: 1: Database.SetInitializer(new MigrateDatabaseToLatestVersion<UsersContext, Configuration>()); 2: new UsersContext().Database.Initialize(false); Note the Initialize() call will force the initializer to run if it has not been run before. At this point, we can publish again to make sure everything is still working as we are expecting. This time we’re going to specify in our publish profile that Code First Migrations should be executed:   Once we have re-published we can once again navigate to the Register page. At this point the database has not been changed but Migrations is now enabled on our SQL Database in Azure. We can now customize our model. Let’s add 2 new properties to the UserProfile class – Email and DateOfBirth: 1: [Table("UserProfile")] 2: public class UserProfile 3: { 4: [Key] 5: [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)] 6: public int UserId { get; set; } 7: public string UserName { get; set; } 8: public string Email { get; set; } 9: public DateTime DateOfBirth { get; set; } 10: } At this point all we need to do is simply re-publish. We’ll once again navigate to the Registration page and, because we had Automatic Migrations enabled, the database has been altered (*not* recreated) to add our 2 new columns. We can verify this by once again looking at SQL Management Studio:   Automatic Migrations provide a quick and easy way to keep your database in sync with your model without the worry of having to re-create your entire database and lose data. With Azure Web Sites you can set up automatic deployment with Git or TFS and automate the entire process to make it dead simple.
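
To put the two initializer lines above in context, the Application_Start in Global.asax.cs ends up looking roughly like this. This is a sketch – the MigrationsTest namespaces are placeholders for whatever namespaces your project actually uses for UsersContext and the Migrations Configuration class:

    using System.Data.Entity;
    using System.Web.Mvc;
    using MigrationsTest.Migrations;   // hypothetical: namespace holding the Configuration class
    using MigrationsTest.Models;       // hypothetical: namespace holding UsersContext

    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            // ... route, filter and bundle registration omitted ...

            // Use Automatic Migrations to bring the SQL Database up to the latest model version
            Database.SetInitializer(
                new MigrateDatabaseToLatestVersion<UsersContext, Configuration>());

            // Force the initializer to run now instead of waiting for the first data access
            new UsersContext().Database.Initialize(force: false);
        }
    }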


  • SQL SERVER – Spatial Database Queries – What About BLOB – T-SQL Tuesday #006

    - by pinaldave
    Michael Coles is one of the most interesting book authors I have ever met. He has a flair of writing complex stuff in a simple language. There are a very few people like that.  I really enjoyed reading his recent book, Expert SQL Server 2008 Encryption. I strongly suggest taking a look at it. This blog is written in response to T-SQL Tuesday #006: “What About BLOB? by Michael Coles. Spatial Database is my favorite subject. Since I did my TechEd India 2010 presentation, I have enjoyed this subject a lot. Before I continue this blog post, there are a few other blog posts, so I suggest you read them.  To help build the environment run the queries, I am going to present them in this single blog post. SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spatial Indexing This blog post explains the basics of Spatial Database and also provides a good introduction to Indexing concept. SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database This blog post will enable you with how to load the shape file into database. SQL SERVER – Spatial Database Definition and Research Documents This blog post links to the white paper about Spatial Database written by Microsoft experts. SQL SERVER – Introduction to Spatial Coordinate Systems: Flat Maps for a Round Planet This blog post links to the white paper explaining coordinate system, as written by Microsoft experts. After reading the above listed blog posts, I am very confident that you are ready to run the following script. Once you create a database using the World Shapefile, as mentioned in the second link above,you can display the image of India just like the following. Please note that this is not an accurate political map. The boundary of this map has many errors and it is just a representation. You can run the following query to generate the map of India from the database spatial which you have created after following the instructions here. USE Spatial GO -- India Map SELECT [CountryName] ,[BorderAsGeometry] ,[Border] FROM [Spatial].[dbo].[Countries] WHERE Countryname = 'India' GO Now, let us find the longitude and latitude of the two major IT cities of India, Hyderabad and Bangalore. I find their values as the following: the values of longitude-latitude for Bangalore is 77.5833300000 13.0000000000; for Hyderabad, longitude-latitude is 78.4675900000 17.4531200000. Now, let us try to put these values on the India Map and see their location. -- Bangalore DECLARE @GeoLocation GEOGRAPHY SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326).STBuffer(20000); -- Hyderabad DECLARE @GeoLocation1 GEOGRAPHY SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326).STBuffer(20000); -- Bangalore and Hyderabad on Map of India SELECT name, [GeoLocation] FROM [IndiaGeoNames] I WHERE I.[GeoLocation].STDistance(@GeoLocation) <= 0 UNION ALL SELECT name, [GeoLocation] FROM [IndiaGeoNames] I WHERE I.[GeoLocation].STDistance(@GeoLocation1) <= 0 UNION ALL SELECT '',[Border] FROM [Spatial].[dbo].[Countries] WHERE Countryname = 'India' GO Now let us quickly draw a straight line between them. 
DECLARE @GeoLocation GEOGRAPHY SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326).STBuffer(10000); DECLARE @GeoLocation1 GEOGRAPHY SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326).STBuffer(10000); DECLARE @GeoLocation2 GEOGRAPHY SET @GeoLocation2 = GEOGRAPHY::STGeomFromText('LINESTRING(78.4675900000 17.4531200000, 77.5833300000 13.0000000000)',4326) SELECT name, [GeoLocation] FROM [IndiaGeoNames] I WHERE I.[GeoLocation].STDistance(@GeoLocation) <= 0 UNION ALL SELECT name, [GeoLocation] FROM [IndiaGeoNames] I1 WHERE I1.[GeoLocation].STDistance(@GeoLocation1) <= 0 UNION ALL SELECT '' name, @GeoLocation2 UNION ALL SELECT '',[Border] FROM [Spatial].[dbo].[Countries] WHERE Countryname = 'India' GO Let us use the distance function of the spatial database and find the straight line distance between this two cities. -- Distance Between Hyderabad and Bangalore DECLARE @GeoLocation GEOGRAPHY SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326) DECLARE @GeoLocation1 GEOGRAPHY SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326) SELECT @GeoLocation.STDistance(@GeoLocation1)/1000 'KM'; GO The result of above query is as displayed in following image. As per SQL Server, the distance between these two cities is 501 KM, but according to what I know, the distance between those two cities is around 562 KM by road. However, please note that roads are not straight and they have lots of turns, whereas this is a straight-line distance. What would be more accurate is the distance between these two cities by air travel. When we look at the air travel distance between Bangalore and Hyderabad, the total distance covered is 495 KM, which is very close to what SQL Server has estimated, which is 501 KM. Bravo! SQL Server has accurately provided the distance between two of the cities. SQL Server Spatial Database can be very useful simply because it is very easy to use, as demonstrated above. I appreciate your comments, so let me know what your thoughts and opinions about this are. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Spatial Database
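
The same STDistance function also makes a simple “nearest places” query easy. The sketch below assumes the same [IndiaGeoNames] table used in the queries above and lists the five entries closest to Bangalore:

    -- Five closest entries to Bangalore, with distance in KM
    DECLARE @Bangalore GEOGRAPHY
    SET @Bangalore = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)', 4326)
    SELECT TOP (5)
           name,
           I.[GeoLocation].STDistance(@Bangalore) / 1000 AS DistanceKM
    FROM [IndiaGeoNames] I
    ORDER BY I.[GeoLocation].STDistance(@Bangalore)
    GO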


  • SQL Developer at Oracle Open World 2012

    - by thatjeffsmith
    We have a lot going on in San Francisco this fall. One of the most personally exciting bits, for what will be my 4th or 5th Open World, is that this will be my FIRST as a member of Team Oracle. I’ve presented once before, but most years it was just me pressing flesh at the vendor booths. After 3-4 days of standing and talking, you’re ready to just go home and not do anything for a few weeks. This time I’ll have a chance to walk around and talk with our users and get a good idea of what’s working and what’s not. Of course it will be a great opportunity for you to find us and get to know your SQL Developer team!

3.4 miles across and back – thanks Ashley for signing me up for the run!

This year is going to be a bit crazy. Work-wise I’ll be presenting twice, working a booth, and proctoring several of our Hands-On Labs. The fun parts will be equally crazy though – running across the Bay Bridge (I don’t run), swimming the Bay (I don’t swim), having my wife fly out on Wednesday for the concert, and then our first WhiskyFest on Friday (I do drink whisky though). But back to work – let’s talk about EVERYTHING you can expect from the SQL Developer team.

Booth Hours
We’ll have 2 ‘demo pods’ in the Exhibition Hall over at Moscone South. Look for the farm of Oracle booths; we’ll be there under the signs that say ‘SQL Developer.’ There will be several people on hand, mostly developers (yes, they still count as people), who can answer your questions or demo the latest features. Come by and say ‘Hi!’, and let us know what you like and what you think we can do better. Seriously.
Monday 10AM – 6PM
Tuesday 9:45AM – 6PM
Wednesday 9:45AM – 4PM

Presentations
Stop by for an hour, pull up a chair, sit back and soak in all the SQL Developer goodness. You’ll only have to suffer my bad jokes for two of the presentations, so please at least try to come to the other ones. We’ll be talking about data modeling, migrations, source control, and new features in versions 3.1 and 3.2 of SQL Developer and SQL Developer Data Modeler.
Monday 10:45 – What’s New in SQL Developer
Monday 4:45 – Why Move to Oracle Application Express Listener
Tuesday 10:15 – Using Subversion in Oracle SQL Developer Data Modeler
Tuesday 11:45 – Oracle SQL Developer Tips & Tricks
Tuesday 5:00 – Database Design with Oracle SQL Developer Data Modeler
Wednesday 11:45 – Migrating Third-Party Databases and Applications to Oracle Exadata
Wednesday 3:30 – 11g Enterprise Options and Management Packs for Developers

Hands On Labs (HOLs)
The Hands On Labs allow you to come into a classroom environment, sit down at a computer, and run through some exercises. We’ll provide the hardware, software, and training materials. It’s self-paced, but we’ll have several helpers walking around to answer questions and chat up any SQL Developer or database topic that comes to mind. If your employer is sending you to Open World for all that great training, the HOLs are a great opportunity to capitalize on that. They are only 60 minutes each, so you don’t have to worry about burning out. And there’s no homework! Of course, if you do want to take the labs home with you, many are already available via the Developer Day Hands-On Database Applications Developer Lab. You will need your own computer for those, but we’ll take care of the rest.
Wednesday 10:15 – PL/SQL Development and Unit Testing with Oracle SQL Developer
Wednesday 11:45 – Performance Tuning with Oracle SQL Developer
Thursday 11:15 – The Soup to Nuts of Data Modeling with Oracle SQL Developer Data Modeler

Some Parting Advice
Always wanted to meet your favorite Oracle authors, speakers, and thought-leaders? Don’t be shy, walk right up to them and introduce yourself. Normal social rules still apply, but at the conference everyone is open and up for meeting and talking with attendees. Just understand if there’s a line that you might only get a minute or two. It’s a LONG conference though, so you’ll have plenty of time to catch up with everyone. If you’re going to be around on Tuesday evening, head on over to the OTN Lounge from 4:30 to 6:30 and hang out for our Tweet Meet. That’s right, all the Oracle nerds on Twitter will be there in one place. Be sure to put your Twitter handle on your name tag so we know who you are!


  • Enterprise Manager Database Control Configuration - Recovering From Errors Due to CA Expiry on Oracle Database 10.2.0.4 or 10.2.0.5 from 31-Dec-2010 onwards

    - by jayatheertha.rao(at)oracle.com
    Description What is the Issue? In Enterprise Manager Database Control with Oracle Database 10.2.0.4 and 10.2.0.5, the root certificate used to secure communications via the Secure Socket Layer (SSL) protocol will expire on 31-Dec-2010 00:00:00. The certificate expiration will cause errors if you attempt to configure Database Control on or after 31-Dec-2010. Existing Database Control configurations are not affected by this issue. Likelihood of Occurrence What Versions Are Affected? The issue impacts configuration of Database Control with Oracle Database 10.2.0.4 and 10.2.0.5 only. It does not impact database creation or upgrade. The issue does not impact existing Database Control configurations. What Happens During Database Control Configuration Failure? Database Configuration Assistant (DBCA) and Database Upgrade Assistant (DBUA) Errors Database Configuration Assistant (DBCA) and Database Upgrade Assistant (DBUA) will report the following error in the console: Could not complete the Enterprise Manager configuration.Enterprise manager configuration failed due to the following error -Error starting Database Control Enterprise Manager Configuration Assistant (EMCA) Errors Enterprise Manager Configuration Assistant (EMCA) will write errors similar to those below to the emca.log file: CONFIG: Securing Database Control completed successfully .Jan 2, 2011 7:22:47 PM oracle.sysman.emcp.ParamsManager getParamCONFIG: No value was set for the parameter ORACLE_HOSTNAME.Jan 2, 2011 7:22:47 PM oracle.sysman.emcp.util.DBControlUtil startOMSINFO: Starting Database Control (this may take a while) ...Jan 2, 2011 7:22:47 PM oracle.sysman.emcp.util.PlatformInterface addEnvVarToListCONFIG: Value for env var 'ORACLE_HOSTNAME' is '', discarding the sameCONFIG: Returning env array from cacheJan 2, 2011 7:22:47 PM oracle.sysman.emcp.util.PlatformInterface executeCommandCONFIG: Starting execution: /myhost/bin/emctl start dbconsoleJan 2, 2011 7:27:26 PM oracle.sysman.emcp.util.PlatformInterface executeCommandCONFIG: Exit value of 1Jan 2, 2011 7:27:26 PM oracle.sysman.emcp.util.PlatformInterface executeCommandCONFIG: Oracle Enterprise Manager 10g Database Control Release 10.2.0.4.0Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.https://myhost:5501/em/console/aboutApplicationStarting Oracle Enterprise Manager 10g Database Control............................................................................................. 
failed.------------------------------------------------------------------Logs are generated in directory /myhost/sysman/logJan 2, 2011 7:27:26 PM oracle.sysman.emcp.util.PlatformInterface executeCommandWARNING: Error executing /myhost/bin/emctl start dbconsoleJan 2, 2011 7:27:26 PM oracle.sysman.emcp.EMConfig performSEVERE: Error starting Database ControlRefer to the log file at /myhost/dbua/d4/upgrade/emConfig.log for more details.Jan 2, 2011 7:27:26 PM oracle.sysman.emcp.EMConfig performCONFIG: Stack Trace:oracle.sysman.emcp.exception.EMConfigException: Error starting Database Controlat oracle.sysman.emcp.EMDBPostConfig.performUpgrade(EMDBPostConfig.java:763)at oracle.sysman.emcp.EMDBPostConfig.invoke(EMDBPostConfig.java:232)at oracle.sysman.emcp.EMDBPostConfig.invoke(EMDBPostConfig.java:193)at oracle.sysman.emcp.EMConfig.perform(EMConfig.java:184)at oracle.sysman.assistants.util.em.EMConfiguration.run(EMConfiguration.java:436)at oracle.sysman.assistants.util.em.EMConfigStep.executeImpl(EMConfigStep.java:140)at oracle.sysman.assistants.util.step.BasicStep.execute(BasicStep.java:210)at oracle.sysman.assistants.util.step.BasicStep.callStep(BasicStep.java:251)at oracle.sysman.assistants.dbma.backend.EMConfigStep.executeStepImpl(EMConfigStep.java:104)at oracle.sysman.assistants.dbma.backend.SummarizableStep.executeImpl(SummarizableStep.java:175)at oracle.sysman.assistants.util.step.BasicStep.execute(BasicStep.java:210)at oracle.sysman.assistants.util.step.Step.execute(Step.java:140)at oracle.sysman.assistants.util.step.StepContext$ModeRunner.run(StepContext.java:2488)at java.lang.Thread.run(Thread.java:534) The EMCA console will display output similar to the following: aime@myhost09 db_1]$ bin/emca -config dbcontrol db -repos recreate -clusterSTARTED EMCA at Jan 11, 2011 4:11:01 PMEM Configuration Assistant, Version 10.2.0.1.0 ProductionCopyright (c) 2003, 2005, Oracle. All rights reserved.Enter the following information:Database unique name: catestDatabase Control is already configured for the database catestYou have chosen to configure Database Control for managing the database catestThis will remove the existing configuration and the default settings and perform a fresh configurationDo you wish to continue? [yes(Y)/no(N)]: YListener port number: 1521Cluster name: myclusterPassword for SYS user:Password for DBSNMP user:Password for SYSMAN user:Email address for notifications (optional):Outgoing Mail (SMTP) server for notifications (optional):........Jan 11, 2011 4:18:05 PM oracle.sysman.emcp.util.DBControlUtil secureDBConsoleINFO: Securing Database Control (this may take a while) ...Jan 11, 2011 4:19:31 PM oracle.sysman.emcp.util.DBControlUtil startOMSINFO: Starting Database Control (this may take a while) ...Jan 11, 2011 4:28:38 PM oracle.sysman.emcp.EMConfig performSEVERE: Error starting Database ControlRefer to the log file at /myhost/oracle/product/10.2.0/db_1/cfgtoollogs/emca/catest/emca_2011-01-11_04-11-01-PM.log for more details.Could not complete the configuration. Refer to the log file at /myhost/oracle/product/10.2.0/db_1/cfgtoollogs/emca/catest/emca_2011-01-11_04-11-01-PM.log for more details. At the end of the database installation on non-Windows platforms, both Database Control and the Management Agent will be up and running, even though the status of both components will be shown as not running, because EMCTL will be unable to connect to the dbconsole process. In addition, Database Control will fail to connect to the Agent. 
Note for Windows Platform Only:On Windows, the dbconsole process will be stopped after the failed configuration attempt. Note that the tool used to perform Database Control configuration (DBUA, DBCA or EMCA) will also wait for 15 minutes for Database Control to start, then time out. The output of the "emctl status dbconsole" command incorrectly returns the status of Database Control, as shown below: $ ./emctl status dbconsoleOracle Enterprise Manager 10g Database Control Release 10.2.0.1.0Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.https://myhost:1158/em/console/aboutApplicationOracle Enterprise Manager 10g is not running. The output of the "emctl status agent" command incorrectly returns the status of the Agent, as shownbelow: $ ./emctl status agentOracle Enterprise Manager 10g Database Control Release 10.2.0.1.0Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.---------------------------------------------------------------Agent is Not Running   For Solution, refer to Note: 1222603.1 Note: 1217493.1


  • Easily use google maps, openstreet maps etc offline.

    - by samkea
    I did it and i am going to explain step by step. The explanatination may appear long but its simple if you follow. Note: All the softwares i have used are the latest and i have packaged them and provided them in the link below. I use Nokia N96 1) RootSign smartComGPS and install it on your phone(i havent provided the signer so that u wuld do some little work. i used Secman' rootsign). 2) Install Universal Maps Downloader, SmartCom OGF2 converter and OziExplorer 3.95.4s on my PC. a) UMD is used to download map tiles from any map source like googlemaps,opensourcemaps etc... and also combine the tiles into an image file like png,jpg,bmp etc... b) SmartCom OGF2 converter is used to convert the image file into a format usable on your mobile phone. c) OziExplorer will help you to calibrate the usable map file so that it can be used with GPS on your mobile phone without the use of internet. 3) Go to google maps or where u pick your maps and pan to the area of your interest. Zoom the map to at least 15 or 16 zoom level where you can see your area clearly and the streets. 4) copy this script in a notepad file and save it on your desktop: javascript:void(prompt('',gApplication.getMap().ge tCenter())); 5) Open the universal maps downloader. You will notice that you are required to add the: left longitude, right longitude,top latitude, bottom latitude. 6) On your map in google maps, doubleclick on the your prefered to most middle point. you will notice that the map will center in that area. 7) copy the script and paste it in the address bar then press enter. You will notice that a dialog with your (top latitude) and longitude respectively pops up. 8) copy the top latitude ONLY and paste it in the corresponding textbox in the UMD. 9) repeat steps 6-7 for the botton latitude. 10)repeat steps 6-7 for left longitude and right longitude too, but u have to copy the longitudes here. (***BTW record these points in the text file as they may be needed later in calibration) 11) Give the zoom level to the same zoom level that you prefered in google maps. 12) Dont forget to choose a path to save your files and under options set the proxy connection settings in UMD if you are using so. 13) Click on start and bingo! there you have your image tiles and a file with an extension .umd will be saved in the same folder. 14) On the UMD, go to tools, click on MapViewer and choose the .umd file. you will now see your map in one piece....and you will smile! 15) Still go to tools and click on map combiner. A dialog will popup for you to choose the .umd file and to enter the IMAGE file name. u can use another extension for the image file like png, jpg etc...i usually use png. 16) Combine.....bingo! there u go! u have an IMAGE file for your map. *I SUGGEST THAT CREATE A .BMP FILE and A .PNG file* 17) Close UMD and open SmartCom OGF2 converter. 18) Choose your .png image and create an ogf2 file. 19) Connect your phone to your PC in Mass Memory mode and transfer the file to the smartComGPS\Maps folder. 20) Now disconnect your phone and load smartComGPS. it will load the map and propt you to add a calibration point. Go ahead and add one calibration point with dummy coordinates. You will notice that it will add another file with extension .map in the smartComGPS\Maps folder. 21) Connect yiur ohone and copy that file and paste it in your working folder on your PC. Delete that .map file from the phone too because you are going to edit it from your PC and put it back. 
22) Now open OziExplorer and go to File --> Load and Calibrate Map Image.
23) Choose the .bmp image and bingo! It loads your map at the same zoom level.
24) Now you are going to calibrate. Use the MapView window and take the small box locator to all four corners of the map. You will notice that the map in the background moves to that area too.
25) On the right side, select the Point1 tab. Now you are in calibration mode. Move the red box in MapView to the upper-left corner to calibrate Point1.
26) Outside MapView, go to the upper-left corner of the background map and choose point (0,0) as your first calibration point. You will notice that these X,Y coordinates are reflected in the Point1 image coordinates.
27) Now go back to the text file where you saved your coordinates and enter the top latitude and the left longitude in the corresponding places.
28) Repeat steps 25-27 for Point2, Point3 and Point4, then click Save. That's it, you have calibrated your image and you are about to finish.
29) Go to Save and a dialog pops up prompting you to save a .map file. Save the map file in your working folder.
30) Right-click that .map file and edit the filename inside the .map file to remove the PC's directory structure, e.g. change C:\OziExplorer\data\Kampala.bmp to Kampala.ogf2.
31) Save the .map file in the smartComGPS\Maps folder on your phone.
32) Now open smartComGPS on your phone and bingo! There is your map, with GPS capability and at the same zoom level.
33) In the smartComGPS options, choose Connect and Simulate. By now you should be smiling. Whoa! Hope I was of help. In case you get a problem, please inform me. Below is the link to the software. Regards. http://rapidshare.com/files/230296037/Utilities_Used.rar.html
OK, the Rapidshare files I posted are gone, so you will have to download the tools as described in the solution. If you need more help, go here: http://www.dotsis.com/mobile_phone/sitemap/t-160491.html Some months later, someone else gave almost the same kind of solution here: http://www.dotsis.com/mobile_phone/sitemap/t-180123.html Note: the solutions were meant to help view maps on Symbian phones, but I think they can now also work for Windows Phones, iPhones and others, so read, extract what you want and use it. Hope it helps. Sam Kea
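As a side note on how UMD-style downloaders work: map sources such as Google Maps and OpenStreetMap serve square tiles addressed by x/y indices plus a zoom level, so a bounding box and a zoom level fully determine which tiles to fetch. The following C# sketch is my own illustration (it is not part of any of the tools above, and the Kampala coordinates are approximate); it shows the standard Web Mercator tile math behind that bounding-box form.

    using System;

    class TileMath
    {
        // Convert a latitude/longitude (in degrees) to x/y tile indices
        // at a given zoom level, using the standard Web Mercator scheme.
        static void LatLonToTile(double lat, double lon, int zoom, out int x, out int y)
        {
            double latRad = lat * Math.PI / 180.0;
            int n = 1 << zoom; // tiles per axis at this zoom level
            x = (int)((lon + 180.0) / 360.0 * n);
            y = (int)((1.0 - Math.Log(Math.Tan(latRad) + 1.0 / Math.Cos(latRad)) / Math.PI) / 2.0 * n);
        }

        static void Main()
        {
            // Approximate bounding box around Kampala at zoom level 15
            LatLonToTile(0.36, 32.54, 15, out int left, out int top);
            LatLonToTile(0.28, 32.64, 15, out int right, out int bottom);
            long tiles = (long)(right - left + 1) * (bottom - top + 1);
            Console.WriteLine("Tiles to download at zoom 15: " + tiles);
        }
    }

Multiplying the tile counts in each direction like this also gives you a rough feel for how quickly the download grows as you raise the zoom level.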

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters Pick a small project with a small(ish) team.  This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board! Research Research the tool(s) that you want to use.  Some tools provide all of the features you would need while some only provide a slice of the pie.  DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next.  Ideally a tool can track database versions and automatically apply updates.  The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption.  Finally, an automated tool to generate a script file per database object is an added bonus as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes. Don’t settle on just one tool, identify several.  Then work with the team to evaluate the tools.  Have the team do some tests of the following scenarios with each tool: Baseline an existing database: can the migration tool work with legacy databases?  Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs. Add/drop tables Add/drop procedures/functions/views Alter tables (rename columns, add columns, remove columns) Massage data – migrations sometimes involve changing data types that cannot be implicitly casted and require you to decide how the data is explicitly cast to the new type.  This is a requirement for a migrations platform.  Think about a case where you might want to combine fields, or move a field from one table to another, you wouldn’t want to lose the data. Run the tool via the command line.  If you cannot automate the tool in Continuous Integration what is the point? Create a copy of a database on demand. Backup/restore databases locally. Let the team give feedback and decide together, what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms.  In general I would recommend staying away from the fluent platforms as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL.  Code migrations often get messy with procedures/views/functions as these have to be created with SQL and aren’t cross platform anyways.  IMO stick to SQL based migrations. Reconciling Production If your project is a legacy application, you will need to reconcile the current state of production with your development databases.  Find changes in production and bring them down to development, even if they are old and need to be removed.  Once complete, produce a baseline of either dev or prod as they are now in sync.  Commit this to your VCS of choice. Add whatever schema changes tracking mechanism your tool requires to your development database.  This often requires adding a table to track the schema version of that database.  Your tool should support doing this for you.  You can add this table to production when you do your next release. Script out any changes currently in dev.  Remove production artifacts that you brought down during reconciliation.  Add change scripts for any outstanding changes in dev since the last production release.  Commit these to your repository.   
Say No to Shared Dev DBs Simply put, you wouldn’t dream of sharing a code checkout, why would you share a development database?  If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS).  Doing DB VCS with a shared database is bound to cause problems as people won’t be able to easily script out their own changes from those that others are working on.   First prod release Copy prod to your beta/testing environment.  Add the schema changes table (or mechanism) and do a test run of your changes.  If successful you can schedule this to be run on production.   Evaluation After your first release, evaluate the pain points of the process.  Try to find tools or modifications to existing tools to help fix them.  Don’t leave stones unturned, iteratively evolve your tools and practices to make the process as seamless as possible.  This is why I suggest open source alternatives.  Nothing is set in stone, a good example was adding transactional support to TSqlMigrations.  We ran into situations where an update would break a database, so I added a feature to do transactional updates and rollback on errors!  Another good example is generating change scripts.  We have been manually making these for months now.  I found an open source project called Open DB Diff and integrated this with TSqlMigrations.  These were things we just accepted at the time when we began adopting our tool set.  Once we became comfortable with the base functionality, it was time to start automating more of the process.  Just like anything else with development, never be afraid to try to find tools to make your job easier!   Enjoy -Wes
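To make the "change scripts plus schema version table" idea concrete, here is a minimal C# sketch of a SQL-based migration runner of the kind described above. It is only an illustration under my own assumptions (the SchemaVersion table, the numbered-file naming convention and the command-line arguments are mine, not TSqlMigrations' or RoundHouse's actual API); real tools add logging, GO-batch splitting, baselining and much more.

    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Linq;

    class MigrationRunner
    {
        // Applies numbered scripts (001_create_tables.sql, 002_add_column.sql, ...)
        // that are newer than the version recorded in a SchemaVersion table.
        static void Main(string[] args)
        {
            string connectionString = args[0]; // e.g. "Server=.;Database=AppDb;Integrated Security=true"
            string scriptFolder = args[1];     // folder of .sql change scripts kept under version control

            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                new SqlCommand(
                    "IF OBJECT_ID('dbo.SchemaVersion') IS NULL CREATE TABLE dbo.SchemaVersion (Version int NOT NULL)",
                    conn).ExecuteNonQuery();

                int current = (int)new SqlCommand(
                    "SELECT ISNULL(MAX(Version), 0) FROM dbo.SchemaVersion", conn).ExecuteScalar();

                foreach (string file in Directory.GetFiles(scriptFolder, "*.sql").OrderBy(f => f))
                {
                    int version = int.Parse(Path.GetFileName(file).Split('_')[0]);
                    if (version <= current) continue; // already applied

                    // Each script runs in its own transaction; disposing an
                    // uncommitted transaction rolls the script back on failure.
                    using (SqlTransaction tx = conn.BeginTransaction())
                    {
                        new SqlCommand(File.ReadAllText(file), conn, tx).ExecuteNonQuery(); // scripts must not contain GO separators
                        new SqlCommand("INSERT INTO dbo.SchemaVersion (Version) VALUES (" + version + ")",
                            conn, tx).ExecuteNonQuery();
                        tx.Commit();
                    }
                    Console.WriteLine("Applied " + Path.GetFileName(file));
                }
            }
        }
    }

Because it is just a console program, a runner like this drops straight into a Continuous Integration build, which is exactly the command-line criterion called out in the evaluation list above.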

    Read the article

  • Top 10 Tips & Tricks for Oracle SQL Developer

    - by thatjeffsmith
    Being a short week due to the holiday, and with everyone enjoying their Summer vacations (apologies Southern Hemispherians), I reckoned it was a great time to do one of those lazy recap-Top 10-Reader’s Digest type posts. I’ve been sharing 1-3 tips or ‘tricks’ a week since I started blogging about SQL Developer, and I have more than enough content to write a book. But since I’m lazy, I’m just going to compile a list of my favorite ‘must know’ tips instead. I always have to leave out a few tips when I do my presentations, so now I can refer back to this list to make sure I’m not forgetting anything. So without further ado… 1. Configure Your Preferences Yes, there are a LOT of options. But you don’t need to worry about all of them just yet. I do recommend you take a quick look at these ones in particular. Whether you’re new to the tool or have been using it for 5 years, don’t overlook these settings! 2. Disable Extensions You Aren’t Using If you’re not using Data Miner, or if you’re not working on a Migration – disable those extensions! SQL Developer will run leaner & meaner, plus the user interface will be a bit more simplified making the tool easier to navigate as well. 3. SQL Recall via Keyboard Access your history via the keyboard! Cycle through your recent SQL statements just using these magic key strokes! Ctrl+Up or Ctrl+Down. 4. Format Your Query Output Directly to CSV, XML, HTML, etc Have the query results pre-formatted in the format of your choice! Too lazy to run the Export wizard for your query result sets? Just add the SQL Developer output hints to your statement and have the output auto-magically formatted to the style of your choice! 5. Drag & Drop Multiple Tables to the Worksheet SQL Developer will auto-join the related objects. You can then toggle over to the Query Builder to toggle off the columns you don’t want to query. I guarantee this tip will save you time if you’re joining 3 or more tables! 6. Drag & Drop Multiple Tables to a Relational Model A pretty picture is worth a few dozen DDL scripts? SQL Developer does data modeling! If you ctrl-drag a table to a model, it will take that table and any related tables and reverse engineer them to a relational model! You can then print it out or export it to HTML, PDF, etc. 7. View Your PL/SQL Execution Output Automatically Function returns a refcursor? Procedure had 3 out parameters? When you run these programs via the Procedure Editor, we automatically capture the output and place them into one or more data grids for you to browse. 8. Disable Automatic Code Insight and Use It On-Demand Code Editor – Completion Insight – Enable Completion Auto-Popup (Keyword being Auto) Some folks really don’t like it when their IDEs or word-processors try to do ‘too much’ for them. Thankfully SQL Developer allows you to either increase the delay before it attempts to auto-complete your text OR to disable the automatic bit. Instead, you can invoke it on-demand. 9. Interactive Debugging – Change Your Variable Values as You Step Through Your PLSQL Watches aren’t just for watching. You can actually interact with your programs and ‘see what happens’ when X = 256 instead of 1. 10. Ditch the Tree View for the Schema Browser There’s nothing wrong with the Connection tree for browsing your database objects. But some folks just can’t seem to get comfortable with it. So, we built them a Schema Browser that uses a drop down control instead for changing up your schema and object types. Already Know This Stuff, Want More? 
Just check out my SQL Developer resource page, it’s one of the main links on the top of this page. Or if you can’t find something, just drop me a note in the form of a comment on this page and I’ll do my best to find it or write it for you.

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We are having an OLTP database instance, using SQL Server 2005 with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB and the import duration ranges between 10secs to 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue, where queries get timed out (default of 30 secs set in application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them of taking a long time (5-10 mins) in getting executed, when ideally the execution duration ranges between 5-10 secs. Execution plan of the same displayed Clustered Index Scan happening instead of Clustered Index Seek. All required Indexes are found to be present and Index fragmentation is also minimal as we Rebuild Indexes regularly alongwith Updating Statistics. With no other alternate options occuring to us, we restarted SQL server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits and so we also restarted IIS and that stopped the problem as of now." Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts: Blocking Bad plan in cache Outdated statistics Hardware bottleneck To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocking <> 0).  If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process.  Killing a process will cause the transaction to rollback, so you need to proceed with caution.  Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present.  You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT. The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure.  A clustered index scan might have been chosen either because that is what is in cache already or because of out of date statistics.  The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue.  The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%).  If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter.  If this parameter value is rare, then its execution plan in cache is what we call a bad plan.  You want the best plan in cache for the most frequent parameter values.  If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure.  You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache. To remove a bad plan from cache, you can recompile the stored procedure.  An alternative method is to run DBCC FREEPROCACHE which drops the procedure cache.  
It is better to recompile stored procedures rather than dropping the procedure cache as dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again. To determine if there is a hardware bottleneck occurring such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server.  Hopefully you already have a baseline of the server so you know what is normal and what is not.  Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%.  The servers that I support typically are under 30% CPU utilization, but your baseline could be higher and be within a normal range. If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache.  Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the above mentioned things.  Proceed with caution when restarting the SQL Server service as all transactions that have not completed will be rolled back at startup.  This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped.  Until the crash recovery process is completed on the database, it is unavailable to your applications. If restarting IIS fixes the problem, then the problem might not have been inside SQL Server.  Prior to taking this step, you should do analysis of the above mentioned things. If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
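As a small illustration of the first check (blocking), the sysprocesses query can be run from application or monitoring code just as easily as from Management Studio. This is a hedged C# sketch, not part of the original post: the connection string is a placeholder, and note that the sysprocesses column that holds the blocking spid is named blocked.

    using System;
    using System.Data.SqlClient;

    class BlockingCheck
    {
        static void Main()
        {
            // Placeholder - point this at the instance you are troubleshooting.
            const string connectionString = "Server=.;Database=master;Integrated Security=true";

            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "SELECT spid, blocked, waittime, lastwaittype " +
                    "FROM master..sysprocesses WHERE blocked <> 0", conn);

                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Each row is a session waiting on the spid shown in the 'blocked' column.
                        Console.WriteLine("spid {0} blocked by spid {1} ({2} ms, {3})",
                            reader["spid"], reader["blocked"],
                            reader["waittime"], reader["lastwaittype"]);
                    }
                }
            }
        }
    }

Running a check like this on a schedule (and logging the output) makes it much easier to tell after the fact whether the intermittent timeouts lined up with blocking.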

    Read the article

  • Installing SOA Suite 11.1.1.3

    - by James Taylor
    With the release of Oracle SOA Suite 11.1.1.3 last week (28 April 2010) I thought I would attempt to implement a complete SOA environment with SOA Suite, BPM and OSB on the WLS infrastructure. One major point of difference with 11.1.1.3 is that it is released as a point release, so you must have 11.1.1.2 installed first and then upgrade to 11.1.1.3. This post performs the upgrade on Linux; if you are upgrading on Windows you will need to substitute the directories and files accordingly. This post assumes that you have SOA Suite 11.1.1.2 installed already.
1. Download the 11.1.1.3 software from the following site: http://www.oracle.com/technology/software/products/middleware/htdocs/fmw_11_download.html (WLS 11.1.1.3, RCU 11.1.1.3, SOA Suite 11.1.1.3, OSB 11.1.1.3). Copy the files to a staging area. For the purpose of this document the staging area is /u01/stage.
2. Shut down your existing SOA Suite 11.1.1.2 environment.
3. Execute the WLS 11.1.1.3 install from the stage directory: wls1033_linux32.bin
4. Choose the existing 11.1.1.2 Middleware Home.
5. Ignore the security update notification.
6. Accept the default products to be upgraded.
7. The upgrade of WebLogic has been completed.
8. Upgrade the SOA Suite database schemas using the RCU utility. Unzip the RCU utility into the staging area and run the install: ./u01/stage/rcuHome/bin/rcu
9. Drop the existing repository and provide connection details.
10. Install SOA Suite patch set 11.1.1.3. Unzip the SOA Suite patch set and execute the runInstaller with the following command: ./u01/stage/Disk1/runInstaller -jreLoc $MW_HOME/jdk160_18/jre
11. Choose the existing 11.1.1.2 Middleware Home.
12. Start the install.
13. Your SOA Suite install should now be completed. Now we need to update the database repository. Log in to SQL*Plus as sysdba and execute the following command: SELECT version, status FROM schema_version_registry where owner = 'DEV_SOAINFRA'; The result should be similar to this:
VERSION      STATUS   OWNER
----------   ------   -------------
11.1.1.2.0   VALID    DEV_SOAINFRA
As you can see, the versions of these repositories are still at 11.1.1.2.
14. To upgrade these versions you have two options. The first is to install via RCU, but this will remove any existing services. The second option is to use the Patch Set Assistant. From the $MW_HOME directory run the following command: ./Oracle_SOA1/bin/psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA
15. Install OSB. For the OSB install I did not install the IDE or the Examples. Unzip the OSB download to the stage area and run the runInstaller from the command line: ./u01/stage/osb/Disk1/runInstaller -jreLoc $MW_HOME/jdk160_18/jre
16. Choose Custom Install so as NOT to install the IDE (Eclipse) or Examples.
17. Unselect the Examples and IDE checkboxes.
18. Accept the defaults and start installing.
19. Once the install has been completed, configure the domain by running the Configuration Wizard: $MW_HOME/oracle_common/common/bin/config.sh You can create a new domain; in this document I will extend the soa_domain.
20. Select the following from the checklist. I have selected the BPM Suite; this is unrelated to OSB but I wanted it for my development purposes. To use this functionality additional licenses are required.
21. Configure the database connectivity.
22. Configure the database connectivity for the OSB schema.
23. Accept the defaults if installing on a standard machine; if you require a cluster or advanced configuration then choose the options that suit you.
24. The upgrade is complete and OSB has been installed. Now you can start your environment.

    Read the article

  • Keeping track of File System Utilization in Ops Center 12c

    - by S Stelting
    Enterprise Manager Ops Center 12c provides significant monitoring capabilities, combined with very flexible incident management. These capabilities even extend to monitoring the file systems associated with Solaris or Linux assets. Depending on your needs you can monitor and manage incidents, or you can fine-tune alert monitoring rules for specific file systems. This article will show you how to use Ops Center 12c to: track file system utilization, adjust file system monitoring rules, disable file system rules, and create custom monitoring rules.
If you're interested in this topic, please join us for a WebEx presentation! Date: Thursday, November 8, 2012 Time: 11:00 am, Eastern Standard Time (New York, GMT-05:00) Meeting Number: 598 796 842 Meeting Password: oracle123
To join the online meeting: 1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&RT=MiMxMQ%3D%3D 2. If requested, enter your name and email address. 3. If a password is required, enter the meeting password: oracle123 4. Click "Join". To view in other time zones or languages, please click the link: https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833597&UID=1512095432&PW=NOWQ3YjJlMmYy&ORT=MiMxMQ%3D%3D
Monitoring File Systems for OS Assets: The Libraries tab provides basic, device-level information about the storage associated with an OS instance. This tab shows you the local file system associated with the instance and any shared storage libraries mounted by Ops Center. More detailed information about file system storage is available under the Analytics tab, under the sub-tab named Charts. Here, you can select and display the individual mount points of an OS, and export the utilization data if desired. In this example, the OS instance has a basic root file partition and several NFS directories. Each file system mount point can be independently chosen for display in the Ops Center chart.
File Systems and Incident Reporting: Every asset managed by Ops Center has a "monitoring policy", which determines what represents a reportable issue with the asset. The policy is made up of a set of monitoring rules, where each rule describes an attribute to monitor, the conditions which represent an issue, and the level or levels of severity for the issue. When the conditions are met, Ops Center sends a notification and creates an incident. By default, OS instances have three monitoring rules associated with file systems: File System Reachability: triggers an incident if a file system is not reachable. NAS Library Status: triggers an incident for a value of "WARNING" or "DEGRADED" for a NAS-based file system. File System Used Space Percentage: triggers an incident when file system utilization grows beyond defined thresholds. You can view these rules in the Monitoring tab for an OS. Of course, the catch with the default monitoring rules is that they apply to every file system associated with an OS instance. As a result, any issue with NAS accessibility or disk utilization will trigger an incident. This can cause incidents for file systems to be reported multiple times if the same shared storage is used by many assets, as shown in this screen shot. Depending on the level of control you'd like, there are a number of ways to fine-tune incident reporting. Note that any changes to an asset's monitoring policy will detach it from the default, creating a new monitoring policy for the asset.
If you'd like, you can extract a monitoring policy from an asset, which allows you to save it and apply the customized monitoring profile to other OS assets.
Solution #1: Modify the Reporting Thresholds. In some cases, you may want to modify the basic conditions for incident reporting in your file system. The changes you make to a default monitoring rule will apply to all of the file systems associated with your operating system. Selecting the File Systems Used Space Percentage entry and clicking the "Edit Alert Monitoring Rule Parameters" button opens a pop-up dialog which allows you to modify the rule. The first screen lets you decide when you will check for file system usage, and how long you will wait before opening an incident in Ops Center. By default, Ops Center monitors continuously and reports disk utilization issues which exist for more than 15 minutes. The second screen lets you define actual threshold values. By default, Ops Center opens a Warning level incident if utilization rises above 80%, and a Critical level incident for utilization above 95%.
Solution #2: Disable Incident Reporting for File Systems. If you'd rather not report file system incidents, you can disable the monitoring rules altogether. In this case, you can select the monitoring rules and click the "Disable Alert Monitoring Rule(s)" button to open the pop-up confirmation dialog. Like the first solution, this option affects all file system monitoring. It allows you to completely disable incident reporting for NAS library status or file system space consumption.
Solution #3: Create New Monitoring Rules for Specific File Systems. If you'd like to have the greatest flexibility when monitoring file systems, you can create entirely new rules. Clicking the "Add Alert Monitoring Rule" button (the icon with the green plus sign) opens a wizard which allows you to define a new rule. This rule will be based on a threshold, and will be used to monitor operating system assets. We'd like to add a rule to track disk utilization for a specific file system - the /nfs-guest directory. To do this, we specify the following attribute: FileSystemUsages.name=/nfs-guest.usedSpacePercentage The value of name in the attribute allows us to define a specific NFS shared directory or file system... in the case of this OS, we could have chosen any of the values shown in the File Systems Utilization chart at the beginning of this article. usedSpacePercentage lets us define a threshold based on the percentage of total disk space used. There are a number of other values that we could use for threshold-based monitoring of FileSystemUsages, including freeSpace, freeSpacePercentage, totalSpace, usedSpace and usedSpacePercentage. The final sections of the screen allow us to determine when to monitor for disk usage, and how long to wait after utilization reaches a threshold before creating an incident. The next screen lets us define the threshold values and severity levels for the monitoring rule. If historical data is available, Ops Center will display it in the screen. Clicking the Apply button will create the new monitoring rule and activate it in your monitoring policy. If you combine this with one of the previous solutions, you can precisely define which file systems will generate incidents and notifications. For example, this monitoring policy has the default "File System Used Space Percentage" rule disabled, but the new rule reports ONLY on utilization for the /nfs-guest directory.
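Ops Center evaluates these thresholds itself, but if it helps to picture the logic, here is a small illustrative C# model of the default File System Used Space Percentage rule (80% warning, 95% critical, raised only after the condition has persisted for 15 minutes). This is not the Ops Center API, just a sketch of the behavior described above; the class and member names are my own.

    using System;

    enum Severity { None, Warning, Critical }

    class UsedSpaceRule
    {
        // Default thresholds and delay described above; all are editable in the rule dialog.
        public double WarningPercent = 80.0;
        public double CriticalPercent = 95.0;
        public TimeSpan RaiseAfter = TimeSpan.FromMinutes(15);

        private DateTime? breachedSince;

        // Call periodically with the current utilization of one mount point (e.g. /nfs-guest).
        public Severity Evaluate(double usedSpacePercentage, DateTime now)
        {
            if (usedSpacePercentage < WarningPercent)
            {
                breachedSince = null;               // condition cleared
                return Severity.None;
            }

            if (breachedSince == null) breachedSince = now;  // start the 15-minute clock
            if (now - breachedSince.Value < RaiseAfter)
                return Severity.None;               // wait before opening an incident

            return usedSpacePercentage >= CriticalPercent ? Severity.Critical : Severity.Warning;
        }
    }

The delay check is the part people usually forget when reasoning about these rules: a brief spike above 80% clears itself without ever raising an incident.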

    Read the article

  • ca-certificates-java fails when trying to install openjdk-6-jre

    - by Jonas
    I use a VPS with Ubuntu Server 10.10 x64. I want to use Java and run the command sudo apt-get install openjdk-6-jre but it fails because the installation encounted errors while processing ca-certificates-java. I have tried to install the failed package with: sudo apt-get install ca-certificates-java How can I solve this? I have run sudo apt-get update and sudo apt-get upgrade but I get the same errors after that. I have also installed Ubuntu Server x64 on a VirtualBox, but the two Ubuntu Server 10.10 has different kernel versions (2.6.35 on VirtualBox and 2.6.18 on my VPS). And on VirtualBox I can install Jetty without any problems. The VPS is a fresh install of Ubuntu Server 10.10 x64, the first command I was running was sudo apt-get install openjdk-6-jre. When I run sudo apt-get install ca-certificates-java I get this message: Reading package lists... Done Building dependency tree Reading state information... Done ca-certificates-java is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1 not fully installed or removed. After this operation, 0B of additional disk space will be used. Do you want to continue [Y/n]? Here I press Y then I get this message: Setting up ca-certificates-java (20100412) ... creating /etc/ssl/certs/java/cacerts... error adding brasil.gov.br/brasil.gov.br.crt error adding cacert.org/cacert.org.crt error adding debconf.org/ca.crt error adding gouv.fr/cert_igca_dsa.crt error adding gouv.fr/cert_igca_rsa.crt error adding mozilla/ABAecom_=sub.__Am._Bankers_Assn.=_Root_CA.crt error adding mozilla/AOL_Time_Warner_Root_Certification_Authority_1.crt error adding mozilla/AOL_Time_Warner_Root_Certification_Authority_2.crt error adding mozilla/AddTrust_External_Root.crt error adding mozilla/AddTrust_Low-Value_Services_Root.crt error adding mozilla/AddTrust_Public_Services_Root.crt error adding mozilla/AddTrust_Qualified_Certificates_Root.crt error adding mozilla/America_Online_Root_Certification_Authority_1.crt error adding mozilla/America_Online_Root_Certification_Authority_2.crt error adding mozilla/Baltimore_CyberTrust_Root.crt error adding mozilla/COMODO_Certification_Authority.crt error adding mozilla/COMODO_ECC_Certification_Authority.crt error adding mozilla/Camerfirma_Chambers_of_Commerce_Root.crt error adding mozilla/Camerfirma_Global_Chambersign_Root.crt error adding mozilla/Certplus_Class_2_Primary_CA.crt error adding mozilla/Certum_Root_CA.crt error adding mozilla/Comodo_AAA_Services_root.crt error adding mozilla/Comodo_Secure_Services_root.crt error adding mozilla/Comodo_Trusted_Services_root.crt error adding mozilla/DST_ACES_CA_X6.crt error adding mozilla/DST_Root_CA_X3.crt error adding mozilla/DigiCert_Assured_ID_Root_CA.crt error adding mozilla/DigiCert_Global_Root_CA.crt error adding mozilla/DigiCert_High_Assurance_EV_Root_CA.crt error adding mozilla/DigiNotar_Root_CA.crt error adding mozilla/Digital_Signature_Trust_Co._Global_CA_1.crt error adding mozilla/Digital_Signature_Trust_Co._Global_CA_2.crt error adding mozilla/Digital_Signature_Trust_Co._Global_CA_3.crt error adding mozilla/Digital_Signature_Trust_Co._Global_CA_4.crt error adding mozilla/Entrust.net_Global_Secure_Personal_CA.crt error adding mozilla/Entrust.net_Global_Secure_Server_CA.crt error adding mozilla/Entrust.net_Premium_2048_Secure_Server_CA.crt error adding mozilla/Entrust.net_Secure_Personal_CA.crt error adding mozilla/Entrust.net_Secure_Server_CA.crt error adding mozilla/Entrust_Root_Certification_Authority.crt error adding 
mozilla/Equifax_Secure_CA.crt error adding mozilla/Equifax_Secure_Global_eBusiness_CA.crt error adding mozilla/Equifax_Secure_eBusiness_CA_1.crt error adding mozilla/Equifax_Secure_eBusiness_CA_2.crt error adding mozilla/Firmaprofesional_Root_CA.crt error adding mozilla/GTE_CyberTrust_Global_Root.crt error adding mozilla/GTE_CyberTrust_Root_CA.crt error adding mozilla/GeoTrust_Global_CA.crt error adding mozilla/GeoTrust_Global_CA_2.crt error adding mozilla/GeoTrust_Primary_Certification_Authority.crt error adding mozilla/GeoTrust_Universal_CA.crt error adding mozilla/GeoTrust_Universal_CA_2.crt error adding mozilla/GlobalSign_Root_CA.crt error adding mozilla/GlobalSign_Root_CA_-_R2.crt error adding mozilla/Go_Daddy_Class_2_CA.crt error adding mozilla/IPS_CLASE1_root.crt error adding mozilla/IPS_CLASE3_root.crt error adding mozilla/IPS_CLASEA1_root.crt error adding mozilla/IPS_CLASEA3_root.crt error adding mozilla/IPS_Chained_CAs_root.crt error adding mozilla/IPS_Servidores_root.crt error adding mozilla/IPS_Timestamping_root.crt error adding mozilla/NetLock_Business_=Class_B=_Root.crt error adding mozilla/NetLock_Express_=Class_C=_Root.crt error adding mozilla/NetLock_Notary_=Class_A=_Root.crt error adding mozilla/NetLock_Qualified_=Class_QA=_Root.crt error adding mozilla/Network_Solutions_Certificate_Authority.crt error adding mozilla/QuoVadis_Root_CA.crt error adding mozilla/QuoVadis_Root_CA_2.crt error adding mozilla/QuoVadis_Root_CA_3.crt error adding mozilla/RSA_Root_Certificate_1.crt error adding mozilla/RSA_Security_1024_v3.crt error adding mozilla/RSA_Security_2048_v3.crt error adding mozilla/SecureTrust_CA.crt error adding mozilla/Secure_Global_CA.crt error adding mozilla/Security_Communication_Root_CA.crt error adding mozilla/Sonera_Class_1_Root_CA.crt error adding mozilla/Sonera_Class_2_Root_CA.crt error adding mozilla/Staat_der_Nederlanden_Root_CA.crt error adding mozilla/Starfield_Class_2_CA.crt error adding mozilla/StartCom_Certification_Authority.crt error adding mozilla/StartCom_Ltd..crt error adding mozilla/SwissSign_Gold_CA_-_G2.crt error adding mozilla/SwissSign_Platinum_CA_-_G2.crt error adding mozilla/SwissSign_Silver_CA_-_G2.crt error adding mozilla/Swisscom_Root_CA_1.crt error adding mozilla/TC_TrustCenter__Germany__Class_2_CA.crt error adding mozilla/TC_TrustCenter__Germany__Class_3_CA.crt error adding mozilla/TDC_Internet_Root_CA.crt error adding mozilla/TDC_OCES_Root_CA.crt error adding mozilla/TURKTRUST_Certificate_Services_Provider_Root_1.crt error adding mozilla/TURKTRUST_Certificate_Services_Provider_Root_2.crt error adding mozilla/Taiwan_GRCA.crt error adding mozilla/Thawte_Personal_Basic_CA.crt error adding mozilla/Thawte_Personal_Freemail_CA.crt error adding mozilla/Thawte_Personal_Premium_CA.crt error adding mozilla/Thawte_Premium_Server_CA.crt error adding mozilla/Thawte_Server_CA.crt error adding mozilla/Thawte_Time_Stamping_CA.crt error adding mozilla/UTN-USER_First-Network_Applications.crt error adding mozilla/UTN_DATACorp_SGC_Root_CA.crt error adding mozilla/UTN_USERFirst_Email_Root_CA.crt error adding mozilla/UTN_USERFirst_Hardware_Root_CA.crt error adding mozilla/ValiCert_Class_1_VA.crt error adding mozilla/ValiCert_Class_2_VA.crt error adding mozilla/VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.crt error adding mozilla/Verisign_Class_1_Public_Primary_Certification_Authority.crt error adding mozilla/Verisign_Class_1_Public_Primary_Certification_Authority_-_G2.crt error adding 
mozilla/Verisign_Class_1_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_Class_2_Public_Primary_Certification_Authority.crt error adding mozilla/Verisign_Class_2_Public_Primary_Certification_Authority_-_G2.crt error adding mozilla/Verisign_Class_2_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt error adding mozilla/Verisign_Class_3_Public_Primary_Certification_Authority_-_G2.crt error adding mozilla/Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_Class_4_Public_Primary_Certification_Authority_-_G2.crt error adding mozilla/Verisign_Class_4_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_RSA_Secure_Server_CA.crt error adding mozilla/Verisign_Time_Stamping_Authority_CA.crt error adding mozilla/Visa_International_Global_Root_2.crt error adding mozilla/Visa_eCommerce_Root.crt error adding mozilla/WellsSecure_Public_Root_Certificate_Authority.crt error adding mozilla/Wells_Fargo_Root_CA.crt error adding mozilla/XRamp_Global_CA_Root.crt error adding mozilla/beTRUSTed_Root_CA-Baltimore_Implementation.crt error adding mozilla/beTRUSTed_Root_CA.crt error adding mozilla/beTRUSTed_Root_CA_-_Entrust_Implementation.crt error adding mozilla/beTRUSTed_Root_CA_-_RSA_Implementation.crt error adding mozilla/thawte_Primary_Root_CA.crt error adding signet.pl/signet_ca1_pem.crt error adding signet.pl/signet_ca2_pem.crt error adding signet.pl/signet_ca3_pem.crt error adding signet.pl/signet_ocspklasa2_pem.crt error adding signet.pl/signet_ocspklasa3_pem.crt error adding signet.pl/signet_pca2_pem.crt error adding signet.pl/signet_pca3_pem.crt error adding signet.pl/signet_rootca_pem.crt error adding signet.pl/signet_tsa1_pem.crt error adding spi-inc.org/spi-ca-2003.crt error adding spi-inc.org/spi-cacert-2008.crt error adding telesec.de/deutsche-telekom-root-ca-2.crt failed (VM used: java-6-openjdk). dpkg: error processing ca-certificates-java (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: ca-certificates-java E: Sub-process /usr/bin/dpkg returned an error code (1) Update I also get a problem when running java -version: Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. My VPS had 128MB of Memory, I changed to 256MB but got the same problem. Then I changed to 512MB and got the same problem. I found a related post on a forum: Sub-process /usr/bin/dpkg returned an error code (1) And I tried: sudo apt-get clean sudo apt-get --reinstall install openjdk-6-jre sudo dpkg --configure -a But I got the same problem, even when I'm using 512MB of Memory. Any suggestions?

    Read the article

  • Parallelism in .NET – Part 17, Think Continuations, not Callbacks

    - by Reed
    In traditional asynchronous programming, we'd often use a callback to handle notification of a background task's completion.  The Task class in the Task Parallel Library introduces a cleaner alternative to the traditional callback: continuation tasks. Asynchronous programming methods typically required callback functions.  For example, MSDN's Asynchronous Delegates Programming Sample shows a class that factorizes a number.  The original method in the example has the following signature: public static bool Factorize(int number, ref int primefactor1, ref int primefactor2) { //... } However, calling this is quite "tricky", even if we modernize the sample to use lambda expressions via C# 3.0.  Normally, we could call this method like so: int primeFactor1 = 0; int primeFactor2 = 0; bool answer = Factorize(10298312, ref primeFactor1, ref primeFactor2); Console.WriteLine("{0}/{1} [Succeeded {2}]", primeFactor1, primeFactor2, answer); If we want to make this operation run in the background, and report to the console via a callback, things get trickier.  First, we need a delegate definition: public delegate bool AsyncFactorCaller( int number, ref int primefactor1, ref int primefactor2); Then we need to use BeginInvoke to run this method asynchronously: int primeFactor1 = 0; int primeFactor2 = 0; AsyncFactorCaller caller = new AsyncFactorCaller(Factorize); caller.BeginInvoke(10298312, ref primeFactor1, ref primeFactor2, result => { int factor1 = 0; int factor2 = 0; bool answer = caller.EndInvoke(ref factor1, ref factor2, result); Console.WriteLine("{0}/{1} [Succeeded {2}]", factor1, factor2, answer); }, null); This works, but is quite difficult to understand from a conceptual standpoint.  To combat this, the framework added the Event-based Asynchronous Pattern, but it isn't much easier to understand or author. Using .NET 4's new Task<T> class and a continuation, we can dramatically simplify the implementation of the above code, as well as make it much more understandable.  We do this via the Task.ContinueWith method.  This method will schedule a new Task upon completion of the original task, and provide the original Task (including its Result if it's a Task<T>) as an argument.  Using Task, we can eliminate the delegate, and rewrite this code like so: var background = Task.Factory.StartNew( () => { int primeFactor1 = 0; int primeFactor2 = 0; bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2); return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 }; }); background.ContinueWith(task => Console.WriteLine("{0}/{1} [Succeeded {2}]", task.Result.Factor1, task.Result.Factor2, task.Result.Result)); This is much simpler to understand, in my opinion.  Here, we're explicitly asking to start a new task, then continue the task with a resulting task.
In our case, our method used ref parameters (this was from the MSDN Sample), so there is a little bit of extra boiler plate involved, but the code is at least easy to understand. That being said, this isn’t dramatically shorter when compared with our C# 3 port of the MSDN code above.  However, if we were to extend our requirements a bit, we can start to see more advantages to the Task based approach.  For example, supposed we need to report the results in a user interface control instead of reporting it to the Console.  This would be a common operation, but now, we have to think about marshaling our calls back to the user interface.  This is probably going to require calling Control.Invoke or Dispatcher.Invoke within our callback, forcing us to specify a delegate within the delegate.  The maintainability and ease of understanding drops.  However, just as a standard Task can be created with a TaskScheduler that uses the UI synchronization context, so too can we continue a task with a specific context.  There are Task.ContinueWith method overloads which allow you to provide a TaskScheduler.  This means you can schedule the continuation to run on the UI thread, by simply doing: Task.Factory.StartNew( () => { int primeFactor1 = 0; int primeFactor2 = 0; bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2); return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 }; }).ContinueWith(task => textBox1.Text = string.Format("{0}/{1} [Succeeded {2}]", task.Result.Factor1, task.Result.Factor2, task.Result.Result), TaskScheduler.FromCurrentSynchronizationContext()); This is far more understandable than the alternative.  By using Task.ContinueWith in conjunction with TaskScheduler.FromCurrentSynchronizationContext(), we get a simple way to push any work onto a background thread, and update the user interface on the proper UI thread.  This technique works with Windows Presentation Foundation as well as Windows Forms, with no change in methodology.
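One related trick worth knowing, shown here as a sketch of my own (it assumes the same Factorize method from the MSDN sample and the usual System and System.Threading.Tasks using directives): ContinueWith also accepts TaskContinuationOptions, so success and failure can be routed to separate continuations instead of being checked inside a single callback.

    var work = Task.Factory.StartNew(() =>
    {
        int primeFactor1 = 0;
        int primeFactor2 = 0;
        bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2);
        return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 };
    });

    // Runs only if the antecedent task completed without throwing.
    work.ContinueWith(task =>
        Console.WriteLine("{0}/{1} [Succeeded {2}]",
            task.Result.Factor1, task.Result.Factor2, task.Result.Result),
        TaskContinuationOptions.OnlyOnRanToCompletion);

    // Runs only if the antecedent task faulted; reading task.Exception here
    // also marks the exception as observed.
    work.ContinueWith(task =>
        Console.WriteLine("Factorization failed: {0}", task.Exception.InnerException.Message),
        TaskContinuationOptions.OnlyOnFaulted);

The same options can be combined with the FromCurrentSynchronizationContext scheduler shown above when the success path needs to update the UI.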

    Read the article

  • Using the HTML5 &lt;input type=&quot;file&quot; multiple=&quot;multiple&quot;&gt; Tag in ASP.NET

    - by Rick Strahl
    Per HTML5 spec the <input type="file" /> tag allows for multiple files to be picked from a single File upload button. This is actually a very subtle change that's very useful as it makes it much easier to send multiple files to the server without using complex uploader controls. Please understand though, that even though you can send multiple files using the <input type="file" /> tag, the process of how those files are sent hasn't really changed - there's still no progress information or other hooks that allow you to automatically make for a nicer upload experience without additional libraries or code. For that you will still need some sort of library (I'll post an example in my next blog post using plUpload). All the new features allow for is to make it easier to select multiple images from disk in one operation. Where you might have required many file upload controls before to upload several files, one File control can potentially do the job. How it works To create a file input box that allows with multiple file support you can simply do:<form method="post" enctype="multipart/form-data"> <label>Upload Images:</label> <input type="file" multiple="multiple" name="File1" id="File1" accept="image/*" /> <hr /> <input type="submit" id="btnUpload" value="Upload Images" /> </form> Now when the file open dialog pops up - depending on the browser and whether the browser supports it - you can pick multiple files. Here I'm using Firefox using the thumbnail preview I can easily pick images to upload on a form: Note that I can select multiple images in the dialog all of which get stored in the file textbox. The UI for this can be different in some browsers. For example Chrome displays 3 files selected as text next to the Browse… button when I choose three rather than showing any files in the textbox. Most other browsers display the standard file input box and display the multiple filenames as a comma delimited list in the textbox. Note that you can also specify the accept attribute in the <input> tag, which specifies a mime-type to specify what type of content to allow.Here I'm only allowing images (image/*) and the browser complies by just showing me image files to display. Likewise I could use text/* for all text formats registered on the machine or text/xml to only show XML files (which would include xml,xst,xsd etc.). Capturing Files on the Server with ASP.NET When you upload files to an ASP.NET server there are a couple of things to be aware of. When multiple files are uploaded from a single file control, they are assigned the same name. In other words if I select 3 files to upload on the File1 control shown above I get three file form variables named File1. This means I can't easily retrieve files by their name:HttpPostedFileBase file = Request.Files["File1"]; because there will be multiple files for a given name. The above only selects the first file. Instead you can only reliably retrieve files by their index. Below is an example I use in app to capture a number of images uploaded and store them into a database using a business object and EF 4.2.for (int i = 0; i < Request.Files.Count; i++) { HttpPostedFileBase file = Request.Files[i]; if (file.ContentLength == 0) continue; if (file.ContentLength > App.Configuration.MaxImageUploadSize) { ErrorDisplay.ShowError("File " + file.FileName + " is too large. 
Max upload size is: " + App.Configuration.MaxImageUploadSize); return View("UploadClassic",model); } var image = new ClassifiedsBusiness.Image(); var ms = new MemoryStream(16498); file.InputStream.CopyTo(ms); image.Entered = DateTime.Now; image.EntryId = model.Entry.Id; image.ContentType = "image/jpeg"; image.ImageData = ms.ToArray(); ms.Seek(0, SeekOrigin.Begin); // resize image if necessary and turn into jpeg Bitmap bmp = Imaging.ResizeImage(ms.ToArray(), App.Configuration.MaxImageWidth, App.Configuration.MaxImageHeight); ms.Close(); ms = new MemoryStream(); bmp.Save(ms,ImageFormat.Jpeg); image.ImageData = ms.ToArray(); bmp.Dispose(); ms.Close(); model.Entry.Images.Add(image); } This works great and also allows you to capture input from multiple input controls if you are dealing with browsers that don't support multiple file selections in the file upload control. The important thing here is that I iterate over the files by index, rather than using a foreach loop over the Request.Files collection. The files collection returns key name strings, rather than the actual files (who thought that was good idea at Microsoft?), and so that isn't going to work since you end up getting multiple keys with the same name. Instead a plain for loop has to be used to loop over all files. Another Option in ASP.NET MVC If you're using ASP.NET MVC you can use the code above as well, but you have yet another option to capture multiple uploaded files by using a parameter for your post action method.public ActionResult Save(HttpPostedFileBase[] file1) { foreach (var file in file1) { if (file.ContentLength < 0) continue; // do something with the file }} Note that in order for this to work you have to specify each posted file variable individually in the parameter list. This works great if you have a single file upload to deal with. You can also pass this in addition to your main model to separate out a ViewModel and a set of uploaded files:public ActionResult Edit(EntryViewModel model,HttpPostedFileBase[] uploadedFile) You can also make the uploaded files part of the ViewModel itself - just make sure you use the appropriate naming for the variable name in the HTML document (since there's Html.FileFor() extension). Browser Support You knew this was coming, right? The feature is really nice, but unfortunately not supported universally yet. Once again Internet Explorer is the problem: No shipping version of Internet Explorer supports multiple file uploads. IE10 supposedly will, but even IE9 does not. All other major browsers - Chrome, Firefox, Safari and Opera - support multi-file uploads in their latest versions. So how can you handle this? If you need to provide multiple file uploads you can simply add multiple file selection boxes and let people either select multiple files with a single upload file box or use multiples. Alternately you can do some browser detection and if IE is used simply show the extra file upload boxes. It's not ideal, but either one of these approaches makes life easier for folks that use a decent browser and leaves you with a functional interface for those that don't. Here's a UI I recently built as an alternate uploader with multiple file upload buttons: I say this is my 'alternate' uploader - for my primary uploader I continue to use an add-in solution. Specifically I use plUpload and I'll discuss how that's implemented in my next post. 
Although I think that plUpload (and many of the other packaged JavaScript upload solutions) is a better choice, especially for large uploads, for simple one-file uploads input boxes work well enough. The advantage of this solution is that it's very easy to handle on the server side. Any of the JavaScript controls require special handling for uploads, which I'll also discuss in my next post.
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, ASP.NET, MVC
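To round out the ViewModel option mentioned above, here is a hedged sketch (the type and property names are mine, not from the original post, and it assumes the usual System.Linq, System.Web and System.Web.Mvc using directives) of binding the posted files directly onto the view model in ASP.NET MVC. The file input's name attribute has to match the property name.

    // Markup (name must match the view model property):
    // <input type="file" name="UploadedFiles" multiple="multiple" />

    public class EntryUploadViewModel
    {
        public string Title { get; set; }

        // MVC binds every posted file whose form name is "UploadedFiles"
        public IEnumerable<HttpPostedFileBase> UploadedFiles { get; set; }
    }

    [HttpPost]
    public ActionResult Edit(EntryUploadViewModel model)
    {
        foreach (var file in model.UploadedFiles ?? Enumerable.Empty<HttpPostedFileBase>())
        {
            if (file == null || file.ContentLength == 0)
                continue;   // browsers may post an empty entry for an unused input

            // do something with file.FileName / file.InputStream here
        }
        return RedirectToAction("Edit");
    }

This keeps the action signature tidy when the form carries other fields besides the files, at the cost of tying the markup's name attribute to the view model.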

    Read the article

  • Clean up after Visual Studio

    - by psheriff
    As programmer’s we know that if we create a temporary file during the running of our application we need to make sure it is removed when the application or process is complete. We do this, but why can’t Microsoft do it? Visual Studio leaves tons of temporary files all over your hard drive. This is why, over time, your computer loses hard disk space. This blog post will show you some of the most common places where these files are left and which ones you can safely delete..NET Left OversVisual Studio is a great development environment for creating applications quickly. However, it will leave a lot of miscellaneous files all over your hard drive. There are a few locations on your hard drive that you should be checking to see if there are left-over folders or files that you can delete. I have attempted to gather as much data as I can about the various versions of .NET and operating systems. Of course, your mileage may vary on the folders and files I list here. In fact, this problem is so prevalent that PDSA has created a Computer Cleaner specifically for the Visual Studio developer.  Instructions for downloading our PDSA Developer Utilities (of which Computer Cleaner is one) are at the end of this blog entry.Each version of Visual Studio will create “temporary” files in different folders. The problem is that the files created are not always “temporary”. Most of the time these files do not get cleaned up like they should. Let’s look at some of the folders that you should periodically review and delete files within these folders.Temporary ASP.NET FilesAs you create and run ASP.NET applications from Visual Studio temporary files are placed into the <sysdrive>:\Windows\Microsoft.NET\Framework[64]\<vernum>\Temporary ASP.NET Files folder. The folders and files under this folder can be removed with no harm to your development computer. Do not remove the "Temporary ASP.NET Files" folder itself, just the folders underneath this folder. If you use IIS for ASP.NET development, you may need to run the iisreset.exe utility from the command prompt prior to deleting any files/folder under this folder. IIS will sometimes keep files in use in this folder and iisreset will release the locks so the files/folders can be deleted.Website CacheThis folder is similar to the ASP.NET Temporary Files folder in that it contains files from ASP.NET applications run from Visual Studio. This folder is located in each users local settings folder. The location will be a little different on each operating system. For example on Windows Vista/Windows 7, the folder is located at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\WebsiteCache. If you are running Windows XP this folder is located at <sysdrive>:\ Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\WebsiteCache. Check these locations periodically and delete all files and folders under this directory.Visual Studio BackupThis backup folder is used by Visual Studio to store temporary files while you develop in Visual Studio. This folder never gets cleaned out, so you should periodically delete all files and folders under this directory. On Windows XP, this folder is located at <sysdrive>:\Documents and Settings\<UserName>\My Documents\Visual Studio 200[5|8]\Backup Files. On Windows Vista/Windows 7 this folder is located at <sysdrive>:\Users\<UserName>\Documents\Visual Studio 200[5|8]\.Assembly CacheNo, this is not the global assembly cache (GAC). 
It appears that this cache is only created when doing WPF or Silverlight development with Visual Studio 2008 or Visual Studio 2010. This folder is located in <sysdrive>:\ Users\<UserName>\AppData\Local\assembly\dl3 on Windows Vista/Windows 7. On Windows XP this folder is located at <sysdrive>:\ Documents and Settings\<UserName>\Local Settings\Application Data\assembly. If you have not done any WPF or Silverlight development, you may not find this particular folder on your machine.Project AssembliesThis is yet another folder where Visual Studio stores temporary files. You will find a folder for each project you have opened and worked on. This folder is located at <sysdrive>:\Documents and Settings\<UserName>Local Settings\Application Data\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies on Windows XP. On Microsoft Vista/Windows 7 you will find this folder at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies.Remember not all of these folders will appear on your particular machine. Which ones do show up will depend on what version of Visual Studio you are using, whether or not you are doing desktop or web development, and the operating system you are using.SummaryTaking the time to periodically clean up after Visual Studio will aid in keeping your computer running quickly and increase the space on your hard drive. Another place to make sure you are cleaning up is your TEMP folder. Check your OS settings for the location of your particular TEMP folder and be sure to delete any files in here that are not in use. I routinely clean up the files and folders described in this blog post and I find that I actually eliminate errors in Visual Studio and I increase my hard disk space.NEW! PDSA has just published a “pre-release” of our PDSA Developer Utilities at http://www.pdsa.com/DeveloperUtilities that contains a Computer Cleaner utility which will clean up the above-mentioned folders, as well as a lot of other miscellaneous folders that get Visual Studio build-up. You can download a free trial at http://www.pdsa.com/DeveloperUtilities. If you wish to purchase our utilities through the month of November, 2011 you can use the RSVP code: DUNOV11 to get them for only $39. This is $40 off the regular price.NOTE: You can download this article and many samples like the one shown in this blog entry at my website. http://www.pdsa.com/downloads. Select “Tips and Tricks”, then “Developer Machine Clean Up” from the drop down list.Good Luck with your Coding,Paul Sheriff** SPECIAL OFFER FOR MY BLOG READERS **We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!
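If you would rather script the cleanup than click through Explorer, the following C# sketch deletes the contents of the folders listed above. The paths are the defaults discussed in this post (adjust the framework and Visual Studio version numbers to your machine), and it is only an illustration of the idea - it is not PDSA's Computer Cleaner.

    using System;
    using System.IO;

    class DevFolderCleanup
    {
        static void Main()
        {
            string local = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
            string docs = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
            string win = Environment.GetFolderPath(Environment.SpecialFolder.Windows);

            // Folders discussed above - adjust version numbers for your setup.
            string[] folders =
            {
                Path.Combine(win, @"Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files"),
                Path.Combine(local, @"Microsoft\WebsiteCache"),
                Path.Combine(docs, @"Visual Studio 2008\Backup Files"),
                Path.Combine(local, @"Microsoft\Visual Studio\9.0\ProjectAssemblies")
            };

            foreach (string folder in folders)
            {
                if (!Directory.Exists(folder)) continue;

                // Delete the contents but keep the folder itself (per the notes above).
                foreach (string sub in Directory.GetDirectories(folder))
                {
                    try { Directory.Delete(sub, recursive: true); }
                    catch (Exception) { /* still locked (see the iisreset note) - skip it */ }
                }
                foreach (string file in Directory.GetFiles(folder))
                {
                    try { File.Delete(file); }
                    catch (Exception) { /* in use - skip it */ }
                }
            }
        }
    }

Running something like this after an iisreset and with Visual Studio closed avoids most of the "file in use" skips.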

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 2)

    - by Sadequl Hussain
    Continuing from Part 1, our Migration Checklist continues:

Step 5: Update statistics
It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database: sp_updatestats. The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database. However, a word of caution: running sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistic of every table in the database. You may therefore want to change the compatibility mode before you run the command. Another thing you should remember to do is to ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from SSMS.

Step 6: Set database options
You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or Management Studio since you can change the properties. However, this is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after restore. You may want to check the database's status programmatically in such cases. Another important option you may want to set for the newly restored / attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (the default for SQL Server 2005 and later databases), TORN_PAGE_DETECTION (the default when restoring a pre-SQL Server 2005 database) or NONE. Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates the checksum for a page's contents and writes it to the page header before storing it on disk. When the page is read from disk, the checksum is computed again and compared with the checksum stored in the header. Torn page detection works in much the same way: it stores a bit in the page header for every 512-byte sector. When data is read from the page, the torn page bits stored in the header are compared with the respective sector contents. When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header. This may not be something you would want to set unless there is a very specific reason. Microsoft suggests using the CHECKSUM page verify option as this offers more protection.

Step 7: Map database users to logins
A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters.
Depending on what you want to do, you may use fewer than four, though. The first parameter, @Action, can take three values. When you specify @Action = 'Report', the system will provide you with a list of database users which are not mapped to any login. If you want to map a database user to an existing SQL Server login, the value for @Action will be 'Update_One'. In this case, you will only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called "bob" and there is already a SQL Server login with the same name and you want to map the user to the login, you will execute a query like the following:

sp_change_users_login
    @Action = 'Update_One',
    @UserNamePattern = 'bob',
    @LoginName = 'bob'

If the login does not exist, you can instruct SQL Server to create the login with the same name. In this case you will need to provide a password for the login and the value of the @Action parameter will be 'Auto_Fix'. If the login already exists, it will be automatically mapped to the user account. Unfortunately, the sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server. You will need to follow a manual process to re-map the database user accounts. Continues…
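If you are scripting the post-migration steps rather than running them by hand, the commands above can be driven from any client. The following C# sketch is purely illustrative (the connection string and database name are placeholders, not part of the original checklist): it runs sp_updatestats against the migrated database and then uses sp_change_users_login with @Action = 'Report' to list any orphaned users.

using System;
using System.Data.SqlClient;

class PostMigrationChecks
{
    static void Main()
    {
        // Placeholder connection string: point it at the newly migrated database.
        string connStr = "Data Source=.;Initial Catalog=MigratedDb;Integrated Security=True";

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Step 5: refresh statistics on every user and system table.
            using (var cmd = new SqlCommand("EXEC sp_updatestats", conn))
            {
                cmd.CommandTimeout = 0;   // can take a while on large databases
                cmd.ExecuteNonQuery();
            }

            // Step 7: report database users that are not mapped to any login.
            using (var cmd = new SqlCommand("EXEC sp_change_users_login @Action = 'Report'", conn))
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("Orphaned user: " + reader.GetString(0));
                }
            }
        }
    }
}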

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I’ll-share-it-in-a-blog-post blog posts. I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID. During normal execution this will get altered when the package is executed from the Execute Package Task; however, we want to make sure that at deployment time they all have a default value of –1. Of course, they tend to get changed during development, so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option, but given that we have over 40 packages and we might want to carry out this reset fairly frequently I needed a more automated method, so I turned to Visual Studio’s Find/Replace… feature. Of course, we don’t know what value will be in that parameter so I can’t simply search for a particular value; hence I opted to use a regular expression to identify the value to be changed. In the rest of this blog post I’ll explain how to do that. For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>); if you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio:

<?xml version="1.0"?>
<DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">
  <DTS:PackageParameters>
    <DTS:PackageParameter
      DTS:CreationName=""
      DTS:DataType="3"
      DTS:Description="InterfaceHistory_ID: used for Lineage"
      DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
      DTS:ObjectName="ETLIfcHist_ID">
      <DTS:Property
        DTS:DataType="3"
        DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
    </DTS:PackageParameter>
    <DTS:PackageParameter
      DTS:CreationName=""
      DTS:DataType="3"
      DTS:Description="Some other description"
      DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"
      DTS:ObjectName="SomeOtherObjectName">
      <DTS:Property
        DTS:DataType="3"
        DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>
    </DTS:PackageParameter>
  </DTS:PackageParameters>
</DTS:Executable>

We are trying to identify the value of the parameter whose name is ETLIfcHist_ID – notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document:

{\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>}

I have highlighted the name of the parameter that we’re looking for.
I have also highlighted two portions identified by pairs of curly braces “{…}”; these are important because they pick out the two portions either side of the value I want to replace, in other words the portions highlighted here: <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter> Those sections in the curly braces are termed tag expressions and can be identified in the replace expression using a backslash and a number identifying which tag expression you’re referring to according to its ordinal position. Hence, our replace expression is simply: \1-1\2 We’re saying the portion of our file identified by the regular expression should be replaced by the first curly brace section, then the literal –1, then the second curly brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio’s Find and Replace dialog. If you set it to look in “All Open Documents” then you can open up the code-behind of all your packages and change all of them at once. The Find and Replace dialog will look like this: That’s it! I realise that not everyone will be looking to change the value of a parameter but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y’know, on the web I have no doubt that someone is going to find a fault with my find regex expression and if that person is you….that’s OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute) so your find regular expression may have to change accordingly. When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage @Jamiet
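If you would rather script the reset than drive it through the Find and Replace dialog, the same idea translates to .NET regular expressions. The C# sketch below is only an illustration: the project folder path is a placeholder, and the pattern is a simplified .NET-flavour equivalent of the Visual Studio expression above rather than the expression itself. It rewrites the ParameterValue of ETLIfcHist_ID to -1 in every .dtsx file in a folder.

using System;
using System.IO;
using System.Text.RegularExpressions;

class ResetPackageParameter
{
    static void Main()
    {
        // Folder containing the .dtsx files; this path is a placeholder.
        string projectFolder = @"C:\MySsisProject";

        // Group 1 captures everything up to and including the opening of the
        // ParameterValue property for the ETLIfcHist_ID parameter, group 2
        // captures the closing tag; the current value in between is replaced.
        var pattern = new Regex(
            "(<DTS:PackageParameter[^>]*DTS:ObjectName=\"ETLIfcHist_ID\"[^>]*>\\s*" +
            "<DTS:Property[^>]*DTS:Name=\"ParameterValue\">)[^<]*(</DTS:Property>)");

        foreach (string file in Directory.GetFiles(projectFolder, "*.dtsx"))
        {
            string xml = File.ReadAllText(file);
            string updated = pattern.Replace(xml, "${1}-1${2}");   // splice in the default of -1
            if (updated != xml)
            {
                File.WriteAllText(file, updated);
                Console.WriteLine("Reset ETLIfcHist_ID in " + Path.GetFileName(file));
            }
        }
    }
}

As with the Visual Studio expression, packages whose parameters carry extra attributes may need the pattern adjusted.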

    Read the article

  • C# 4.0: Covariance And Contravariance In Generics

    - by Paulo Morgado
    C# 4.0 (and .NET 4.0) introduced covariance and contravariance to generic interfaces and delegates. But what is this variance thing? According to Wikipedia, in multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometrical or physical entities changes when passing from one coordinate system to another.(*) But what does this have to do with C# or .NET? In type theory, a type T is greater than (≥) a type S if S is a subtype of (derives from) T, which means that there is a quantitative description for types in a type hierarchy. So, how do covariance and contravariance apply to C# (and .NET) generic types? In C# (and .NET), variance applies to generic type parameters and not to the resulting generic type. A generic type parameter is:

covariant if the ordering of the generic types follows the ordering of the generic type parameters: Generic<T> ≥ Generic<S> for T ≥ S.
contravariant if the ordering of the generic types is reversed from the ordering of the generic type parameters: Generic<T> ≤ Generic<S> for T ≥ S.
invariant if neither of the above applies.

If this definition is applied to arrays, we can see that arrays have always been covariant because this is valid code:

object[] objectArray = new string[] { "string 1", "string 2" };
objectArray[0] = "string 3";
objectArray[1] = new object();

However, when we try to run this code, the second assignment will throw an ArrayTypeMismatchException. Although the compiler was fooled into thinking this was valid code because an object is being assigned to an element of an array of object, at run time there is always a type check to guarantee that the runtime type of the definition of the elements of the array is greater than or equal to the instance being assigned to the element. In the above example, because the runtime type of the array is array of string, the first assignment of array elements is valid because string ≥ string, and the second is invalid because string ≥ object does not hold. This leads to the conclusion that, although arrays have always been covariant, they are not safely covariant – code that compiles is not guaranteed to run without errors. In C#, the way to define a generic type parameter as covariant is using the out generic modifier:

public interface IEnumerable<out T> { IEnumerator<T> GetEnumerator(); }
public interface IEnumerator<out T> { T Current { get; } bool MoveNext(); }

Notice the convenient use of the pre-existing out keyword. Besides the benefit of not having to remember a new hypothetical covariant keyword, out is easier to remember because it defines that the generic type parameter can only appear in output positions — read-only properties and method return values. In a similar way, the way to define a type parameter as contravariant is using the in generic modifier:

public interface IComparer<in T> { int Compare(T x, T y); }

Once again, the use of the pre-existing in keyword makes it easier to remember that the generic type parameter can only be used in input positions — write-only properties and non-ref, non-out method parameters. Because covariance and contravariance apply only to the generic type parameters, a generic type definition can have both covariant and contravariant generic type parameters in its definition:

public delegate TResult Func<in T, out TResult>(T arg);

A generic type parameter that is not marked covariant (out) or contravariant (in) is invariant.
All the types in the .NET Framework where variance could be applied to their generic type parameters have been modified to take advantage of this new feature. In summary, the rules for variance in C# (and .NET) are:

Variance in type parameters is restricted to generic interface and generic delegate types.
A generic interface or generic delegate type can have both covariant and contravariant type parameters.
Variance applies only to reference types; if you specify a value type for a variant type parameter, that type parameter is invariant for the resulting constructed type.
Variance does not apply to delegate combination. That is, given two delegates of types Action<Derived> and Action<Base>, you cannot combine the second delegate with the first although the result would be type safe. Variance allows the second delegate to be assigned to a variable of type Action<Derived>, but delegates can combine only if their types match exactly.

If you want to learn more about variance in C# (and .NET), you can always read:

Covariance and Contravariance in Generics — MSDN Library
Exact rules for variance validity — Eric Lippert
Events get a little overhaul in C# 4, Afterward: Effective Events — Chris Burrows

Note: Because variance is a feature of .NET 4.0 and not only of C# 4.0, all this also applies to Visual Basic 10.
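To see the rules in action, here is a small self-contained C# 4.0 example; the Animal/Dog types and the comparer are invented purely for illustration. Each of the three assignments marked in the comments compiles only because IEnumerable<out T>, IComparer<in T> and Func<in T, out TResult> carry the variance modifiers discussed above.

using System;
using System.Collections.Generic;

class Animal { public string Name = "generic animal"; }
class Dog : Animal { public Dog() { Name = "dog"; } }

class NameLengthComparer : IComparer<Animal>
{
    public int Compare(Animal x, Animal y)
    {
        return x.Name.Length.CompareTo(y.Name.Length);
    }
}

class VarianceDemo
{
    static void Main()
    {
        // Covariance: IEnumerable<out T> lets an IEnumerable<Dog> be used
        // where an IEnumerable<Animal> is expected (Dog derives from Animal).
        IEnumerable<Dog> dogs = new List<Dog> { new Dog(), new Dog() };
        IEnumerable<Animal> animals = dogs;          // compiles only because of 'out T'
        foreach (Animal a in animals) Console.WriteLine(a.Name);

        // Contravariance: IComparer<in T> lets a comparer written for Animal
        // be used where a comparer of Dog is expected.
        IComparer<Dog> dogComparer = new NameLengthComparer();
        Console.WriteLine(dogComparer.Compare(new Dog(), new Dog()));

        // Func<in T, out TResult> combines both: contravariant input,
        // covariant result, so Func<Animal, string> converts to Func<Dog, object>.
        Func<Animal, string> describeAnimal = x => "I am a " + x.Name;
        Func<Dog, object> describeDog = describeAnimal;
        Console.WriteLine(describeDog(new Dog()));
    }
}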

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.

1. My CI environment isn’t an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you’ve been allotted.

2. It’s not just about the storage requirements; it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I’m just not going to get the feedback quickly enough to react.

So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server’s point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool.
RESTORE DATABASE WidgetProduction_Virtual
FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
NORECOVERY, STATS=1, REPLACE
GO
RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, compare my 8Gb production database and its corresponding backup file with the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
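If the nightly "release test" is driven from code rather than a SQL Agent job, a small ADO.NET runner is enough to kick off the virtual restore. The following C# sketch is illustrative only: the connection string, the script path and the database name are assumptions, and the script file is expected to contain the RESTORE statements shown above without the GO separator (GO is a batch separator for SSMS, not T-SQL).

using System;
using System.Data.SqlClient;
using System.IO;

class NightlyVirtualRestore
{
    static void Main()
    {
        // Placeholder values: the instance must have SQL Virtual Restore
        // (HyperBac) installed, and the .sql file holds the restore script.
        string connStr = "Data Source=.;Initial Catalog=master;Integrated Security=True";
        string restoreScript = File.ReadAllText(@"C:\WidgetWF\ProdBackup\VirtualRestore.sql");

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            using (var cmd = new SqlCommand(restoreScript, conn))
            {
                cmd.CommandTimeout = 0;   // a restore can exceed the default 30 seconds
                cmd.ExecuteNonQuery();
            }

            // Quick sanity check before the upgrade scripts and validation tests run.
            using (var check = new SqlCommand(
                "SELECT state_desc FROM sys.databases WHERE name = 'WidgetProduction_Virtual'", conn))
            {
                Console.WriteLine("WidgetProduction_Virtual is " + (string)check.ExecuteScalar());
            }
        }
    }
}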

    Read the article

  • Oracle Linux and Oracle VM pricing guide

    - by wcoekaer
    A few days ago someone showed me a pricing guide from a Linux vendor and I was a bit surprised at the complexity of it, especially when you look at larger servers (4 or 8 sockets) and when adding virtual machine use into the mix. I think we have a very compelling and simple pricing model for both Oracle Linux and Oracle VM. Let me see if I can explain it in 1 page, not 10 pages. This pricing information is publicly available on the Oracle store; I am using the current public list prices. Also keep in mind that this is for customers using non-Oracle x86 servers. When a customer purchases an Oracle x86 server, the annual systems support includes full use (all you can eat) of Oracle Linux, Oracle VM and Oracle Solaris (no matter how many VMs you run on that server, in case you deploy guests on a hypervisor). This support level is the equivalent of premier support in the list below.

Let's start with Oracle VM (x86). Oracle VM support subscriptions are per physical server on which you deploy the Oracle VM Server product.

(1) Oracle VM Premier Limited - 1- or 2-socket server: $599 per server per year
(2) Oracle VM Premier - more than 2-socket server (4, or 8, or whatever more): $1199 per server per year

The above includes the use of Oracle VM Manager and Oracle Enterprise Manager Cloud Control's Virtualization management pack (including the self-service cloud portal, etc.), 24x7 support, and access to bugfixes, updates and new releases. It also includes all options: live migrate, dynamic resource scheduling, high availability, dynamic power management, etc. If you want to play with the product, or even use the product without access to support services, the product is freely downloadable from edelivery.

Next, Oracle Linux. Oracle Linux support subscriptions are per physical server. If you plan to run Oracle Linux as a guest on Oracle VM, VMware or Hyper-V, you only have to pay for a single subscription per system; we do not charge per guest or per number of guests. In other words, you can run any number of Oracle Linux guests per physical server and count it as just a single subscription.

(1) Oracle Linux Network Support - any number of sockets per server: $119 per server per year. Network support does not offer support services. It provides access to the Unbreakable Linux Network and also offers full indemnification for Oracle Linux.
(2) Oracle Linux Basic Limited Support - 1- or 2-socket servers: $499 per server per year. This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud Control for Linux OS management. It includes ocfs2 as a clustered filesystem.
(3) Oracle Linux Basic Support - more than 2-socket server (4, or 8 or more): $1199 per server per year. This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud Control for Linux OS management. It includes ocfs2 as a clustered filesystem.
(4) Oracle Linux Premier Limited Support - 1- or 2-socket servers: $1399 per server per year. This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud Control for Linux OS management, and XFS filesystem support. It also offers Oracle Lifetime Support, backporting of patches for critical customers in previous versions of packages, and ksplice zero-downtime updates.
(5) Oracle Linux Premier Support - more than 2-socket servers: $2299 per server per year. This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA and use of Oracle Enterprise Manager Cloud Control for Linux OS management, and XFS filesystem support. It also offers Oracle Lifetime Support, backporting of patches for critical customers in previous versions of packages, and ksplice zero-downtime updates.
(6) Freely available Oracle Linux - any number of sockets. You can freely download Oracle Linux, install it on any number of servers and use it for any reason, without support, without the right to use the extra features like Oracle Clusterware or ksplice, and without indemnification. However, you do have full access to all errata as well. Need support? Then use options (1)..(5).

So that's it. Count the number of 2-socket boxes and the number of more-than-2-socket boxes, decide on a basic or premier support level, and you are done. You don't have to worry about different levels based on how many virtual instances you deploy or want to deploy. A very simple menu of choices. We offer, inclusive: Linux OS clusterware, Linux OS management, provisioning and monitoring, cluster filesystem (ocfs), high performance filesystem (xfs), dtrace, ksplice, and ofed (the InfiniBand stack for high performance networking). No separate add-on menus.

NOTE: a socket/CPU can have any number of cores. So whether you have a 4, 6, 8, 10 or 12 core CPU doesn't matter; we count the number of physical CPUs.
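Purely as an illustration of how simple the per-server arithmetic is, here is a tiny C# sketch of the Oracle Linux portion of the model, using the list prices quoted above (current at the time of the post); the server counts in Main are made-up examples.

using System;

class OracleLinuxSupportEstimator
{
    // List prices per server per year, as quoted in the post above.
    const decimal BasicLimited   = 499m;   // Oracle Linux Basic, 1-2 sockets
    const decimal Basic          = 1199m;  // Oracle Linux Basic, more than 2 sockets
    const decimal PremierLimited = 1399m;  // Oracle Linux Premier, 1-2 sockets
    const decimal Premier        = 2299m;  // Oracle Linux Premier, more than 2 sockets

    static decimal Estimate(int twoSocketServers, int largerServers, bool premier)
    {
        // Pricing is per physical server; the number of guests per box never changes the count.
        decimal small = premier ? PremierLimited : BasicLimited;
        decimal large = premier ? Premier : Basic;
        return twoSocketServers * small + largerServers * large;
    }

    static void Main()
    {
        // Example: 10 two-socket boxes and 2 four-socket boxes on Premier support.
        Console.WriteLine(Estimate(10, 2, true));   // 10*1399 + 2*2299 = 18588
    }
}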

    Read the article
