Search Results

Search found 5702 results on 229 pages for 'operating procedures'.


  • PL/SQL exceptions and Java programs

    - by edwards
    Hi. Business logic is coded in PL/SQL packages, procedures, and functions, and Java programs call those packages, procedures, and functions to do database work. The issue is that the PL/SQL programs store exceptions in Oracle tables whenever an exception is raised. How would my Java programs get the exceptions, since each exception, instead of being propagated from PL/SQL to Java, is persisted to an Oracle table?
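
    One common pattern that addresses this (a hedged sketch, not the poster's actual code; the error_log table and both procedure names are hypothetical) is to log the exception in an autonomous transaction and then re-raise it, so the JDBC caller still receives a SQLException while the logged row survives:

        -- Log in an autonomous transaction so the error row survives
        -- the rollback triggered by the re-raised exception.
        CREATE OR REPLACE PROCEDURE log_error(p_msg IN VARCHAR2) IS
          PRAGMA AUTONOMOUS_TRANSACTION;
        BEGIN
          INSERT INTO error_log (logged_at, message)
          VALUES (SYSTIMESTAMP, p_msg);
          COMMIT;
        END;
        /

        CREATE OR REPLACE PROCEDURE do_work IS
        BEGIN
          NULL;  -- business logic goes here
        EXCEPTION
          WHEN OTHERS THEN
            log_error(SQLERRM);
            RAISE;  -- propagate to the Java caller instead of swallowing it
        END;
        /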

    Read the article

  • Global temporary tables getting data from different session in Oracle

    - by Omnipresent
    We have a stored procedure in Oracle that uses global temporary tables. In most of our other stored procedures, the first thing we do is delete data from the global temporary tables. However, in a few of the stored procedures we do not have the deletes. Are there any options other than adding the delete statements? Can something be done on the server side to forcefully delete data from those temporary tables when such a stored procedure is run?
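
    One server-side option worth noting (a minimal sketch): global temporary table data is already private to each session, so data "leaking" between runs usually means sessions are being reused by a connection pool while the table was declared with ON COMMIT PRESERVE ROWS. Declaring the table with ON COMMIT DELETE ROWS makes Oracle itself purge the rows at every commit, with no explicit DELETE needed:

        -- Rows vanish automatically at each COMMIT, per session,
        -- so stored procedures never see a previous run's data.
        CREATE GLOBAL TEMPORARY TABLE gtt_work (
          id  NUMBER,
          val VARCHAR2(100)
        ) ON COMMIT DELETE ROWS;

        -- By contrast, ON COMMIT PRESERVE ROWS keeps rows until the
        -- session ends, which is the behavior the poster is describing.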

    Read the article

  • Wake-On-LAN - is it possible to access the boot menu?

    - by truthseeker
    Imagine this situation: I have a computer with two operating systems installed, Windows XP and Windows 7. On boot, a menu is displayed where I can select which operating system I want to load. When I do nothing, Windows 7 loads (after some delay). This computer is connected to the LAN and has the Wake-On-LAN function enabled. Is it possible to remotely select which operating system I wish to load? Right now it is Windows 7, but sometimes I wish it were Windows XP.

    Read the article

  • In a virtual machine monitor such as VMware’s ESXi Server, how are shadow page tables implemented?

    - by ali01
    My understanding is that VMMs such as VMware's ESXi Server maintain shadow page tables to map virtual page addresses of guest operating systems directly to machine (hardware) addresses. I've been told that shadow page tables are then used directly by the processor's paging hardware to allow memory access in the VM to execute without translation overhead. I would like to understand a bit more about how the shadow page table mechanism works in a VMM. Is my high level understanding above correct? What kind of data structures are used in the implementation of shadow page tables? What is the flow of control from the guest operating system all the way to the hardware? How are memory access translations made for a guest operating system before its shadow page table is populated? How is page sharing supported? Short of straight up reading the source code of an open source VMM, what resources can I look into to learn more about hardware virtualization?

    Read the article

  • Handling Model Inheritance in ASP.NET MVC2

    - by enth
    I've gotten myself stuck on how to handle inheritance in my model when it comes to my controllers/views. Basic model:

        public class Procedure : Entity
        {
            public Procedure() { }
            public int Id { get; set; }
            public DateTime ProcedureDate { get; set; }
            public ProcedureType Type { get; set; }
        }

        public class ProcedureA : Procedure
        {
            public double VariableA { get; set; }
            public int VariableB { get; set; }
            public int Total { get; set; }
        }

        public class ProcedureB : Procedure
        {
            public int Score { get; set; }
        }

    ...and eventually many different kinds of procedures. So I do things like list all the procedures:

        public class ProcedureController : Controller
        {
            public virtual ActionResult List()
            {
                IEnumerable<Procedure> procedures = _repository.GetAll();
                return View(procedures);
            }
        }

    but now I'm stuck. Basically, from the list page I need to link to pages where the specific subclass details can be viewed/edited, and I'm not sure what the best strategy is. I thought I could add an action on the ProcedureController that would conjure up the right subclass by dynamically figuring out which repository to use and loading the subclass to pass to the view. I had to store the class name in the ProcedureType object, and I had to create/implement a non-generic IRepository since I can't dynamically cast to a generic one:

        public virtual ActionResult Details(int procedureID)
        {
            Procedure procedure = _repository.GetById(procedureID, false);
            string className = procedure.Type.Class;
            Type type = Type.GetType(className, true);
            Type repositoryType = typeof(IRepository<>).MakeGenericType(type);
            var repository = (IRepository)DependencyRegistrar.Resolve(repositoryType);
            Entity entity = repository.GetById(procedureID, false);
            return View(entity);
        }

    I haven't even started sorting out how the view is going to determine which partial to load to display the subclass details. Is this a good approach? It makes determining the URL easy, and it makes reusing the Procedure display code easy. Another approach is specific controllers for each subclass. That simplifies the controller code, but it also means many small controllers for the many procedure subclasses. The shared Procedure details could be worked out with a partial view, but then how do I construct the URL that reaches the right controller/action in the first place? Hopefully someone can show me the light. Thanks in advance.

    Read the article

  • Dual Boot issues with Windows 7 and Ubuntu

    - by Michael
    I'm finding myself in a rather unique situation. I've read through just about every resource I can find about this, and while things have helped me understand some background, I haven't yet been able to find a solution, so I'm asking here. I originally had just a Windows 7 64-bit OS installation on my desktop. Learning that I couldn't do anything with Apache, PHP and MySQL from within a 64-bit system, I did some research and found out that I could use Ubuntu. I've installed the latest version: 11.04. I created a CD to install Ubuntu from and the install went just fine; I installed it side-by-side with Windows 7, and I can boot into Ubuntu just fine through the dual-boot option. When I reboot to load Windows, though, the GRUB 2 list shows Windows 7 (loader), and when I select this option the Windows System Recovery loads instead of the actual OS. I haven't made it past there because I didn't know what to do; I just shut the computer down and rebooted into Ubuntu. I've been working for the last hour and a half to try to figure out how to boot into the Windows 7 OS and I haven't got a clue. While I'm somewhat proficient with Windows 7, I'm totally new to Ubuntu, so if you do know what needs to happen, please keep it simple enough that I'll be able to understand. Thanks for all your help in advance. Here are the results after using the Boot Info Script:

        Boot Info Script 0.55 dated February 15th, 2010

        ============================= Boot Info Summary: ==============================

        => Grub 2 is installed in the MBR of /dev/sda and looks on the same drive
           in partition #5 for cbh.
        => Windows is installed in the MBR of /dev/sdb
        => Grub 2 is installed in the MBR of /dev/mapper/pdc_bdadcfbdif and looks
           on the same drive in partition #5 for cbh.

        sda1: File system: ntfs
              Boot sector type: Windows Vista/7
              Boot sector info: No errors found in the Boot Parameter Block.
              Mounting failed: fuse: mount failed: Device or resource busy (x2)

        sda2: File system: ntfs
              Boot sector type: Windows Vista/7
              Boot sector info: No errors found in the Boot Parameter Block.
              Mounting failed: fuse: mount failed: Device or resource busy (x4)

        sda3: File system: ntfs
              Boot sector type: Windows Vista/7
              Boot sector info: No errors found in the Boot Parameter Block.
              Mounting failed: fuse: mount failed: Device or resource busy (x6)

        sdb1: File system: ntfs
              Boot sector type: Windows Vista/7
              Boot sector info: No errors found in the Boot Parameter Block.
              Operating System:
              Boot files/dirs: /bootmgr /Boot/BCD

        sdb2: File system: ntfs
              Boot sector type: Windows Vista/7
              Boot sector info: No errors found in the Boot Parameter Block.
              Operating System:
              Boot files/dirs:

        sdb3: File system: ntfs
              Boot sector type: Windows Vista/7
              Boot sector info: No errors found in the Boot Parameter Block.
              Operating System:
              Boot files/dirs: /bootmgr /boot/BCD

        sdb4: File system: Extended Partition
              Boot sector type: -
              Boot sector info:

        sdb5: File system: ext4
              Boot sector type: -
              Boot sector info:
              Operating System: Ubuntu 11.04
              Boot files/dirs: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

        sdb6: File system: swap
              Boot sector type: -
              Boot sector info:

        pdc_bdadcfbdif1: File system: ntfs
              Boot sector type: Windows Vista/7
              Boot sector info: No errors found in the Boot Parameter Block.
              Operating System:
              Boot files/dirs: /bootmgr /Boot/BCD

        pdc_bdadcfbdif2: File system: ntfs
              Boot sector type: Windows Vista/7
              Boot sector info: No errors found in the Boot Parameter Block.
              Operating System: Windows 7
              Boot files/dirs: /bootmgr /Boot/BCD /Windows/System32/winload.exe

        pdc_bdadcfbdif3: File system:
              Boot sector type: Unknown
              Boot sector info:
              Mounting failed: fuse: mount failed: Device or resource busy (x6)
              mount: unknown filesystem type ''

        =========================== Drive/Partition Info: =============================

        Drive: sda
        Disk /dev/sda: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes

        Partition    Boot    Start            End              Size             Id  System
        /dev/sda1    *       2,048            206,847          204,800          7   HPFS/NTFS
        /dev/sda2            206,911          1,440,372,735    1,440,165,825    7   HPFS/NTFS
        /dev/sda3            1,440,372,736    1,464,856,575    24,483,840       7   HPFS/NTFS

        Drive: sdb
        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes

        Partition    Boot    Start            End              Size             Id  System
        /dev/sdb1    *       2,048            206,847          204,800          7   HPFS/NTFS
        /dev/sdb2            206,911          1,342,554,688    1,342,347,778    7   HPFS/NTFS
        /dev/sdb3            1,930,344,448    1,953,521,663    23,177,216       7   HPFS/NTFS
        /dev/sdb4            1,342,556,158    1,930,344,447    587,788,290      5   Extended
        /dev/sdb5            1,342,556,160    1,896,806,399    554,250,240      83  Linux
        /dev/sdb6            1,896,808,448    1,930,344,447    33,536,000       82  Linux swap / Solaris

        Drive: pdc_bdadcfbdif
        Disk /dev/mapper/pdc_bdadcfbdif: 750.0 GB, 749999947776 bytes
        255 heads, 63 sectors/track, 91182 cylinders, total 1464843648 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes

        Partition                      Boot  Start            End              Size             Id  System
        /dev/mapper/pdc_bdadcfbdif1    *     2,048            206,847          204,800          7   HPFS/NTFS
        /dev/mapper/pdc_bdadcfbdif2          206,911          1,440,372,735    1,440,165,825    7   HPFS/NTFS
        /dev/mapper/pdc_bdadcfbdif3          1,440,372,736    1,464,856,575    24,483,840       7   HPFS/NTFS

        /dev/mapper/pdc_bdadcfbdif3 ends after the last sector of /dev/mapper/pdc_bdadcfbdif

        blkid -c /dev/null:
        Device                         UUID                                  TYPE  LABEL
        /dev/mapper/pdc_bdadcfbdif1    888E54CC8E54B482                      ntfs  SYSTEM
        /dev/mapper/pdc_bdadcfbdif2    C2766BF6766BEA1D                      ntfs  OS
        /dev/mapper/pdc_bdadcfbdif: PTTYPE="dos"
        /dev/sda1                      888E54CC8E54B482                      ntfs  SYSTEM
        /dev/sda2                      C2766BF6766BEA1D                      ntfs  OS
        /dev/sda3                      BE6CA31D6CA2CF87                      ntfs  HP_RECOVERY
        /dev/sda                       promise_fasttrack_raid_member
        /dev/sdb1                      20B65685B6565B7C                      ntfs  SYSTEM
        /dev/sdb2                      B4467A314679F508                      ntfs  HP
        /dev/sdb3                      6E10B7A410B77227                      ntfs  FACTORY_IMAGE
        /dev/sdb4: PTTYPE="dos"
        /dev/sdb5                      266f9801-cf4f-4acc-affa-2092be035f0c  ext4
        /dev/sdb6                      1df35749-a887-45ff-a3de-edd52239847d  swap
        /dev/sdb: PTTYPE="dos"
        error: /dev/mapper/pdc_bdadcfbdif3: No such file or directory
        error: /dev/sdc: No medium found
        error: /dev/sdd: No medium found
        error: /dev/sde: No medium found
        error: /dev/sdf: No medium found
        error: /dev/sdg: No medium found

        ============================ "mount | grep ^/dev" output: =====================
        Device     Mount_Point  Type  Options
        /dev/sdb5  /            ext4  (rw,errors=remount-ro,commit=0)

        =========================== sdb5/boot/grub/grub.cfg: ==========================
        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #
        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          set have_grubenv=true
          load_env
        fi
        set default="0"
        if [ "${prev_saved_entry}" ]; then
          set saved_entry="${prev_saved_entry}"
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi
        function savedefault {
          if [ -z "${boot_once}" ]; then
            saved_entry="${chosen}"
            save_env saved_entry
          fi
        }
        function recordfail {
          set recordfail=1
          if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
        }
        function load_video {
          insmod vbe
          insmod vga
          insmod video_bochs
          insmod video_cirrus
        }
        insmod part_msdos
        insmod ext2
        set root='(/dev/sdb,msdos5)'
        search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
        if loadfont /usr/share/grub/unicode.pf2 ; then
          set gfxmode=auto
          load_video
          insmod gfxterm
        fi
        terminal_output gfxterm
        insmod part_msdos
        insmod ext2
        set root='(/dev/sdb,msdos5)'
        search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
        set locale_dir=($root)/boot/grub/locale
        set lang=en_US
        insmod gettext
        if [ "${recordfail}" = 1 ]; then
          set timeout=-1
        else
          set timeout=10
        fi
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/05_debian_theme ###
        set menu_color_normal=white/black
        set menu_color_highlight=black/light-gray
        if background_color 44,0,30; then
          clear
        fi
        ### END /etc/grub.d/05_debian_theme ###

        ### BEGIN /etc/grub.d/10_linux ###
        if [ ${recordfail} != 1 ]; then
          if [ -e ${prefix}/gfxblacklist.txt ]; then
            if hwmatch ${prefix}/gfxblacklist.txt 3; then
              if [ ${match} = 0 ]; then
                set linux_gfx_mode=keep
              else
                set linux_gfx_mode=text
              fi
            else
              set linux_gfx_mode=text
            fi
          else
            set linux_gfx_mode=keep
          fi
        else
          set linux_gfx_mode=text
        fi
        export linux_gfx_mode
        if [ "$linux_gfx_mode" != "text" ]; then load_video; fi
        menuentry 'Ubuntu, with Linux 2.6.38-8-generic-pae' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          set gfxpayload=$linux_gfx_mode
          insmod part_msdos
          insmod ext2
          set root='(/dev/sdb,msdos5)'
          search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
          linux /boot/vmlinuz-2.6.38-8-generic-pae root=UUID=266f9801-cf4f-4acc-affa-2092be035f0c ro quiet splash vt.handoff=7
          initrd /boot/initrd.img-2.6.38-8-generic-pae
        }
        menuentry 'Ubuntu, with Linux 2.6.38-8-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
          recordfail
          set gfxpayload=$linux_gfx_mode
          insmod part_msdos
          insmod ext2
          set root='(/dev/sdb,msdos5)'
          search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
          echo 'Loading Linux 2.6.38-8-generic-pae ...'
          linux /boot/vmlinuz-2.6.38-8-generic-pae root=UUID=266f9801-cf4f-4acc-affa-2092be035f0c ro single
          echo 'Loading initial ramdisk ...'
          initrd /boot/initrd.img-2.6.38-8-generic-pae
        }
        ### END /etc/grub.d/10_linux ###

        ### BEGIN /etc/grub.d/20_linux_xen ###
        ### END /etc/grub.d/20_linux_xen ###

        ### BEGIN /etc/grub.d/20_memtest86+ ###
        menuentry "Memory test (memtest86+)" {
          insmod part_msdos
          insmod ext2
          set root='(/dev/sdb,msdos5)'
          search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
          linux16 /boot/memtest86+.bin
        }
        menuentry "Memory test (memtest86+, serial console 115200)" {
          insmod part_msdos
          insmod ext2
          set root='(/dev/sdb,msdos5)'
          search --no-floppy --fs-uuid --set=root 266f9801-cf4f-4acc-affa-2092be035f0c
          linux16 /boot/memtest86+.bin console=ttyS0,115200n8
        }
        ### END /etc/grub.d/20_memtest86+ ###

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry "Windows 7 (loader) (on /dev/sdb1)" --class windows --class os {
          insmod part_msdos
          insmod ntfs
          set root='(/dev/sdb,msdos1)'
          search --no-floppy --fs-uuid --set=root 20B65685B6565B7C
          chainloader +1
        }
        menuentry "Windows Recovery Environment (loader) (on /dev/sdb3)" --class windows --class os {
          insmod part_msdos
          insmod ntfs
          set root='(/dev/sdb,msdos3)'
          search --no-floppy --fs-uuid --set=root 6E10B7A410B77227
          drivemap -s (hd0) ${root}
          chainloader +1
        }
        ### END /etc/grub.d/30_os-prober ###

        ### BEGIN /etc/grub.d/40_custom ###
        # This file provides an easy way to add custom menu entries. Simply type the
        # menu entries you want to add after this comment. Be careful not to change
        # the 'exec tail' line above.
        ### END /etc/grub.d/40_custom ###

        ### BEGIN /etc/grub.d/41_custom ###
        if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi
        ### END /etc/grub.d/41_custom ###

        =============================== sdb5/etc/fstab: ===============================
        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sdb5 during installation
        UUID=266f9801-cf4f-4acc-affa-2092be035f0c / ext4 errors=remount-ro 0 1
        # swap was on /dev/sdb6 during installation
        UUID=1df35749-a887-45ff-a3de-edd52239847d none swap sw 0 0

        =================== sdb5: Location of files loaded by Grub: ===================
        900.1GB: boot/grub/core.img
        825.0GB: boot/grub/grub.cfg
        688.7GB: boot/initrd.img-2.6.38-8-generic-pae
        688.0GB: boot/vmlinuz-2.6.38-8-generic-pae
        688.7GB: initrd.img
        688.0GB: vmlinuz

        =========================== Unknown MBRs/Boot Sectors/etc =====================
        Unknown BootLoader on pdc_bdadcfbdif3

        =======Devices which don't seem to have a corresponding hard drive============
        sdc sdd sde sdf sdg

        =============================== StdErr Messages: ==============================
        ERROR: dos: partition address past end of RAID device
        hexdump: /dev/mapper/pdc_bdadcfbdif3: No such file or directory
        hexdump: /dev/mapper/pdc_bdadcfbdif3: No such file or directory
        ERROR: dos: partition address past end of RAID device

    Read the article

  • Oracle VM and JRockit Virtual Edition: Oracle Introduces Java Virtualization Solution for Oracle(R)

    - by adam.hawley
    Since the beginning, we've been talking to customers about how our approach to virtualization is different and more powerful than any other company's, because Oracle has the "full stack" of software (and even hardware these days!) to work with to create more comprehensive, more powerful solutions. Having the virtualization layer, two enterprise-class operating systems in Solaris and Enterprise Linux, and the leading enterprise software in nearly every layer of the data center stack allows us to not just do virtualization for virtualization's sake, but rather to provide complete virtualization solutions focused on making enterprise software easier to deploy, easier to manage, and easier to support through integration up and down the stack. Today, we announced the availability of a significant demonstration of that capability: a WebLogic Suite option that permits Oracle WebLogic Server 11g to run on a JVM (JRockit Virtual Edition) that itself runs directly on Oracle VM Server for x86/x64 without needing any operating system. Why would you want that? Better performance and better consolidation density, not to mention great security due to a lower "attack surface area". Oracle also announced the Oracle Virtual Assembly Builder product, which provides a framework for automatically capturing the configuration of existing software components and packaging them as self-contained building blocks known as appliances. So you know that complex application you've tweaked on your physical servers (or on other virtual environments, for that matter)? Virtual Assembly Builder will allow the automated collection of all the configuration data for the various application components that make up that multi-tier application, and then use the information to create and package each component as a virtual machine, so that the application can be deployed in your Oracle VM virtualization environment quickly and easily, just as it was configured in your original environment. A slick drag-and-drop GUI also serves as a powerful, intuitive interface for viewing and editing your assembly as needed. No one else can do complete virtualization solutions the way Oracle can, and I think these offerings show what's possible when you have the right resources for elegantly solving the larger problems in the data center, rather than just having to make do with tools that only operate at one layer of the stack. For more information, read the press release, including the links to more information on various Oracle websites.

    Read the article

  • On mobile is there a reason why processes are often short lived and must persist their state explicitly?

    - by Alexandre Jasmin
    Most mobile platforms (such as Android, iOS, Windows Phone 7, and I believe the new WinRT) can kill inactive application processes under memory pressure. To prevent this from affecting the user experience, applications are expected to save and restore their state as their process is killed and restarted. Having application processes killed in this way makes the developer's job harder. On various occasions I've seen a mobile app that would:

    - Return to the welcome screen each time I switch back to it.
    - Crash when I switch back to it (possibly accessing some state that no longer exists after the process was killed).
    - Misbehave when I switch back to it (sometimes requiring a restart or task killer to fix).
    - Otherwise misbehave in some hard-to-reproduce way (e.g. an Android service killed and restarted at the wrong time).

    I don't really understand why these mobile operating systems are designed to kill tasks in this way, especially since it makes application development more difficult and error prone. Desktop operating systems don't kill processes like that; they swap out unused pages of memory to mass storage. Is there a reason why the same approach isn't used on mobile? Mobile hardware is only a few years behind PC hardware in terms of performance. I'm sure there are very good reasons why mobile operating systems are designed this way. If you can point me to a paper or blog post that explains these reasons, or can give me some insight, I'd very much appreciate it.

    Read the article

  • Building the Ultimate SharePoint 2010 Development Environment

    - by Manesh Karunakaran
    It's been more than a month since SharePoint 2010 RTMed, and a lot of people have downloaded and set up their very own SharePoint 2010 development rigs. Quite a few people have written blogs about setting up good development environments; there is even an MSDN article on it. Two of the blogs worth noting are from MVPs Sahil Malik and Wictor Wilén; make sure that you check these out as well. Part of the bad side-effects of being a geek is the need to do the technical stuff the best way possible (pragmatic or otherwise), but the problem with this is that what is considered "best" is relative, which is precisely why you are reading this post now. Most of the posts that I read are outdated, need updating, use the wrong OSes or virtualization solutions (again, opinions vary), or use them the wrong way. Here's a developer's view of building the ultimate SharePoint 2010 development rig. If you are a sales guy, it's time to close this window.

    Confusion 1: Which host operating system and virtualization solution to use? This point has been beaten to death in numerous blog posts in the past; if you have time to invest, read this excellent post by our very own SharePoint Joel on the subject. But if you are planning to build the ultimate development rig, then Windows Server 2008 R2 with Hyper-V is the option you should be looking at. I have been using this as my primary OS for about 6-7 months now, and I haven't had any driver or application compatibility issues. In my experience all the Windows 7 drivers work fine with Windows Server 2008 R2 as well. You can enable Aero for eye candy (and the Windows 7 look and feel), and except for a few things like hibernation support (which can be enabled if you really want it), Windows Server 2008 R2 is the best workstation OS that I have used to date. But frankly, the answer to the question of which OS to use depends primarily on one question: are you willing to change your primary OS? If the answer is 'Yes', then Windows Server 2008 R2 with Hyper-V is the best option; if not, look at VMware or VirtualBox, both equally good. Those familiar with a Virtual PC background might prefer Sun VirtualBox. Besides, these provide support for running 64-bit guest machines on 32-bit hosts if the underlying hardware is truly 64-bit (see my earlier post on this). Since we are going to build the ultimate rig, we will use Windows Server 2008 R2 with Hyper-V, for the reasons mentioned above.

    Confusion 2: Should I use a multi-(virtual) server setup? A lot of people use multiple servers for their development environments - like Wictor Wilén is suggesting - one server hosting Active Directory, one hosting SharePoint Server, and another for SQL Server. True, this mimics the production environment most closely, but as somebody who has fallen for this setup earlier, I can tell you that you don't really gain anything by doing it. Microsoft has done well to ensure that if you can do it on one machine, you can do it in a farm environment as well. Besides, when you run multiple server-class machine instances in parallel, a lot of processor cycles are wasted to no good use. In my personal experience, as somebody who needs to switch between MOSS 2007/SharePoint 2010 environments from time to time, the best possible solution is to:

    - Make the host Windows Server 2008 R2 machine your domain controller (AD server).
    - Make all your virtual guest OSes join this domain.
    - Have each individual guest OS image carry its own local SQL Server instance.

    The advantages are that you can reuse the users and groups in each of the guest operating systems, you can manage the users in one place, AD is lightweight and doesn't take up too many resources on your host machine, and having separate SQL instances for each of the development images gives you maximum flexibility in terms of configuration; for example, your SharePoint rigs can have simpler DB configurations compared to your MS BI blast pits.

    Confusion 3: Which operating system should I use to run SharePoint 2010? Now that's a no-brainer: use Windows Server 2008 R2 as your guest OS. When you are building the ultimate rig, why compromise? If you are planning to run Windows Server 2008 as your guest OS instead, there are a few patches that you need to install at different times during the installation; for that, follow the steps mentioned here.

    Okay, now that we have made our choices, let's get to the interesting part of building the rig.

    Step 1: Prepare the host machine - install Windows Server 2008 R2. Install Windows Server 2008 R2 on your best desktop/laptop. If you have read this far, I am quite sure that you are somebody who can install an OS on your own, so go ahead and do that. Make sure that you run the compatibility wizard before you go ahead and nuke your current OS. There are plenty of blogs telling you how to make a good Windows Server 2008 R2 workstation that feels and behaves like a Windows 7 machine; follow one, and once you are done, head to Step 2.

    Step 2: Configure the host machine as a domain controller. Before we begin, let me tell you that this step is completely optional; you don't really need to do this, and you can simply use local users on the guest machines instead, but this is a much cleaner approach to managing users and groups if you run multiple guest operating systems. This post neatly explains how to configure your Windows Server 2008 R2 host machine as a domain controller. Follow those simple steps and you are good to go. If you are not able to get it to work, try this.

    Step 3: Prepare the guest machine - install Windows Server 2008 R2. Open Hyper-V Manager, choose to create a new guest operating system, allocate at least 2 GB of memory to the guest OS, choose the Windows Server 2008 R2 installation media, and start the virtual machine to commence installation. Once the installation is done, activate the OS.

    Step 4: Make the guest operating systems join the domain. This step is quite simple; just follow the steps below. Fire up Hyper-V Manager and open your guest OS. Click on Start, right-click on 'Computer' and choose 'Properties'. On the window that pops up, click on 'Change Settings'. On the 'System Properties' window that comes up, click the 'Change' button. Now a window named 'Computer Name/Domain Changes' opens up; in the text box titled Domain, type in the domain name from Step 2. Click OK, and Windows will show you the welcome-to-domain message and ask you to restart the machine; click OK to restart. If the addition to the domain fails, it means that you have not set up networking in Hyper-V for the guest OS to communicate with the host. To enable it, follow the steps I mentioned in this earlier post.

    Step 5: Install SQL Server 2008 R2 on the guest machine. SQL Server 2008 R2 installs without hassle on Windows Server 2008 R2. (SQL Server 2008 needs SP2 to work properly on Windows Server 2008 R2.) SQL Server 2008 R2 also allows you to add PowerPivot support directly to SharePoint. Choose to install in SharePoint Integrated Mode in the Reporting Server configuration.

    Step 6: Install KB971831 and the SharePoint 2010 prerequisites. Now install the WCF hotfix for Microsoft Windows (KB971831) from this location, and the SharePoint 2010 prerequisites from the SP2010 installation media.

    Step 7: Install and configure SharePoint 2010. Install SharePoint 2010 from the installation media; after the installation is complete, you are prompted to start the SharePoint Products and Technologies Configuration Wizard. If you are using a local instance of Microsoft SQL Server 2008, install Microsoft SQL Server 2008 KB 970315 x64 before starting the wizard. If your development environment uses a remote instance of Microsoft SQL Server 2008, or if it has a pre-existing installation of Microsoft SQL Server 2008 on which KB 970315 x64 has already been applied, this step is not necessary. With the wizard open, install SQL Server 2008 KB 970315 x64, and after that installation is finished, complete the wizard. Alternatively, you can choose not to run the wizard by clearing the SharePoint Products and Technologies Configuration Wizard check box and closing the completed-installation dialog box; then install SQL Server 2008 KB 970315 x64 and manually start the SharePoint Products and Technologies Configuration Wizard by opening a Command Prompt window and executing the following command: C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\psconfigui.exe. The SharePoint Products and Technologies Configuration Wizard may fail if you are using a computer that is joined to a domain but that is not connected to a domain controller.

    Step 8: Install Visual Studio 2010 and the SharePoint 2010 SDK. Install Visual Studio 2010, then download and install the Microsoft SharePoint 2010 SDK.

    Step 9: Install PowerPivot for SharePoint and configure Reporting Services. Pop in the SQL Server 2008 R2 installation media once again and install PowerPivot for SharePoint. This will get added as another instance named POWERPIVOT. Configure Reporting Services by following the steps mentioned here; if you need to get down to the details of how the integration between SharePoint 2010 and SQL Server 2008 R2 works, see Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, an excellent article by Alan Le Marquand.

    Step 10: Download and install the sample databases for Microsoft SQL Server 2008 R2. SharePoint 2010 comes with a lot of cool stuff like PerformancePoint Services and BCS; if you want to try these out, you need data in your databases. So if you want to save yourself the trouble of creating sample data for your PerformancePoint and BCS experiments, download and install the sample databases for Microsoft SQL Server 2008 R2 from CodePlex.

    And you are done! Fire up Visual Studio 2010 and start coding away!!

    Read the article

  • Beyond S&OP: Integrated Business Planning

    - by Paul Homchick
    In most corporations, planning is done at the department level — leaving disconnects and gaps across different departments. Finance sets revenue and profit goals with minimal validation from Manufacturing that the company has the resources, material, capacity, or demand to reach these goals. On the operations side, Manufacturing develops plans to balance demand and supply but seldom knows if the resulting "plan" will meet the budgets on which the company's revenue and profit goals are based. The Sales department agrees to quotas that meet Finance's revenue goals without a complete understanding of what Manufacturing can deliver. Integrated Business Planning (IBP) bridges these gaps in corporate planning systems by integrating the financial planning provided by EPM systems with the operations planning provided by Sales and Operations Planning solutions. This means that revenue goals and budgets are validated against a bottom-up operating plan, and that the operating plan is reconciled against financial goals. When detailed changes are made to the operations plan, planners can immediately see the big-picture impact of the changes. IBP also addresses one of the CFO's big concerns — the reliability of the revenue forecast. Operating plans are updated daily or weekly from a precise forecast based on current market conditions. These updated plans are then made available so that financial analysts are working with data that best represents what is going to happen — not what they projected would happen based on last quarter's data. For a discussion in more depth, see my article: Improve Reliability of Financial Forecasts with Integrated Business Planning in Supply & Demand Chain Executive magazine.

    Read the article

  • Is it possible to extend the partition on which I have installed Ubuntu?

    - by Santosh
    Before installing my current Ubuntu (Oneiric Ocelot) I had 3 partitions on my hard drive (each NTFS, divided into 21GB, 10GB and 37GB; approx 80GB total). I put the DVD in the DVD-ROM drive and booted from it. I was prompted whether to install Ubuntu alongside the current operating system (Windows 7) or to remove all operating systems entirely. I chose to install alongside the current operating system. On the next screen I took one of my partitions (the 37GB one) and formatted it completely. Then I created a 10GB ext4 file system on it (here 'it' means the 37GB partition, not the whole HDD). While proceeding I was asked to make a swap partition, so I created another swap partition of 5GB. I have Ubuntu installed, but now I am regretting that 10GB; I should have made it bigger, since I have much more space on my hard drive. And 22GB of my hard drive is now sitting useless (neither NTFS nor ext4). I want to increase the size of the ext4 partition on which I have installed Ubuntu, from 10GB to something more.

    Read the article

  • You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one

    - by B R Clouse
    Before we examine Consolidation, the next step in the journey to cloud, let's take a short detour to address a critical choice you will face at the outset of your journey: whether to deploy your databases in virtual machines or not. A common misconception we've encountered is the belief that moving to cloud computing can be accomplished by simply hosting one's current operating environment as-is within virtual machines, and then stacking those VMs together in a consolidated environment. This solution is often described as "Infrastructure as a Service" (IaaS) because the building block for deployments is a VM, which behaves like a full complement of infrastructure. This approach is easy to understand and may feel like a good first step, but it won't take your databases very far in the journey to cloud computing. In fact, if you follow the IaaS fork in the road, your journey will end quickly, without realizing the full benefits of cloud computing. The better option is to rationalize the deployment stack so that VMs are needed only for exceptional cases. By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now the building block will be database instances, or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as "Platform as a Service" (PaaS). PaaS opens the door to higher degrees of consolidation than IaaS, because with PaaS you will not need to accommodate the footprint (operating system, hypervisor, processes, ...) that each VM brings with it. You will also reduce your maintenance overhead if you move forward without the VMs and their OSes to patch and monitor. So while IaaS simply shuffles complex and varied environments into VMs, PaaS actually reduces complexity by rationalizing to the smallest possible set of components. Now we're ready to look at the consolidation options that PaaS provides -- in our next blog posting.

    Read the article

  • IDC Recommends Oracle Solaris 11

    - by user12611852
    IDC published a research report this week on Oracle Solaris 11 and described it as "Delivering unique value."  The report emphasizes the ability of Oracle Solaris to scale up and provide a mission critical platform for a wide variety of computing. Solaris built-in server and network virtualization helps to lower costs and enable consolidation while reducing administration costs and risks. Learn more about Oracle Solaris and the recently announced 11.1 update. In their conclusion, IDC reports: Today, Oracle is a multi-OS vendor that is adjusting to the opportunities presented by a significantly expanded product portfolio. The company has a long history of supporting Unix operating systems with its broad product portfolio, but the main difference is that now Oracle has direct control over the destiny of the Solaris operating system. The company has made a strong commitment to Solaris on both SPARC and x86 systems, as well as to Linux on x86 systems, and expects to continue to enhance Oracle Solaris 11 with update releases once a year as well as Solaris 12, which is already on the road map. Oracle is working to help its customers understand its strong commitment to Oracle Solaris and the product's role as a single operating system that runs on both SPARC and x86 processors. While Oracle Solaris and Oracle Linux are critical assets, the company's crown jewel is the deep collection of software that runs on top of both Oracle Solaris and Oracle Linux, software that creates a robust application environment. The continuing integration and optimization of the software and hardware stack is a differentiator for Oracle and for customers that run an Oracle Solaris stack.

    Read the article

  • Cannot boot after installing Ubuntu on Lenovo X120e

    - by tutysara
    I have installed Ubuntu 11.10 on my X120e using the amd64 alternate ISO image. The installation went fine, but the machine has issues booting after the successful installation; it says "No Operating System Found". I followed the instructions at help.ubuntu.com/community/X120e#Installation to purge grub-efi and install grub-pc, but even then I couldn't boot into Ubuntu (I got the same "No Operating System Found"). This is the file from boot-repair with that setup: http://pastebin.ubuntu.com/926556/ Then I asked in #ubuntu and they suggested that I create a bios-boot partition. I was not very comfortable with the solution they suggested, but gave it a try anyhow: I resized my initial partition, made 4 MB of free space at the beginning of the partition, and set the bios_grub flag. I re-installed Ubuntu 11.10, this time using the amd64 desktop ISO image. The installation went fine as before, but this time the system also didn't boot; it gave the same "No Operating System Found" message. In the BIOS I have the settings set to use both (Legacy and UEFI), with UEFI tried first. This is the boot-repair file from my latest setup: http://pastebin.ubuntu.com/926761 Any help/suggestions are appreciated.

    Read the article

  • Step by Step: Remove/Sideline Preinstalled Windows 8, Install Ubuntu

    - by user207562
    I want Ubuntu as my primary/only operating system, but the computer came with presumptive preinstalled Windows 8 (touch screen). Help. It's a new Dell Inspiron 15R, and I cannot install Ubuntu 12.04.3 or LTS. I need/request step-by-step instructions to remove or sideline Windows 8: which BIOS settings to have, what values, when to apply them, boot manager settings, UEFI, etc. This should be easy, but I am mired in the Herpes that is Microsoft. I end up with a presumed dual boot that will not access Ubuntu. Something like: Step 1: Turn on computer. Step 2: Press F# at the Dell prompt to change ... to ... I want to use Ubuntu as my primary operating system or my only operating system.

    Read the article

  • ODI 12c - Parallel Table Load

    - by David Allan
    In this post we will look at the ODI 12c capability of parallel table load from the perspectives of the mapping developer and the knowledge module developer - two quite different viewpoints. This is about parallel table loading, which isn't to be confused with loading multiple targets per se. It supports the ability for ODI mappings to be executed concurrently, especially if there is an overlap in the datastores they access, so that any temporary resources created can be uniquely constructed by ODI. Temporary objects can be anything, basically - common examples are staging tables, indexes, views, directories - anything in the ETL that helps the data integration flow do its job. In ODI 11g users found a few workarounds (such as changing the technology prefixes - see here) to build unique temporary names, but it was more of a challenge in error cases. ODI 12c mappings by default operate exactly as they did in ODI 11g with respect to these temporary names (this is also true for upgraded interfaces and scenarios) but can be configured to support the uniqueness capabilities. We will look at this feature from two aspects: that of a mapping developer and that of a developer (of procedures or KMs).

    1. Firstly, as a mapping developer...

    1.1 Control when uniqueness is enabled. A new property is available to set unique name generation on/off. When unique names have been enabled for a mapping, all temporary names used by the collection and integration objects will be generated using unique names. This property is presented as a check-box in the Property Inspector for a deployment specification.

    1.2 Handle cleanup after successful execution. Provided that all temporary objects that are created have a corresponding drop statement, all of the temporary objects should be removed during a successful execution. This should be the case with the KMs developed by Oracle.

    1.3 Handle cleanup after unsuccessful execution. If an execution failed in ODI 11g, temporary tables would have been left around and cleaned up in the subsequent run. In ODI 12c, KM tasks can now have a cleanup-type task which is executed even after a failure in the main tasks. These cleanup tasks will be executed even on failure if the property 'Remove Temporary Objects on Error' is set. If the agent were to crash and not be able to execute this task, there is an ODI tool (OdiRemoveTemporaryObjects, here) you can invoke to clean up the tables - it supports date ranges and the like. That's all there is to it from the aspect of the mapping developer; it's much, much simpler and more straightforward. You can now execute the same mapping concurrently, or execute many mappings using the same resource concurrently, without worrying about conflicts.

    2. Secondly, as a procedure or KM developer...

    In the ODI Operator the executed code shows the actual name that is generated - you can also see the runtime code prior to execution (introduced in 11.1.1.7). For example, in the code type I selected 'Pre-executed Code', which lets you see the code about to be processed; you can also see the executed code (which is the default view). References to the collection (C$) and integration (I$) names will be automatically made unique by using the odiRef APIs - these objects will have unique names whenever concurrency has been enabled for a particular mapping deployment specification. It's also possible to use name uniqueness functions in procedures and your own KMs.

    2.1 New uniqueness tags. You can also make your own temporary objects have unique names by explicitly including either %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG in the name passed to calls to the odiRef APIs. Such names will always include the unique tag regardless of the concurrency setting. To illustrate, let's look at the getObjectName() method. At <% expansion time, this API will append %UNIQUE_STEP_TAG to the object name for collection and integration tables. The name parameter passed to this API may contain %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. This API always generates to the <? version of getObjectName(). At execution time this API will replace the unique tag macros with a string that is unique to the current execution scope. The returned name will conform to the name-length restriction for the target technology and its pattern for the unique tag. Any necessary truncation will be performed against the initial name for the object and any other fixed text that may have been specified. Examples:

        <?=odiRef.getObjectName("L", "%COL_PRFEMP%UNIQUE_STEP_TAG", "D")?>
        SCOTT.C$_EABH7QI1BR1EQI3M76PG9SIMBQQ

        <?=odiRef.getObjectName("L", "EMP%UNIQUE_STEP_TAG_AE", "D")?>
        SCOTT.EMPAO96Q2JEKO0FTHQP77TMSAIOSR_

    Methods with this kind of support include getFrom, getTableName, getTable, getObjectShortName and getTemporaryIndex. There are also APIs for retrieving this tag info; the getInfo API has been extended with the following properties (the UNIQUE* properties can also be used in ODI procedures):

    - UNIQUE_STEP_TAG - Returns the unique value for the current step scope, e.g. 5rvmd8hOIy7OU2o1FhsF61. Note that this will be a different value for each loop iteration when the step is in a loop.
    - UNIQUE_SESSION_TAG - Returns the unique value for the current session scope, e.g. 6N38vXLrgjwUwT5MseHHY9.
    - IS_CONCURRENT - Returns info about the current mapping; will return 0 or 1 (only in the % phase).
    - GUID_SRC_SET - Returns the UUID for the current source set/execution unit (only in the % phase).

    The getPop API has been extended with the IS_CONCURRENT property, which returns info about a mapping; it will return 0 or 1.

    2.2 Additional APIs. Some new APIs are provided, including getFormattedName, which allows KM developers to construct a name from fixed text or ODI symbols, optionally truncated to a maximum length, using a specific encoding for the unique tag. It has the syntax getFormattedName(String pName[, String pTechnologyCode]) and is available in both the % and the ? phases. The format string can contain the ODI prefixes that are available for getObjectName(), e.g. %INT_PRF, %COL_PRF, %ERR_PRF, %IDX_PRF, along with %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. The latter tags will be expanded into a unique string according to the specified technology. Calls to this API within the same execution context are guaranteed to return the same unique name, provided that the same parameters are passed to the call. For example:

        <%=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")%>
        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")?>
        C$_MY_TAB7wDiBe80vBog1auacS1xB_AE

        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG.log", "FILE")?>
        C2_MY_TAB7wDiBe80vBog1auacS1xB.log

    2.3 Name length generation. As part of name generation, the length of the generated name is compared with the maximum length for the target technology, and truncation may need to be applied. When a unique tag is included in the generated string, it is important that uniqueness is not compromised by truncation of the unique tag. When a unique tag is NOT part of the generated name, the name will be truncated by removing characters from the end - this is the existing 11g algorithm. When a unique tag is included, the algorithm will first truncate the <postfix> and, if necessary, the <prefix>. It is recommended that users ensure there is sufficient uniqueness in the <prefix> section to guarantee uniqueness of the final resultant name.

    SUMMARY. To summarize, ODI 12c makes it much simpler to use mappings in concurrent cases, and provides APIs that help you develop procedures or custom knowledge modules in such a way that they can be used in highly concurrent, parallel scenarios.

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP lets us create memory-optimized tables, which in turn offer significant performance improvements for a typical OLTP workload. The main objective of a memory-optimized table is to ensure that highly transactional tables can live in memory, and remain in memory forever, without losing a single record. The most significant part is that it still supports the majority of Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking: In-Memory OLTP alleviates the issue of locking using a new type of multi-version optimistic concurrency control, and it substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember:

    - Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
    - Disk-based tables refer to the normal tables we have created in SQL Server since its inception. These tables use fixed-size 8 KB pages that are read from and written to disk as a unit.
    - Natively compiled stored procedures refer to a new object type, supported by the In-Memory OLTP engine, that is compiled to machine code and can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't reference any disk-based table.
    - Interpreted Transact-SQL stored procedures are what SQL Server has always used.
    - Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
    - Interop refers to interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP: The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.

    Creating databases: Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

        CREATE DATABASE InMemoryDB
        ON PRIMARY (NAME = [InMemoryDB_data],
            FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
            (NAME = [InMemoryDB_mod_dir],  FILENAME = 'S:\data\InMemoryDB_mod_dir'),
            (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (NAME = [SampleDB_log],
            FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
        COLLATE Latin1_General_100_BIN2;

    The example code above creates files on three different drives (D:, S: and R:) for the data files and in-memory storage, so if you would like to run this code, kindly change the drive and folder locations to suit your machine. (Note that the two containers in the memory-optimized filegroup need distinct logical names.) Also notice that a Windows (non-SQL) binary collation was specified: BIN2 collation is the only collation supported at this point for indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve that:

        ALTER DATABASE AdventureWorks2012
        ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012
        ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod')
        TO FILEGROUP hekaton_mod;
        GO

    Creating tables: There is no major syntactic difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.

    DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY): A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is persisted along with the schema.

    Indexing a memory-optimized table: A memory-optimized table must always have an index if it is created with DURABILITY = SCHEMA_AND_DATA, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

        CREATE TABLE Mem_Table
        (
            [Name] VARCHAR(32) NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
            [City] VARCHAR(32) NULL,
            [State_Province] VARCHAR(32) NULL,
            [LastModified] DATETIME NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    As you can see in the query above, we used the clause MEMORY_OPTIMIZED = ON to make sure it is treated as a memory-optimized table and not just a normal table, along with the DURABILITY clause SCHEMA_AND_DATA, which means it will persist data as well as metadata. You can also notice that this table has a PRIMARY KEY declared up front, which is mandatory for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage in memory-optimized tables, so stay tuned for that as well.
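
    The natively compiled stored procedures mentioned above are not shown in this article, so here is a hedged sketch of the SQL Server 2014 syntax (the procedure name is invented, and it targets the Mem_Table just defined); native compilation requires SCHEMABINDING, an EXECUTE AS clause, and an ATOMIC block:

        -- Sketch of a natively compiled procedure over dbo.Mem_Table.
        -- Only memory-optimized tables may be referenced inside it.
        CREATE PROCEDURE dbo.usp_TouchCity
            @Name VARCHAR(32),
            @City VARCHAR(32)
        WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
        AS
        BEGIN ATOMIC WITH
            (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
            -- The equality predicate seeks on the hash PRIMARY KEY; note that
            -- the database's BIN2 collation makes the match case-sensitive.
            UPDATE dbo.Mem_Table
            SET [City] = @City, [LastModified] = GETDATE()
            WHERE [Name] = @Name;
        END
        GO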
I will be using the database which I created earlier in this article, i.e. InMemoryDB, in the demo exercise below.

USE InMemoryDB
GO
-- Create a disk-based table
CREATE TABLE dbo.Disktable
(
    Id INT IDENTITY,
    Name CHAR(40)
)
GO
CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
GO
-- Create a memory-optimized table with a similar structure and DURABILITY = SCHEMA_AND_DATA
CREATE TABLE dbo.Memorytable_durable
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- Create another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
CREATE TABLE dbo.Memorytable_nondurable
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO
-- Now insert 100000 records into dbo.Disktable and observe the time taken
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
    INSERT INTO dbo.Disktable (Name) VALUES ('sachin' + CONVERT(VARCHAR, @i_t))
    SET @i_t += 1
END
GO
-- Do the same inserts for dbo.Memorytable_durable and observe the time taken
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
    INSERT INTO dbo.Memorytable_durable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
    SET @i_t += 1
END
GO
-- Finally, do the same inserts for dbo.Memorytable_nondurable and observe the time taken
DECLARE @i_t BIGINT
SET @i_t = 1
WHILE @i_t <= 100000
BEGIN
    INSERT INTO dbo.Memorytable_nondurable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
    SET @i_t += 1
END
GO

The three inserts above took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100,000 records on my machine with 8 GB of RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with DURABILITY = SCHEMA_ONLY is faster still, since it does not persist its data to disk at all. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now I leave the decision on using memory-optimized tables to you. I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP.

Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Configuring Multiple Instances of MySQL in Solaris 11

    - by rajeshr
Recently someone asked me for the steps to configure multiple instances of the MySQL database on an operating platform. Because of my familiarity with the Solaris OE, I prepared some notes on configuring multiple instances of MySQL on Solaris 11. Maybe it's useful for some. If you want to run the Solaris Operating System (or any other OS of your choice) as a virtualized instance on your desktop, consider using VirtualBox. To download the Solaris Operating System, click here. Once you have Solaris 11 up and running, with Internet connectivity to reach the Image Packaging System (IPS), follow the steps below to install MySQL and configure multiple instances:

1. Install the MySQL database in Solaris 11:
$ sudo pkg install mysql-51

2. Verify that MySQL is installed:
$ svcs -a | grep mysql
Note: The service FMRI will look similar to this one: svc:/application/database/mysql:version_51

3. Prepare the data file system for MySQL instance 1:
zfs create rpool/mysql
zfs create rpool/mysql/data
zfs set mountpoint=/mysql/data rpool/mysql/data

4. Prepare the data file system for MySQL instance 2:
zfs create rpool/mysql/data2
zfs set mountpoint=/mysql/data2 rpool/mysql/data2

5. Change the mysql/data property of the MySQL service (SMF) to point to /mysql/data:
$ svcprop mysql:version_51 | grep mysql/data
$ svccfg -s mysql:version_51 setprop mysql/data=/mysql/data

6. Create a new instance of MySQL 5.1:
(a) Copy the manifest of the default instance to a temporary directory:
$ sudo cp /lib/svc/manifest/application/database/mysql_51.xml /var/tmp/mysql_51_2.xml
(b) Make the appropriate modifications to the XML file:
$ sudo vi /var/tmp/mysql_51_2.xml
- Change the "instance name" section to a new value, "version_51_2"
- Change the value of the property named "data" to point to the ZFS file system "/mysql/data2"

7. Import the manifest into the SMF repository:
$ sudo svccfg import /var/tmp/mysql_51_2.xml

8. Before starting the services, copy the file /etc/mysql/my.cnf to each of the data directories, /mysql/data and /mysql/data2:
$ sudo cp /etc/mysql/my.cnf /mysql/data/
$ sudo cp /etc/mysql/my.cnf /mysql/data2/

9. Modify the my.cnf in each of the data directories as required:
$ sudo vi /mysql/data/my.cnf
Under the [client] section:
port=3306
socket=/tmp/mysql.sock
Under the [mysqld] section:
port=3306
socket=/tmp/mysql.sock
datadir=/mysql/data
server-id=1
$ sudo vi /mysql/data2/my.cnf
Under the [client] section:
port=3307
socket=/tmp/mysql2.sock
Under the [mysqld] section:
port=3307
socket=/tmp/mysql2.sock
datadir=/mysql/data2
server-id=2

10. Modify the MySQL startup script (managed by SMF) so that each instance points to its own my.cnf:
$ sudo vi /lib/svc/method/mysql_51
Note: Search for all occurrences of the mysqld_safe command and modify them to include the --defaults-file option. An example entry looks as follows:
${MySQLBIN}/mysqld_safe --defaults-file=${MYSQLDATA}/my.cnf --user=mysql --datadir=${MYSQLDATA} --pid-file=${PIDFILE}

11. Start the services:
$ sudo svcadm enable mysql:version_51_2
$ sudo svcadm enable mysql:version_51

12. Verify that the two services are running:
$ svcs mysql

13. Verify the processes:
$ ps -ef | grep mysqld

14. Connect to each mysqld instance and verify:
$ mysql --defaults-file=/mysql/data/my.cnf -u root -p
$ mysql --defaults-file=/mysql/data2/my.cnf -u root -p
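Once connected, a quick sanity check from within each session confirms which instance you are talking to. This is just a sketch; these are standard MySQL server variables, and the expected values follow from the my.cnf files above:

mysql> SHOW VARIABLES WHERE Variable_name IN ('port', 'socket', 'datadir', 'server_id');

The first instance should report port 3306, datadir /mysql/data and server_id 1; the second should report port 3307, datadir /mysql/data2 and server_id 2.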
Some references for Solaris 11 newbies:
Taking your first steps with Solaris 11
Introducing the basics of Image Packaging System
Service Management Facility How To Guide
For a detailed list of official educational modules available on Solaris 11, please visit here. For MySQL courses from Oracle University, access this page.

    Read the article

  • Curing the Database-Application mismatch

    - by Phil Factor
If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing. When a single database supports more than one application, then the problem gets more interesting.

I'll need to be more precise here. It is actually the application-interface definition of the database that needs to be in a compatible 'version'. Most databases that get into production have no separate application-interface; in other words they are 'close-coupled'. For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free. If you've spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten. A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping database and application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana.

I've been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being 'banned' from the base tables or routines of a database. There is no good technical reason for needing that sort of access that I've ever come across. Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: this is the application-interface. If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles.

The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We'd be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module.

If you have a stable defined application-interface with the database (yes, one for each application usually), you need to keep the external definitions of the components of this interface in version control, linked with the application source, and carefully track and negotiate any changes between database developers and application developers. Essentially, the application development team owns the interface definition, and the onus is on the database developers to implement it and maintain it, in conformance. Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.
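To make the idea concrete, here is a minimal sketch of such an interface. All names here are hypothetical; a real interface definition would be agreed with the application team and kept in version control alongside the application source:

-- Base tables live in a private schema the application never touches
CREATE SCHEMA internals;
GO
CREATE SCHEMA api;
GO
CREATE TABLE internals.Orders
(
    OrderID    INT IDENTITY PRIMARY KEY,
    CustomerID INT NOT NULL,
    OrderedAt  DATETIME NOT NULL,
    Status     TINYINT NOT NULL  -- 0 = open, 1 = shipped
);
GO
-- The application-interface: a view and a procedure are all the application sees
CREATE VIEW api.OpenOrders
AS
    SELECT OrderID, CustomerID, OrderedAt
    FROM internals.Orders
    WHERE Status = 0;
GO
CREATE PROCEDURE api.PlaceOrder
    @CustomerID INT
AS
    INSERT INTO internals.Orders (CustomerID, OrderedAt, Status)
    VALUES (@CustomerID, GETDATE(), 0);
GO
-- The application's role is granted access to the api schema and nothing else
CREATE ROLE app_access;
GRANT SELECT, EXECUTE ON SCHEMA::api TO app_access;

Everything under internals can then be refactored at will; the integration tests, and the application, 'hang' on the api schema alone.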
If the application interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance-testing can 'hang' on the same interface, since databases are judged on the performance of the application, not an 'internal' database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application.

Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface, and which hopefully don't need to be altered very often. If the application is rapidly changing its 'domain model' in the light of an increased understanding of the application domain, then it can change the interface definitions, and the database developers need only implement the new interface rather than refactor the underlying database. The test team will also have to redo the functional and integration tests, which are, of course, 'written to' the definition. The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface.

If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application interface is never broken. Testing is easy, since your automated scripts to test the interface do not need to change.

The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application interfaces should be within the application source, and changes to it have to be subject to change-control procedures, as they will require a chain of tests.

Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems. Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn't necessarily the case with the database. An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn't often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques. This process is like demanding an official walking with a red flag in front of a motor car. In order to closely coordinate databases with applications, entire databases have to be 'versioned', so that an application version can be matched with a database version to produce a working build without errors.
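One home-grown sketch of such a scheme (purely illustrative; nothing in SQL Server mandates it) is a version-stamp table that every deployment script updates, so that a build can assert compatibility before it runs:

CREATE TABLE dbo.SchemaVersion
(
    VersionID     INT IDENTITY PRIMARY KEY,
    VersionNumber VARCHAR(20) NOT NULL,  -- e.g. '2.7.0'
    AppliedAt     DATETIME NOT NULL DEFAULT GETDATE(),
    Description   VARCHAR(200) NULL
);
GO
-- Each deployment script stamps the release it belongs to
INSERT INTO dbo.SchemaVersion (VersionNumber, Description)
VALUES ('2.7.0', 'Release 2.7.0: interface contract unchanged');
GO
-- An application, or its installer, can refuse to run against a
-- database version it was never tested with
SELECT TOP (1) VersionNumber FROM dbo.SchemaVersion ORDER BY VersionID DESC;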
There is no natural process to 'version' databases, so each development project has to define its own system for maintaining the version level, along the lines sketched above. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments look to IT management like work. They see activity, and it looks good. Work means progress. Management then smile on the design choices made. In IT, good design work doesn't necessarily look good, and vice versa.

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In three posts we'll cover:
SQLU part 1 - What and why of database testing
SQLU part 2 - What and why of database refactoring
SQLU part 3 - Tools of the trade
This is the second part of the series, and in it we'll take a look at what database refactoring is and why to do it.

Why refactor a database
To know why to refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change a module's internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior by refactoring. If you haven't, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more point for having a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them, and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus.

Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black-box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white-box tests will break when we refactor.

What to refactor in a database
Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

Adding or removing database schema objects
Adding, removing or changing table columns in any way, adding constraints, keys, etc... All of these can be counted as internal changes not visible to the data consumer. But each of them carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn't exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered by black-box tests, we can make sure something like that does not happen (hopefully at all).

Changing physical structures
Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve an orders-of-magnitude performance improvement. Won't that make users happy? But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!

Fixing bad code
We all have some bad code in our systems. We usually refer to such code as a code smell, as it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc... Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it's informative. Refactoring this includes replacing the * with column names and, most likely, changes to the application using the database.
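To see the problem concretely, here is a small sketch; the table and view names are hypothetical:

CREATE TABLE dbo.Customer (Id INT, Name VARCHAR(50));
GO
CREATE VIEW dbo.vCustomer AS SELECT * FROM dbo.Customer;
GO
ALTER TABLE dbo.Customer ADD Email VARCHAR(100);
GO
-- The view's column list was captured when it was created,
-- so Email is missing from its output until the metadata is refreshed
SELECT * FROM dbo.vCustomer;
EXEC sys.sp_refreshview N'dbo.vCustomer';
GO
-- The actual refactoring: replace the * with an explicit column list
ALTER VIEW dbo.vCustomer AS SELECT Id, Name, Email FROM dbo.Customer;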
Breaking apart huge stored procedures
Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks away the will to live for the next 10 minutes. Such procedures are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database; that's the application's part. Refactoring such behemoths requires writing lots of edge-case tests for the stored procedure's input/output behavior, and then starting to refactor it. First we split the logic inside into smaller parts, like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This leaves the stored procedure with only common data-manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.

Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code; the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.

    Read the article

  • Introducing Oracle VM Server for SPARC

    - by Honglin Su
As you are watching Oracle's Virtualization Strategy Webcast and exploring the great virtualization offerings of the Oracle VM product line, I'd like to introduce Oracle VM Server for SPARC, a highly efficient, enterprise-class virtualization solution for Sun SPARC Enterprise systems with Chip Multithreading (CMT) technology. Oracle VM Server for SPARC, previously called Sun Logical Domains, leverages the built-in SPARC hypervisor to subdivide a supported platform's resources (CPUs, memory, network, and storage) by creating partitions called logical (or virtual) domains. Each logical domain can run an independent operating system. Oracle VM Server for SPARC provides the flexibility to deploy multiple Oracle Solaris operating systems simultaneously on a single platform. Oracle VM Server also allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by the CMT architecture. Oracle VM Server for SPARC integrates both the industry-leading CMT capability of the UltraSPARC T1, T2 and T2 Plus processors and the Oracle Solaris operating system. This combination helps to increase flexibility, isolate workload processing, and improve the potential for maximum server utilization. Oracle VM Server for SPARC delivers the following:
Leading Price/Performance - The low-overhead architecture provides scalable performance under increasing workloads without additional license cost. This enables you to meet the most aggressive price/performance requirements.
Advanced RAS - Each logical domain is an entirely independent virtual machine with its own OS. It supports virtual disk multipathing and failover, as well as faster network failover with link-based IP multipathing (IPMP) support. Moreover, it is fully integrated with the Solaris FMA (Fault Management Architecture), which enables predictive self-healing.
CPU Dynamic Resource Management (DRM) - Enable your resource management policy and domain workload to trigger the automatic addition and removal of CPUs. This ability helps you to better align with your IT and business priorities.
Enhanced Domain Migrations - Perform domain migrations interactively and non-interactively to bring more flexibility to the management of your virtualized environment. Improve active domain migration performance by compressing memory transfers and taking advantage of cryptographic acceleration hardware. These methods provide faster migration for load balancing, power saving, and planned maintenance.
Dynamic Crypto Control - Dynamically add and remove cryptographic units (aka MAU) to and from active domains. Also, migrate active domains that have cryptographic units.
Physical-to-virtual (P2V) Conversion - Quickly convert an existing SPARC server running the Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle Solaris 10 image. Use this image to facilitate OS migration into the virtualized environment.
Virtual I/O Dynamic Reconfiguration (DR) - Add and remove virtual I/O services and devices without needing to reboot the system.
CPU Power Management - Implement power saving by disabling each core on a Sun UltraSPARC T2 or T2 Plus processor that has all of its CPU threads idle.
Advanced Network Configuration - Configure the following network features to obtain more flexible network configurations, higher performance, and scalability: jumbo frames, VLANs, virtual switches for link aggregations, and network interface unit (NIU) hybrid I/O.
Official Certification Based On Real-World Testing - Use Oracle VM Server for SPARC with the most sophisticated enterprise workloads under real-world conditions, including Oracle Real Application Clusters (RAC).
Affordable, Full-Stack Enterprise Class Support - Obtain worldwide support from Oracle for the entire virtualization environment and workloads together. The support covers hardware, firmware, OS, virtualization, and the software stack.

SPARC Server Virtualization
Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform with the hard partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC provides another important feature, OS isolation, which gives you the flexibility to deploy multiple operating systems simultaneously on a single Sun SPARC T-Series server, with finer granularity for computing resources. For SPARC CMT processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor, and the scheduler is naturally built into the CPU for lower overhead and higher performance. Your organization can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Sun SPARC Enterprise systems with CMT technology to deliver a more agile, responsive, and low-cost environment.

Management with Oracle Enterprise Manager Ops Center
The Oracle Enterprise Manager Ops Center Virtualization Management Pack provides full lifecycle management of virtual guests, including Oracle VM Server for SPARC and Oracle Solaris Containers, and helps you streamline operations and reduce downtime. Together, the Virtualization Management Pack and the Ops Center Provisioning and Patch Automation Pack provide an end-to-end management solution for physical and virtual systems through a single web-based console. This solution automates the lifecycle management of physical and virtual systems and is the most effective systems management solution for Oracle's Sun infrastructure.

Ease of Deployment with the Configuration Assistant
The Oracle VM Server for SPARC Configuration Assistant can help you easily create logical domains. After gathering the configuration data, the Configuration Assistant determines the best way to create a deployment to suit your requirements. It is available as both a graphical user interface (GUI) and a terminal-based tool.

Oracle Solaris Cluster HA Support
The Oracle Solaris Cluster HA for Oracle VM Server for SPARC data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the Oracle VM Server guest domain service. In addition, applications that run on a logical domain, as well as its resources and dependencies, can be controlled and managed independently, as if they were running on a classical Solaris Cluster hardware node.

Supported Systems
Oracle VM Server for SPARC is supported on all Sun SPARC Enterprise systems with CMT technology.
UltraSPARC T2 Plus Systems
· Sun SPARC Enterprise T5140 Server
· Sun SPARC Enterprise T5240 Server
· Sun SPARC Enterprise T5440 Server
· Sun Netra T5440 Server
· Sun Blade T6340 Server Module
· Sun Netra T6340 Server Module
UltraSPARC T2 Systems
· Sun SPARC Enterprise T5120 Server
· Sun SPARC Enterprise T5220 Server
· Sun Netra T5220 Server
· Sun Blade T6320 Server Module
· Sun Netra CP3260 ATCA Blade Server
Note that UltraSPARC T1 systems are supported on earlier versions of the software. Sun SPARC Enterprise systems with CMT technology come with the right to use (RTU) Oracle VM Server, and the software is pre-installed. If you have the systems under warranty or with support, you can download the software and system firmware as well as their updates. Oracle Premier Support for Systems provides fully-integrated support for your server hardware, firmware, OS, and virtualization software. Visit oracle.com/support for information about Oracle's support offerings for Sun systems. For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.

    Read the article

  • Using the ASP.NET Membership API with SQL Server / SQL Azure: The new “System.Web.Providers” namespace

    - by Harish Ranganathan
The Membership API came in .NET 2.0 and was a huge enhancement for building web applications with users, roles, permissions, etc. The Membership API by default uses SQL Express, and until Visual Studio 2008 it was available only through the ASP.NET Configuration manager screen (Website – ASP.NET Configuration, or Project – ASP.NET Configuration); for every application, one had to manually visit this screen to start using the security and other settings. Upon doing that, the default SQL Express database aspnet.mdf is created to store all the user profiles.

Starting with Visual Studio 2010 and .NET 4.0, the default website template includes the Membership API controls as part of the page, i.e. when you create a “File – New – ASP.NET Web Application” or an “ASP.NET MVC Application”, the Login/Register controls are enabled by default in the MasterPage, and they are wired to the “ApplicationServices” setting in the web.config file, whose connection string points to the SQL Express database. In fact, when you run the default website, click “Logon” –> “Register”, enter the details for registration and click “Register”, that is when the aspnet.mdf file is created with the tables for Users, Roles, UsersInRoles, Profile, etc.

Now, this uses the default SQL Express database within the App_Data folder. If you want to move your Membership information to some other database, such as SQL Server, SQL CE or SQL Azure, you need to manually run the aspnet_regsql command and specify the destination database name. This creates all the tables, procedures and views required to handle the Membership information. Thereafter you can change the connection string for “ApplicationServices” to point to the database where you ran all the scripts.

Now, enter “System.Web.Providers” Alpha. This is available as part of the NuGet package library. Scott Hanselman has a neat post describing the steps required to get it up and running, as well as the basic changes, at http://www.hanselman.com/blog/IntroducingSystemWebProvidersASPNETUniversalProvidersForSessionMembershipRolesAndUserProfileOnSQLCompactAndSQLAzure.aspx. Pretty much, it covers what the new System.Web.Providers do.

One thing I wanted to clarify is that the new “System.Web.Providers” add a lot of new settings, which are also marked as the defaults, in the web.config. Even now, they use SQL Express as the default database. But if you change the connection string for “DefaultConnection” under connectionStrings to point to your SQL Server or SQL Azure, the Membership API is now able to create all the tables, procedures and views at the destination specified (i.e. SQL Server or SQL Azure). In my case, I modified the DefaultConnection to point to my SQL Azure database. Next, I hit F5 to run the application. The default view loaded. I clicked on “LogOn” and then “Register”, since I knew there were no tables/users yet. One thing to note is that I had put “NewDB” as the database name in the connection string that points to SQL Azure; NewDB didn't exist yet, and I assumed it would be created before the tables/views/procedures for Membership are created. Once I clicked “Register” to register my first username, it took a while, then registered the user and logged me in.
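At this point you can also confirm the setup from SQL Server Management Studio. A query along these lines (a sketch, using the table names the new providers create, and run while connected directly to NewDB, since USE is not supported on SQL Azure) lists the new objects:

SELECT name
FROM sys.tables
WHERE name IN ('Users', 'Roles', 'UsersInRoles', 'Profiles');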
Also, I went to the SQL Azure Management Portal and verified that “NewDB” had indeed just been created. I could also connect to the SQL Azure database “NewDB” from Management Studio and found that the tables no longer have the aspnet_ prefix; the tables are simply Users, Roles, UsersInRoles, Profiles, etc. So, with a few clicks and a configuration change, I could actually set up the user base for my application on SQL Azure, and even have the SessionState, Roles and Profiles stored in the SQL Azure database. The new System.Web.Providers also require the MARS setting (MultipleActiveResultSets=true), since they use Entity Framework for the DAL operations. Also, the “Project – ASP.NET Configuration” screen can still be used to create and manage users and roles, although the data is stored in the remote database. With that, a long-pending request from the community is fulfilled: the ability to configure and use remote databases for application user management, without having to run the aspnet_regsql scripts by hand. Cheers !!!

    Read the article

  • Grub 'Read Error' - Only Loads with LiveCD

    - by Ryan Sharp
Problem
After installing Ubuntu to complete my Windows 7/Ubuntu 12.04 dual-boot setup, Grub just wouldn't load at all unless I boot from the LiveCD. After that, everything works completely normally. However, this workaround isn't a solution, and I'd like to be able to boot without the aid of a disc.

Fdisk -l
Using the fdisk -l command, I am given the following:

Disk /dev/sda: 64.0 GB, 64023257088 bytes
255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x324971d1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048      206847      102400    7  HPFS/NTFS/exFAT
/dev/sda2          208896    48957439    24374272    7  HPFS/NTFS/exFAT
/dev/sda3   *    48959486   124067839    37554177    5  Extended
/dev/sda5        48959488   124067839    37554176   83  Linux

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc0ee6a69

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1      1024208894  1953523711   464657409    5  Extended
/dev/sdb3   *        2048  1024206847   512102400    7  HPFS/NTFS/exFAT
/dev/sdb5      1024208896  1937897471   456844288   83  Linux
/dev/sdb6      1937899520  1953523711     7812096   82  Linux swap / Solaris

Partition table entries are not in disk order

Disk /dev/sdc: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x292eee23

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048   625141759   312569856    7  HPFS/NTFS/exFAT

Bootinfoscript
I've used the BootInfoScript, and received the following output:

Boot Info Script 0.61 [1 April 2012]

============================= Boot Info Summary: ===============================

=> Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive.
=> Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive.
=> Windows is installed in the MBR of /dev/sdc.

sda1: __________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7: NTFS
Boot sector info: No errors found in the Boot Parameter Block.
Operating System:
Boot files: /bootmgr /Boot/BCD

sda2: __________________________________________________________________________
File system: ntfs
Boot sector type: Windows Vista/7: NTFS
Boot sector info: No errors found in the Boot Parameter Block.
Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda3: __________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: sda5: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.04.1 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb1: __________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sdb5: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Boot files: sdb6: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: sdb3: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: According to the info in the boot sector, sdb3 starts at sector 200744960. But according to the info from fdisk, sdb3 starts at sector 2048. According to the info in the boot sector, sdb3 has 823461887 sectors, but according to the info from fdisk, it has 1024204799 sectors. Operating System: Boot files: sdc1: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 64.0 GB, 64023257088 bytes 255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 2,048 206,847 204,800 7 NTFS / exFAT / HPFS /dev/sda2 208,896 48,957,439 48,748,544 7 NTFS / exFAT / HPFS /dev/sda3 * 48,959,486 124,067,839 75,108,354 5 Extended /dev/sda5 48,959,488 124,067,839 75,108,352 83 Linux Drive: sdb _____________________________________________________________________ Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 1,024,208,894 1,953,523,711 929,314,818 5 Extended /dev/sdb5 1,024,208,896 1,937,897,471 913,688,576 83 Linux /dev/sdb6 1,937,899,520 1,953,523,711 15,624,192 82 Linux swap / Solaris /dev/sdb3 * 2,048 1,024,206,847 1,024,204,800 7 NTFS / exFAT / HPFS Drive: sdc _____________________________________________________________________ Disk /dev/sdc: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdc1 2,048 625,141,759 625,139,712 7 NTFS / exFAT / HPFS "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/sda1 A48056DF8056B80E ntfs System Reserved /dev/sda2 A8C6D6A4C6D671D4 ntfs Windows /dev/sda5 fd71c537-3715-44e1-b1fe-07537e22b3dd 
ext4 /dev/sdb3 6373D03D0A3747A8 ntfs Steam /dev/sdb5 6f5a6eb3-a932-45aa-893e-045b57708270 ext4 /dev/sdb6 469848c8-867a-41b7-b0e1-b813a43c64af swap /dev/sdc1 725D7B961CF34B1B ntfs backup ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sda5 / ext4 (rw,noatime,nodiratime,discard,errors=remount-ro) /dev/sdb5 /home ext4 (rw) =========================== sda5/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd set locale_dir=($root)/boot/grub/locale set lang=en_GB insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux /boot/vmlinuz-3.2.0-29-generic root=UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-29-generic } menuentry 'Ubuntu, with Linux 3.2.0-29-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd echo 'Loading Linux 3.2.0-29-generic ...' linux /boot/vmlinuz-3.2.0-29-generic root=UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.2.0-29-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sda1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root A48056DF8056B80E chainloader +1 } menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root A8C6D6A4C6D671D4 chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda5/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda5 during installation UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd / ext4 noatime,nodiratime,discard,errors=remount-ro 0 1 # /home was on /dev/sdb5 during installation UUID=6f5a6eb3-a932-45aa-893e-045b57708270 /home ext4 defaults 0 2 # swap was on /dev/sdb6 during installation UUID=469848c8-867a-41b7-b0e1-b813a43c64af none swap sw 0 0 tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0 -------------------------------------------------------------------------------- =================== sda5: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) = boot/grub/core.img 1 = boot/grub/grub.cfg 1 = boot/initrd.img-3.2.0-29-generic 2 = boot/vmlinuz-3.2.0-29-generic 1 = initrd.img 2 = vmlinuz 1 ======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown BootLoader on sda3 00000000 63 6f 70 69 61 20 65 20 63 6f 6c 61 41 63 65 64 |copia e colaAced| 00000010 65 72 20 61 20 74 6f 64 6f 20 6f 20 74 65 78 74 |er a todo o text| 00000020 6f 20 66 61 6c 61 64 6f 20 75 74 69 6c 69 7a 61 |o falado utiliza| 00000030 6e 64 6f 20 61 20 63 6f 6e 76 65 72 73 c3 a3 6f |ndo a convers..o| 00000040 20 64 65 20 74 65 78 74 6f 20 70 61 72 61 20 76 | de texto para v| 00000050 6f 7a 4d 61 6e 69 70 75 6c 61 72 20 61 73 20 64 |ozManipular as d| 00000060 65 66 69 6e 69 c3 a7 c3 b5 65 73 20 71 75 65 20 |efini....es que | 00000070 63 6f 6e 74 72 6f 6c 61 6d 20 6f 20 61 63 65 73 |controlam o aces| 00000080 73 6f 20 64 65 20 57 65 62 73 69 74 65 73 20 61 |so de Websites a| 00000090 20 63 6f 6f 6b 69 65 73 2c 20 4a 61 76 61 53 63 | cookies, JavaSc| 000000a0 72 69 70 74 20 65 20 70 6c 75 67 2d 69 6e 73 4d |ript e plug-insM| 000000b0 61 6e 69 70 75 6c 61 72 20 61 73 20 64 65 66 69 |anipular as defi| 000000c0 6e 69 c3 a7 c3 b5 65 73 20 72 65 6c 61 63 69 6f |ni....es relacio| 000000d0 6e 61 64 61 73 20 63 6f 6d 20 70 72 69 76 61 63 |nadas com privac| 000000e0 69 64 61 64 65 41 63 65 64 65 72 20 61 6f 73 20 |idadeAceder aos | 000000f0 73 65 75 73 20 70 65 72 69 66 c3 a9 72 69 63 6f |seus perif..rico| 00000100 73 20 55 53 42 55 74 69 6c 69 7a 61 72 20 6f 20 |s USBUtilizar o | 00000110 73 65 75 20 6d 69 63 72 6f 66 6f 6e 65 55 74 69 |seu microfoneUti| 00000120 6c 69 7a 61 72 20 61 20 73 75 61 20 63 c3 a2 6d |lizar a sua c..m| 00000130 61 72 61 55 74 69 6c 69 7a 61 72 20 6f 20 73 65 |araUtilizar o se| 00000140 75 20 6d 69 63 72 6f 66 6f 6e 65 20 65 20 61 20 |u microfone e a | 00000150 63 c3 a2 6d 61 72 61 4e c3 a3 6f 20 66 6f 69 20 |c..maraN..o foi | 00000160 70 6f 73 73 c3 ad 76 65 6c 20 65 6e 63 6f 6e 74 |poss..vel encont| 00000170 72 61 72 20 6f 20 63 61 6d 69 6e 68 6f 20 61 62 |rar o caminho ab| 00000180 73 6f 6c 75 74 6f 20 70 61 72 61 20 6f 20 64 69 |soluto para o di| 00000190 72 65 63 74 c3 b3 72 69 6f 20 61 20 65 6d 70 61 |rect..rio a empa| 000001a0 63 6f 74 61 72 2e 4f 20 64 69 72 65 63 74 c3 b3 |cotar.O direct..| 000001b0 72 69 6f 20 64 65 20 65 6e 74 72 61 64 61 00 fe |rio de entrada..| 000001c0 ff ff 83 fe ff ff 02 00 00 00 00 10 7a 04 00 00 |............z...| 000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.| 00000200 =============================== StdErr Messages: =============================== xz: (stdin): Compressed data is corrupt xz: (stdin): Compressed data is corrupt awk: cmd. line:36: Math support is not compiled in awk: cmd. 
line:36: Math support is not compiled in
awk: cmd. line:36: Math support is not compiled in
awk: cmd. line:36: Math support is not compiled in
awk: cmd. line:36: Math support is not compiled in
awk: cmd. line:36: Math support is not compiled in

Begging / Appreciation ;)
If anything else is required to solve my problem, please ask. My only hopes are that I can solve this, and that doing so won't require reinstalling Grub (the procedures for that look complicated) or reinstalling the operating systems; I have already done the latter about six times since Friday due to several other issues I've encountered. Thank you, and good day.

System
Ubuntu 12.04 64-bit / Windows 7 SP1 64-bit
64 GB SSD as the boot/OS drive; 1 TB HDD as the /home, swap and Steam drive.

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >