Search Results

Search found 43978 results on 1760 pages for 'select case'.


  • Run Windows in Ubuntu with VMware Player

    - by Matthew Guay
Are you an enthusiast who loves their Ubuntu Linux experience but still needs to use Windows programs? Here’s how you can get the full Windows experience on Ubuntu with the free VMware Player. Linux has become increasingly consumer friendly, but still, the vast majority of commercial software is only available for Windows and Mac. Dual-booting between Windows and Linux has been a popular option for years, but it is a frustrating solution, since you have to reboot into the other operating system each time you want to run a specific application. With virtualization, you’ll never have to make this tradeoff. VMware Player makes it quick and easy to install any edition of Windows in a virtual machine. With VMware’s great integration tools, you can copy and paste between your Linux and Windows programs and even run native Windows applications side-by-side with Linux ones.

Getting Started
Download the latest version of VMware Player for Linux, and select either the 32-bit or 64-bit version, depending on your system. VMware Player is a free download, but requires registration. Sign in with your VMware account, or create a new one if you don’t already have one. VMware Player is fairly easy to install on Linux, but you will need to start the installation from the terminal. First, enter the following to make sure the installer is marked as executable, substituting the version and build number of the file you downloaded: chmod +x ./VMware-Player-version/build_number.bundle Then, enter the following to start the install, again substituting your version and build number: gksudo bash ./VMware-Player-version/build_number.bundle You may have to enter your administrator password to start the installation, and then the VMware Player graphical installer will open. Choose whether you want to check for product updates and submit usage data to VMware, and then proceed with the install as normal. VMware Player installed in only a few minutes in our tests, and was immediately ready to run, no reboot required. You can now launch it from your Ubuntu menu: click Applications \ System Tools \ VMware Player. You’ll need to accept the license agreement the first time you run it. Welcome to VMware Player! Now you can create new virtual machines and run pre-built ones on your Ubuntu desktop.

Install Windows in VMware Player on Ubuntu
Now that you’ve got VMware Player set up, it’s time to put it to work. Click Create a New Virtual Machine as above to start making a Windows virtual machine. In the dialog that opens, select the installer disk or ISO image file that you want to install Windows from. In this example, we’re selecting a Windows 7 ISO. VMware will automatically detect the operating system on the disk or image. Click Next to continue. Enter your Windows product key, select the edition of Windows to install, and enter your name and password. You can leave the product key field blank and enter it later; VMware will ask if you want to continue without a product key, so just click Yes to continue. Now enter a name for your virtual machine and select where you want to save it. Note: this will take up at least 15 GB of space on your hard drive during the install, so make sure to save it on a drive with sufficient storage space. You can choose how large you want your virtual hard drive to be; the default is 40 GB, but you can choose a different size if you wish.
The entire amount will not be used up on your hard drive initially, but the virtual drive will grow up to your maximum as you add files. Additionally, you can choose whether the virtual disk is stored as a single file or as multiple files. You will see the best performance by keeping the virtual disk as one file, but the virtual machine will be more portable if it is broken into smaller files, so choose the option that will work best for your needs. Finally, review your settings, and if everything looks good, click Finish to create the virtual machine. VMware will take over now and install Windows without any further input, using its Easy Install. This is one of VMware’s best features, and is the main reason we find it the easiest desktop virtualization solution to use.

Installing VMware Tools
VMware Player doesn’t include the VMware Tools by default; instead, it automatically downloads them for the operating system you’re installing. Once you’ve downloaded them, it will use those tools anytime you install that OS. If this is the first Windows virtual machine you’ve installed, you may be prompted to download and install them while Windows is installing. Click Download and Install so your Easy Install will finish successfully. VMware will then download and install the tools. You may need to enter your administrative password to complete the install. Other than this, you can leave your Windows install unattended; VMware will get everything installed and running on its own. Our test setup took about 30 minutes, and when it was done we were greeted with the Windows desktop ready to use, complete with drivers and the VMware Tools. The only thing missing was the Aero glass feature. VMware Player is supposed to support the Aero glass effects in virtual machines, and although this works every time when we use VMware Player on Windows, we could not get it to work in Linux. Other than that, Windows is fully ready to use. You can copy and paste text, images, or files between Ubuntu and Windows, or simply drag-and-drop files between the two.

Unity Mode
Using Windows in a window is awkward, and makes your Windows programs feel out of place and hard to use. This is where Unity mode comes in. Click Virtual Machine in VMware’s menu, and select Enter Unity. Your Windows desktop will now disappear, and you’ll see a new Windows menu underneath your Ubuntu menu. This works the same as your Windows Start menu, and you can open your Windows applications and files directly from it. By default, programs from Windows will have a colored border and a VMware badge in the corner. You can turn this off from the VMware settings pane: click Virtual Machine in VMware’s menu and select Virtual Machine Settings, then select Unity under the Options tab and uncheck the Show borders and Show badges boxes if you don’t want them. Unity makes your Windows programs feel at home in Ubuntu. Here we have Word 2010 and IE8 open beside the Ubuntu Help application. Notice that the Windows applications show up in the taskbar on the bottom just like the Linux programs. If you’re using the Compiz graphics effects in Ubuntu, your Windows programs will use them too, including the popular wobbly windows effect. You can switch back to running Windows inside VMware Player’s window by clicking the Exit Unity button in the VMware window. Now, whenever you want to run Windows applications in Linux, you can quickly launch them from VMware Player.

Conclusion
VMware Player is a great way to run Windows on your Linux computer.
It makes it extremely easy to get Windows installed and running, and lets you run your Windows programs seamlessly alongside your Linux ones. VMware products have worked great in our experience, and VMware Player on Linux was no exception. If you’re a Windows user and you’d like to run Ubuntu on Windows, check out our article on how to Run Ubuntu in Windows with VMware Player.

Links: Download VMware Player 3 (registration required) | Download Windows 7 Enterprise 90-day trial
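As a quick reference, here is the install sequence from the Getting Started section above condensed into a single terminal session. The bundle file name is illustrative; substitute the version and build number of the file you actually downloaded:

    # Mark the installer as executable (file name is hypothetical)
    chmod +x ./VMware-Player-3.0.0-203739.bundle
    # Launch the graphical installer with root privileges
    gksudo bash ./VMware-Player-3.0.0-203739.bundle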

    Read the article

  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn’t easily find simple PowerShell scripts on the web to help me deploy a number of virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image I created (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest using Windows PowerShell ISE), so I also wanted to fire the commands as soon as possible, in parallel, without losing the results in the console. In order to execute multiple commands in parallel, I used the Start-Job cmdlet, and using Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique allows me to reduce execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really be executed in parallel; however, only part of their execution time is subject to this lock, so you still obtain a better response time in these scenarios (this is the case when provisioning a new VM). Finally, when you remove the VMs you still have the disks containing the virtual machines to remove. This cannot be done just after the VM removal, because you have to wait until the removal operation has completed on Azure. So I wrote a script that you have to run a few minutes after VM removal; it deletes the disks (and VHDs) no longer related to a VM. I check that the disks were associated with the original image name used to provision the VMs (so I don’t remove other disks deployed by other batches that I might want to preserve). These examples are specific to my scenario; if you need more complex configurations you have to change and adapt the code. But if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts (the job pattern they all share is sketched right after this list):
- ProvisionVMs: provisions many VMs in parallel starting from the same image. It creates one service for each VM.
- RemoveVMs: removes all the VMs in parallel – it also removes the service created for each VM.
- StartVMs: starts all the VMs in parallel.
- StopVMs: stops all the VMs in parallel.
- RemoveOrphanDisks: removes all the disks no longer used by any VM. Run this script a few minutes after the RemoveVMs script.
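Stripped of the Azure-specific cmdlets, the parallel pattern all five scripts share looks like the following minimal sketch (the function name and the placeholder work are illustrative):

    function Invoke-ForVM( [string]$VmName ) {
        # Queue the work as a background job so the next call fires immediately
        Start-Job -ArgumentList $VmName {
            param($VmName)
            # Replace with the real work, e.g. Start-AzureVM -Name $VmName -ServiceName $VmName
            Write-Output "Processing $VmName"
        }
    }

    Invoke-ForVM "test10"
    Invoke-ForVM "test11"

    # Poll for running jobs, echoing output from completed ones while we wait
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Flush any remaining output and clean up
    Get-Job | Receive-Job
    Remove-Job *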
ProvisionVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"
    # Name of storage account (where VMs will be deployed)
    $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

    function ProvisionVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            $Location = "Copy the Location property you get from Get-AzureStorageAccount"
            $InstanceSize = "A5" # You can use any other instance, such as Large, A6, and so on
            $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
            $Password = "Password"      # Write the password of the administrator account in the new VM
            $Image = "Copy the ImageName property you get from Get-AzureVMImage"
            # You can list your own images using the following command:
            # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }
            New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
                Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
                New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
        }
    }

    # Set the proper storage - you might remove this line if you have only one storage account in the subscription
    Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list provisions one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    ProvisionVM "test10"
    ProvisionVM "test11"
    ProvisionVM "test12"
    ProvisionVM "test13"
    ProvisionVM "test14"
    ProvisionVM "test15"
    ProvisionVM "test16"
    ProvisionVM "test17"
    ProvisionVM "test18"
    ProvisionVM "test19"
    ProvisionVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup of jobs
    Remove-Job *

    # Displays batch completed
    echo "Provisioning VM Completed"

RemoveVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function RemoveVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Remove-AzureService -ServiceName $VmName -Force -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list removes one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    RemoveVM "test10"
    RemoveVM "test11"
    RemoveVM "test12"
    RemoveVM "test13"
    RemoveVM "test14"
    RemoveVM "test15"
    RemoveVM "test16"
    RemoveVM "test17"
    RemoveVM "test18"
    RemoveVM "test19"
    RemoveVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Remove VM Completed"

StartVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StartVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list starts one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    StartVM "test10"
    StartVM "test11"
    StartVM "test12"
    StartVM "test13"
    StartVM "test14"
    StartVM "test15"
    StartVM "test16"
    StartVM "test17"
    StartVM "test18"
    StartVM "test19"
    StartVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Start VM Completed"

StopVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StopVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list stops one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    StopVM "test10"
    StopVM "test11"
    StopVM "test12"
    StopVM "test13"
    StopVM "test14"
    StopVM "test15"
    StopVM "test16"
    StopVM "test17"
    StopVM "test18"
    StopVM "test19"
    StopVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Displays batch completed
    echo "Stop VM Completed"

RemoveOrphanDisks

    $ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
    # You can list your own images using the following command:
    # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

    # Remove all orphan disks coming from the image specified in $ImageName
    Get-AzureDisk |
        Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
        Remove-AzureDisk -DeleteVHD -Verbose

    Read the article

  • SQL SERVER – Information Related to DATETIME and DATETIME2

    - by pinaldave
I recently received an interesting comment on the blog regarding a workaround to overcome the precision issue while dealing with DATETIME and DATETIME2. I have written about this subject earlier here: SQL SERVER – Difference Between GETDATE and SYSDATETIME; SQL SERVER – Difference Between DATETIME and DATETIME2 – WITH GETDATE; SQL SERVER – Difference Between DATETIME and DATETIME2. SQL expert Jing Sheng Zhong left the following comment: The issue you found in SQL Server’s new datetime type is related to time-source function precision. Folks have found the root cause of the problem – when datetime values are converted (implicitly or explicitly) between different data types, some precision is lost, so the results do not match as expected. Here I would like to give a workaround to solve the problem which the developers met.

-- Declare and loop
DECLARE @Intveral INT, @CurDate DATETIMEOFFSET;
CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2, GlobalDate DATETIMEOFFSET)
SET @Intveral = 10000
WHILE (@Intveral > 0)
BEGIN
----SET @CurDate = SYSDATETIMEOFFSET(); -- higher precision for future use only
SET @CurDate = TODATETIMEOFFSET(GETDATE(),DATEDIFF(N,GETUTCDATE(),GETDATE())); -- lower precision to match existing date process
INSERT #TimeTable (FirstDate, LastDate, GlobalDate)
VALUES (@CurDate, @CurDate, @CurDate)
SET @Intveral = @Intveral - 1
END
GO
-- Distinct Values
SELECT COUNT(DISTINCT FirstDate) D_DATETIME,
       COUNT(DISTINCT LastDate) D_DATETIME2,
       COUNT(DISTINCT GlobalDate) D_SYSGETDATE
FROM #TimeTable
GO
-- Join
SELECT DISTINCT a.FirstDate, b.LastDate, b.GlobalDate,
       CAST(b.GlobalDate AS DATETIME) GlobalDateASDateTime
FROM #TimeTable a
INNER JOIN #TimeTable b ON a.FirstDate = CAST(b.GlobalDate AS DATETIME)
GO
-- Select
SELECT * FROM #TimeTable
GO
-- Clean up
DROP TABLE #TimeTable
GO

If you read my blog SQL SERVER – Difference Between DATETIME and DATETIME2 you will notice that I have achieved the same using GETDATE(). Are you using DATETIME2 in your production environment? If yes, I am interested to know the use case. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
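As a minimal illustration of the precision mismatch being worked around here (this snippet is not part of the original comment): DATETIME stores time to roughly 1/300th of a second, so a DATETIME2 value silently loses precision when converted, and an equality comparison between the two values can then fail.

DECLARE @d2 DATETIME2 = SYSDATETIME();
DECLARE @d1 DATETIME  = @d2;  -- implicit conversion rounds to 1/300th of a second
SELECT @d2 AS original_datetime2,
       @d1 AS converted_datetime,
       CASE WHEN @d1 = @d2 THEN 'equal' ELSE 'not equal' END AS comparison;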

    Read the article

  • SQL SERVER – Precision of SMALLDATETIME – A 1 Minute Precision

    - by pinaldave
I am myself surprised that I am writing this post today. I am going to present one of the well-known facts about the SQL Server SMALLDATETIME datatype. Even though this is a very well-known datatype, many a time I have seen developers get confused about the precision of SMALLDATETIME. The precision of the SMALLDATETIME datatype is 1 minute. It discards the seconds by rounding to the nearest minute: seconds from 00 to 29 round down, and seconds from 30 to 59 round up. Let us see the following example:

DECLARE @varSDate AS SMALLDATETIME
SET @varSDate = '1900-01-01 12:12:01'
SELECT @varSDate C_SDT
SET @varSDate = '1900-01-01 12:12:29'
SELECT @varSDate C_SDT
SET @varSDate = '1900-01-01 12:12:30'
SELECT @varSDate C_SDT
SET @varSDate = '1900-01-01 12:12:59'
SELECT @varSDate C_SDT

Following is the result of the above script; note that any seconds value between 0 (zero) and 59 is rounded up or down. The part that confuses developers is the value of the seconds in the display. I think if it is not maintained or recorded, it should not be displayed either. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Recreating OMS instances in an HA environment when instances on all nodes are lost

    - by rnigam
Oracle highly recommends deploying EM in an HA environment. The best practices for HA deployments, backup and housekeeping of your Enterprise Manager environment are documented in the Oracle Enterprise Manager Advanced Configuration Guide. It is imperative that there is a good disaster recovery plan in place for your EM deployment. In this post I want to talk about a customer who failed to do the correct planning and housekeeping for EM and landed in a situation where all the OMSes were nearly blown away, had we not jumped in to help. We recently hit an issue at a customer site where we had a two-node OMS setup of Enterprise Manager and a RAC database being used as the EM repository. An accidental delete of the OMS Oracle home left us with a single-node deployment. While we were trying to figure out a possible path to recover the first node, the second node was rebooted under a maintenance window. What followed was a complete site outage, as the Admin and managed servers would not start on either of the nodes.

In my situation there were:
- No backups of the Oracle homes from any node
- No OMS configuration snapshots (created using the “emctl exportconfig oms” command), and the instance home was completely lost on node 1, which also had the Admin Server

We did however have:
- A copy of the emkey.ora that I found under the OMS_ORACLE_HOME of the second node. (NOTE: it is a bad practice to have your emkey present under the OMS Oracle home directory on the same server as the OMS. The backup of the emkey should be maintained on some other server. In this case, however, it was a savior in my situation, since there were no backups.)
- The OMS Oracle home on the second node, though it was missing a number of files and had a number of changes made to the files in the home.

There were a number of attempts to start the server by modifying various files based on the WebLogic server logs to have at least one node up and running, but all of them failed. Here is how you can recover from this scenario. Follow these steps:

STEP 1: Check status of emkey.ora
Check whether the emkey is present in the EM repository or not. Run the following command:
$OMS_ORACLE_HOME/bin/emctl status emkey
If the output is something like the below, then you are good to go and the key is present in the repository:
./emctl status emkey
Oracle Enterprise Manager 11g Release 1 Grid Control
Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
The EMKey is configured properly.
Here are the messages that you might see as the emctl status emkey output, depending upon whether the EM Admin Server is up and whether the key is configured properly:
Case 1: AdminServer is up, emkey is proper in CredStore and not in repos. This is the same as the output of the command shown above: "The EMKey is configured properly."
Case 2: AdminServer is up, emkey is proper in CredStore and exists in repos: "The EMKey is configured properly, but is not secure. Secure the EMKey by running 'emctl config emkey -remove_from_repos'."
Case 3: (AdminServer is down or emkey is corrupted in CredStore) and (emkey exists in repos): "The EMKey exists in the Management Repository, but is not configured properly or is corrupted in the credential store. Configure the EMKey by running 'emctl config emkey -copy_to_credstore'."
Case 4: (AdminServer is down or emkey is corrupted in CredStore) and (emkey does not exist in repos): "The EMKey is not configured properly or is corrupted in the credential store and does not exist in the Management Repository."
To correct the problem:
1) Get the backed-up emkey.ora file.
2) Configure the emkey by running "emctl config emkey -copy_to_credstore_from_file".
If the key was not secured properly, it will have to be put back in the repository before proceeding; see STEP 2 for doing this. There may be cases (like mine) where running emctl gives errors like the following:
$OMS_ORACLE_HOME/bin/emctl status emkey
Exception in thread "Main Thread" java.lang.NoClassDefFoundError: oracle/security/pki/OracleWallet
at oracle.sysman.emctl.config.oms.EMKeyCmds.main (EMKeyCmds.java:658)
Just move to the next step to put the key back in the repository.

STEP 2: Put emkey.ora back in the repository
Skip this step if your emkey.ora is present in the repository. If not, you need to put the key back in the repository. See if you can run the following command (with sample output):
$OMS_ORACLE_HOME/bin/emctl config emkey -copy_to_repos
Oracle Enterprise Manager 11g Release 1 Grid Control
Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
The EMKey has been copied to the Management Repository. This operation will cause the EMKey to become unsecure. After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos".
Typically the key is present under the $OMS_ORACLE_HOME/sysman/config directory before being removed after the install, as a best practice. If you hit any errors while running emctl commands like the one mentioned in STEP 1, jump to STEP 3 and we will take care of the emkey.ora in STEP 7.

STEP 3: Get the port information
Check for the existing port information in the emd.properties file under the EM_INSTANCE_DIRECTORY (typically the gc_inst directory right above the Middleware home where you have deployed EM; e.g. /u01/app/oracle/product/gc_inst in case your OMS home is /u01/app/oracle/product/Middleware/oms11g). In my case I got the information from the emgc.properties present in the gc_inst on the second node. If you can run emctl, you may want to try the following command as well:
$OMS_ORACLE_HOME/bin/emctl status oms -details
Note this information, as it will be used in the next step.

STEP 4: Perform cleanup on Node 1
Note the Oracle homes of WebLogic and the OMS, get the list of applied patches in the homes (using the opatch lsinventory command), take a backup copy of the home just in case we need it, and then de-install/remove the Oracle homes, update the inventory and clean up processes on the first node.

STEP 5: Perform software-only installation of OMS on Node 1
Perform the WebLogic 10.3.2 installation exactly under the same location as in the earlier installation. Then perform a software-only installation of the OMS using the following command. This will not run any configuration assistants and bypasses all user interface validations:
runInstaller -noconfig -validationaswarnings
Select the "Additional OMS" option while performing the installation. Provide the same paths for the OMS and instance directories as in the previous installation. Use the port information collected in STEP 3 while performing the installation. Once the installation is complete, run the allroot.sh script to complete the binary deployment.

STEP 6: Apply one-off patches
At this point you can apply any patches that were previously applied to the OMS Oracle home. You only need to run opatch to install the patches in the home; you are not required to run the SQLs.

STEP 7: Copy the EM key
This step is only required if you were not able to use the emctl command to put the emkey back into the EM repository in STEP 2. Copy the emkey.ora file of the old installation into the $OMS_ORACLE_HOME/sysman/config directory of the newly installed OMS.

STEP 8: Configure the Grid Control domain
Run the following command to configure the EM domain and OMS. Note that you need to use a different GC domain name than what you used earlier. For example, I used GCDOMAIN11 as the new domain name when my previous domain name was GCDOMAIN:
$OMS_ORACLE_HOME/bin/omsca new -AS_USERNAME weblogic -EM_DOMAIN_NAME GCDOMAIN11 -NM_USER nodemanager -nostart
This command will prompt for a number of inputs, like Admin Server hostname, port, password, etc. Verify that the defaults shown are correct by pressing Enter, or provide a new value.

STEP 9: Run the add-on configuration assistant
After this step, run the following add-on configuration assistant. This was used in my case to configure the virtualization add-on:
$OMS_ORACLE_HOME/addonca -oui -omsonly -name vt -install gc

STEP 10: Start the OMS
Now start the OMS using:
$OMS_ORACLE_HOME/bin/emctl start oms
In a multi-node setup like mine you would have either a software load balancer or DNS round robin (using a virtual host name that resolves to one of multiple OMS hostnames) being used for load balancing. Secure the OMS against the SLB or DNS virtual hostname using the following:
$OMS_HOME/bin/emctl secure oms -host slb.example.com -secure_port 1159 -slb_port 1159 -slb_console_port 443

STEP 11: Configure the agent
From $AGENT_ORACLE_HOME/bin, run:
./agentca -f
At this point you should have your OMS on node 1 fully recovered. Clean up node 2 and use the normal Additional OMS installation process documented in the official installation guide to add the additional OMS on node 2.

Summary
It took us a little over two days to completely recover the environment, with some other non-EM-related issues hitting us along the way as well. In the end, a situation like this could have been completely avoided had the proper housekeeping and backup of the Enterprise Manager deployment been done in the first place. This is going to be a topic that we cover in the next post. In the meantime, please do refer to the Oracle Enterprise Manager Advanced Configuration Guide for planning your EM installation, backup and housekeeping procedures. This can be found here: http://download.oracle.com/docs/cd/E11857_01/index.htm

Thanks: This post would not have been possible without Raj Aggarwal, Prasad Chebrolu and Ravikumar Basa, who helped to recover the environment and provided all the support we needed.
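The housekeeping the summary refers to boils down to a few recurring steps, sketched below. The paths, hostnames and schedule are illustrative assumptions; see the Advanced Configuration Guide for the supported procedure:

    # Snapshot the OMS configuration (the "emctl exportconfig oms" command mentioned above)
    $OMS_ORACLE_HOME/bin/emctl exportconfig oms -dir /backup/em/config_snapshots
    # Keep a copy of the emkey off the OMS host
    scp $OMS_ORACLE_HOME/sysman/config/emkey.ora backuphost:/backup/em/keys/
    # Back up the OMS Oracle home periodically
    tar -czf /backup/em/oms11g_home_$(date +%Y%m%d).tar.gz -C /u01/app/oracle/product/Middleware oms11g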

    Read the article

  • Booting the redis server with no errors

    - by Tylër
Redis usually starts with the following warnings:
tyler@tyler-vortex:~/redis$ ./src/redis-server
[3690] 01 Dec 10:56:05 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
[3690] 01 Dec 10:56:05 # Unable to set the max number of files limit to 10032 (Operation not permitted), setting the max clients configuration to 992.

Other errors found:
tyler@tyler-vortex:~/redis$ sudo ./utils/install_server.sh
Welcome to the redis service installer
This script will help you easily set up a running redis server
Please select the redis port for this instance: [6379]
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf]
Selected default - /etc/redis/6379.conf
Please select the redis log file name [/var/log/redis_6379.log]
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379]
Selected default - /var/lib/redis/6379
Please select the redis executable path [/usr/local/bin/redis-server]
cat: ./redis.conf.tpl: No such file or directory
cat: ./redis_init_script.tpl: No such file or directory
ERROR: Could not write init script to /tmp/6379.conf. Aborting!

Furthermore, I would like to know how to configure it not to consume so much RAM. I followed the memory configuration page on the Redis site, but the "vm-*" settings do not exist in the redis.conf file: http://redis.io/topics/virtual-memory
Do you have to create them?

Edit: I installed it. After that, I believe I no longer have access via ./src/redis-server, because this happens:
tyler@tyler-vortex:~$ cd redis/
tyler@tyler-vortex:~/redis$ ./src/redis-server
[2616] 01 Dec 22:29:30 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
[2616] 01 Dec 22:29:30 # Opening port 6379: bind: Address already in use
But there's another detail: redis now starts with the system.
redis 127.0.0.1:6379> exit
tyler@tyler-vortex:~/redis$ ./src/redis-cli
redis 127.0.0.1:6379> exit
...but how can I get back the setup I had before installing via the .sh script?
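A minimal sketch of the settings in question, under the assumption that the vm-* directives from the linked page exist only in older Redis builds and that newer builds cap memory with maxmemory instead (values are illustrative):

    # Older Redis (around 2.0-2.4) - virtual memory directives had to be added by hand:
    vm-enabled yes
    vm-max-memory 67108864        # bytes of values to keep in RAM before swapping
    # Newer Redis - the VM feature was removed; cap memory use directly:
    maxmemory 100mb
    maxmemory-policy allkeys-lru  # evict least-recently-used keys at the limit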

    Read the article

  • Entity Framework v1 – Tips and Tricks Part 3

    - by Rohit Gupta
General Tips on Entity Framework v1 & LINQ to Entities:

ToTraceString()
If you need to know the underlying SQL that EF generates for a LINQ to Entities query, use the ToTraceString() method of the ObjectQuery class (or use LINQPAD). Note that you need to cast the LINQ to Entities query to ObjectQuery before calling ToTraceString(), as follows:

string efSQL = ((ObjectQuery)from c in ctx.Contact
                             where c.Address.Any(a => a.CountryRegion == "US")
                             select c.ContactID).ToTraceString();

MARS or MultipleActiveResultSets
When you create an EDM model (EDMX file) from the database using Visual Studio, it generates a connection string with the same name as the EntityContainer in the CSDL. In the connection string so generated, it sets the MultipleActiveResultSets attribute to true by default. So if you are running the following query, it streams multiple readers over the same connection:

using (BAEntities context = new BAEntities())
{
    var cons = from con in context.Contacts
               where con.FirstName == "Jose"
               select con;
    foreach (var c in cons)
    {
        if (c.AddDate < new System.DateTime(2007, 1, 1))
        {
            c.Addresses.Load();
        }
    }
}

Explicitly opening and closing EntityConnection
When you call ToList() or foreach on a LINQ to Entities query, EF automatically closes the connection after all the records from the query have been consumed. Thus, if you need to run many LINQ to Entities queries over the same connection, explicitly open and close the connection as follows:

using (BAEntities context = new BAEntities())
{
    context.Connection.Open();
    var cons = from con in context.Contacts
               where con.FirstName == "Jose"
               select con;
    var conList = cons.ToList();
    var allCustomers = from con in context.Contacts.OfType<Customer>()
                       select con;
    var allcustList = allCustomers.ToList();
    context.Connection.Close();
}

Dispose ObjectContext only if required
If you retrieve entities using the ObjectContext and are not explicitly disposing of the ObjectContext, ensure that your code consumes all the records from the LINQ to Entities query by calling .ToList() or a foreach statement; otherwise the database connection will remain open until the garbage collector gets around to disposing of the ObjectContext. Secondly, if you are making updates to the entities retrieved using LINQ to Entities, ensure that you don't inadvertently dispose of the ObjectContext after the entities are retrieved and before calling .SaveChanges(), since you need the SAME ObjectContext to keep track of changes made to the entities (by using ObjectStateEntry objects). So if you do need to explicitly dispose of the ObjectContext, do so only after calling SaveChanges(), and only if you don't need to change-track the entities retrieved any further.

SQL injection attacks under control with EFv1
LINQ to Entities and LINQ to SQL queries are parameterized before they are sent to the DB, hence they are not vulnerable to SQL injection attacks. Entity SQL may be slightly vulnerable to attacks, since it does not use parameterized queries. However, Entity SQL demands that the query be valid Entity SQL syntax and valid native SQL syntax at the same time.
So the only way one can perform a SQL injection attack is by knowing the SSDL of the EDM model, being able to write correct Entity SQL (note that one cannot append regular SQL, since then the query won't be valid Entity SQL syntax), and appending it to a parameter.

Improving performance
You can convert the EntitySets and AssociationSets in an EDM model into precompiled views using the edmgen utility. For example, the Customer entity can be converted into a precompiled view using edmgen, and all LINQ to Entities queries against the context.Customer EntitySet will use the precompiled view instead of the EntitySet itself (the same being true for relationships – the EntityReference and EntityCollection properties of an entity). The advantage is that performance will be much better when using precompiled views. The syntax for generating precompiled views for an existing EF project is:

edmgen /mode:ViewGeneration /inssdl:BAModel.ssdl /incsdl:BAModel.csdl /inmsl:BAModel.msl /p:Chap14.csproj

Note that this only generates precompiled views for EntitySets and Associations, not for existing LINQ to Entities queries in the project (for those, use CompiledQuery.Compile<>). Secondly, if you have a LINQ to Entities query that you need to run multiple times, you should precompile the query using the CompiledQuery.Compile method. The CompiledQuery.Compile<> method accepts a lambda expression as a parameter, which denotes the LINQ to Entities query that you need to precompile. The following is an example of a lambda that we can pass into the CompiledQuery.Compile() method:

Expression<Func<BAEntities, string, IQueryable<Customer>>> expr = (BAEntities ctx1, string loc) =>
    from c in ctx1.Contacts.OfType<Customer>()
    where c.Reservations.Any(r => r.Trip.Destination.DestinationName == loc)
    select c;

Then we call the compiled query as follows:

var query = CompiledQuery.Compile<BAEntities, string, IQueryable<Customer>>(expr);

using (BAEntities ctx = new BAEntities())
{
    var loc = "Malta";
    IQueryable<Customer> custs = query.Invoke(ctx, loc);
    var custlist = custs.ToList();
    foreach (var item in custlist)
    {
        Console.WriteLine(item.FullName);
    }
}

Note that if you created an ObjectQuery or an Entity SQL query instead of the LINQ to Entities query, you don't need precompilation. For example:

// An example of an Entity SQL query:
string esql = "SELECT VALUE c from Contacts AS c where c is of(BAGA.Customer) and c.LastName = 'Gupta'";
ObjectQuery<Customer> custs = CreateQuery<Customer>(esql);

// An example of an ObjectQuery built using query builder methods:
from c in Contacts.OfType<Customer>().Where("it.LastName == 'Gupta'")
select c

This is because the query plan is cached, and thus performance improves a bit; however, since the ObjectQuery or Entity SQL query still needs to materialize the results into entities, it takes the same performance hit as LINQ to Entities. Note, though, that not ALL Entity SQL based or query-builder based ObjectQuery plans are cached, so if you are in doubt, always create a LINQ to Entities compiled query and use that instead.

GetObjectStateEntry versus GetObjectByKey
We can get to the entity referenced by an ObjectStateEntry via its Entity property, and there are helper methods on the ObjectStateManager (osm.TryGetObjectStateEntry) to get the ObjectStateEntry for an entity (for which we know the EntityKey).
Similarly, the ObjectContext has helper methods to get an entity, i.e. TryGetObjectByKey(). TryGetObjectByKey() uses the GetObjectStateEntry method under the covers to find the object; however, one important difference between these two methods is that TryGetObjectByKey queries the database if it is unable to find the object in the context, whereas TryGetObjectStateEntry only looks in the context for existing entries and will not make a trip to the database.

POCO objects with EFv1
To create POCO objects that can be used with EFv1, we need to implement 3 key interfaces:
- IEntityWithKey
- IEntityWithRelationships
- IEntityWithChangeTracker
Implementing IEntityWithKey is not mandatory, but if you don't, you need to explicitly provide values for the EntityKey for various functions (e.g. the functions needed to implement IEntityWithChangeTracker and IEntityWithRelationships). Implementing IEntityWithKey involves exposing a property named EntityKey which returns an EntityKey object. Implementing IEntityWithChangeTracker involves implementing a method named SetChangeTracker, since there can be multiple change trackers (ObjectContexts) existing in memory at the same time:

public void SetChangeTracker(IEntityChangeTracker changeTracker)
{
    _changeTracker = changeTracker;
}

Additionally, each property in the POCO object needs to notify the change tracker (ObjectContext) that it is updating itself, by calling the EntityMemberChanged and EntityMemberChanging methods on the change tracker. For example:

public EntityKey EntityKey
{
    get { return _entityKey; }
    set
    {
        if (_changeTracker != null)
        {
            _changeTracker.EntityMemberChanging("EntityKey");
            _entityKey = value;
            _changeTracker.EntityMemberChanged("EntityKey");
        }
        else
            _entityKey = value;
    }
}

// ===================== Custom Property =====================

[EdmScalarPropertyAttribute(IsNullable = false)]
public System.DateTime OrderDate
{
    get { return _orderDate; }
    set
    {
        if (_changeTracker != null)
        {
            _changeTracker.EntityMemberChanging("OrderDate");
            _orderDate = value;
            _changeTracker.EntityMemberChanged("OrderDate");
        }
        else
            _orderDate = value;
    }
}

Finally, you also need to create the EntityState property as follows:

public EntityState EntityState
{
    get { return _changeTracker.EntityState; }
}

IEntityWithRelationships involves creating a property that returns a RelationshipManager object:

public RelationshipManager RelationshipManager
{
    get
    {
        if (_relManager == null)
            _relManager = RelationshipManager.Create(this);
        return _relManager;
    }
}

Tip: ProviderManifestToken – change the EDMX file to use SQL 2008 instead of SQL 2005
To use the model with SQL Server 2008, edit the EDMX file (the raw XML), changing the ProviderManifestToken in the SSDL attributes from "2005" to "2008".
With EFv1 we cannot use structs to replace an anonymous type when doing projections in a LINQ to Entities query. While this is supported by LINQ to SQL, it is not by LINQ to Entities, since only parameterless constructors and initializers are supported in LINQ to Entities. For example, the following is not supported with LINQ to Entities (the same works with LINQ to SQL):

public struct CompanyInfo
{
    public int ID { get; set; }
    public string Name { get; set; }
}

var companies = (from c in dc.Companies
                 where c.CompanyIcon == null
                 select new CompanyInfo { Name = c.CompanyName, ID = c.CompanyId }).ToList();
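A common workaround, not from the original post: project into an anonymous type so the query can be translated to SQL, then materialize the struct in memory with LINQ to Objects:

var companies = (from c in dc.Companies
                 where c.CompanyIcon == null
                 select new { c.CompanyName, c.CompanyId })   // translated to SQL
                .AsEnumerable()                               // switch to LINQ to Objects
                .Select(x => new CompanyInfo { Name = x.CompanyName, ID = x.CompanyId })
                .ToList();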

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

2007
Spatial Database Definition and Research Documents
Here is the definition from Wikipedia about a spatial database: A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types.
Select Only Date Part From DateTime – Best Practice
A very common question which I receive is how to get only the Date or Time part from a datetime value. In this blog post I explain the same in very simple words.
T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table
I have received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was: can this be done using a CTE? Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging.
SQL SERVER – Cannot resolve collation conflict for equal to operation
One of the very first errors I ever encountered in my career was resolving this conflict. I have blogged about it, and I have realized that there are many others like me who face this error.
LEN and DATALENGTH of NULL Simple Example
Here is the question for you: what is the LEN of a NULL value? Well, it is very easy – just read the blog.
Recovery Models and Selection
A very simple and easy explanation of the database backup recovery models and how to select the best option for you.
Explanation SQL SERVER Hash Join
A hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It is also expected that the smaller of the tables can be read into memory completely (though this is not necessary).
Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY
SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns
NorthWind Database or AdventureWorks Database – Sample Databases
In this blog post we learn how to install the Northwind database. I also shared the source from where one can download this database, as it is used in many examples in the MSDN help files.
sp_HelpText for sp_HelpText – Puzzle
A simple quick puzzle – do you know the answer to it? If not, go ahead and read the blog.

2008
SQL SERVER – 2008 – Step By Step Installation Guide With Images
When SQL Server 2008 was newly introduced, lots of people had no clue how to install SQL Server 2008, and the number of questions I used to receive was enormous. I wrote this blog post in the spirit that it would help all the newbies install SQL Server 2008 with the help of images. Still today this blog post has been the bible for all of the people who are confused by SQL Server installation.
Inline Variable Assignment
I loved this feature. I have always wanted this feature to be present in SQL Server. The last time I met developers from Microsoft SQL Server, I talked about this feature. I think this feature saves some time and makes the code more readable.
Introduction to Policy Management – Enforcing Rules on SQL Server
If our company policy is to create all stored procedures with the prefix 'usp', then developers should simply be prevented from creating stored procedures with any other prefix. Let us see a small tutorial on how to create conditions and a policy which will prevent any future SP from being created with any other prefix.

2009
Performance Counters from System Views – By Kevin Mckenna
Many of you are not aware of the fact that access to performance information is readily available in SQL Server, and that too without querying performance counters using a custom application or via perfmon. Till now this fact has remained undisclosed, but through this post I would like to explain how you can easily access SQL Server performance counter information. Without putting in much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this provides you easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via T-SQL.
Customize Toolbar – Remove Debug Button from Toolbar
I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The button of the debugger is similar to a play button and is used to run debugging commands of Visual Studio. Because of this, it gets quite infuriating for developers when they are developing on both Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio.
Effect of Normalization on Index and Performance
A very interesting conversation which started from Twitter. If you want to read one link, this is the link I encourage you to read.
SSMS Feature – Multi-server Queries
Using SQL Server Management Studio (SSMS), DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to maintain and gather information from multiple SQL Servers and create reports. This feature is a blessing for DBAs, as they can now assemble all the information instantaneously without going anywhere.
Query Optimizer Hint ROBUST PLAN – Question to You
"ROBUST PLAN" is a kind of query hint which works quite differently than other hints. It does not improve joins or force the use of any indexes; it just makes sure that a query does not crash due to an over-the-limit row size. Let me elaborate upon it in the blog post.

2010
Do you really know the difference between the various date functions available in SQL Server 2012? Here is a three-part story where we explored the same with examples:
Fastest Way to Restore the Database
Difference Between DATETIME and DATETIME2
Difference Between DATETIME and DATETIME2 – WITH GETDATE
Shrinking NDF and MDF Files – Readers' Opinion
Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say shrinking a database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrink Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.
2011
Solution – Puzzle – SELECT * vs SELECT COUNT(*)
This is a very interesting question, and I am very confident that not everyone knows the answer to it. Let me ask you again – which will be faster, SELECT * or SELECT COUNT(*)? Or do you think this is an apples-and-oranges comparison?

2012
Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage
In SQL Server 2012 there are a few enhancements with regard to the SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples. Let us understand the entire discussion with the help of three different examples.
Finding Size of a Columnstore Index Using DMVs
One of the very common questions I often see is the need for a list of columnstore indexes along with their sizes and corresponding table names. I quickly wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there will be more advanced scripts to retrieve details related to the components associated with a columnstore index; however, I believe the following script is sufficient to start getting an idea of columnstore index size.
Developer Training Resources and Summary Roundup
Developer Training – Importance and Significance – Part 1
In this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all the technical aspects. I have a very high opinion about training employees, and it is the most important task.
Developer Training – Employee Morals and Ethics – Part 2
In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often, training expenses are the real issue for both the employee and the employer.
Developer Training – Difficult Questions and Alternative Perspective – Part 3
This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not receiving a chance for training, think about leaving the organization.
Developer Training – Various Options for Developer Training – Part 4
In this part I tried to explore a few methods and options for training. The generic feedback I received on this blog post was that it was short and I should have explored each of the subjects of the training in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training.
Developer Training – A Conclusive Summary – Part 5
There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. "Change is the only constant" and "adapt & overcome" are the essential lessons of life. One cannot stop learning and resist change. In the IT industry the "ego of knowing it all" and the "resistance to change" are the most challenging issues.
A Quick Look at Logging and Ideas around Logging
Question: What is the first thing that comes to your mind when you hear the word "logging"? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends. Let us go over them one by one.
Beginning Performance Tuning with SQL Server Execution Plan
Solution of Puzzle – Swap Value of Column Without Case Statement
Earlier this week I asked how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I had requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the qualified entries. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS). The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek. You might look to the SSMS tool-tip descriptions to explain the differences between them. Not hugely helpful, are they? Both mention scans and ranges (nothing about seeks), and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post, where we saw two Clustered Index Seek operations doing very different things. The first Seek performed 63 single-row seeking operations, and the second performed a ‘Range Scan’ (more on those later in this post). I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks? I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog).

Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts. The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it. The right hand side of the diagram shows a simple query, its associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet. Is this a seek or a scan? The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on. This fragment is what lies behind the single Index Seek icon shown above. You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation.

Making Sense of Seeks
Let’s forget all about scans for a moment, and think purely about seeks. Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level. A seek starts at the root and navigates down through the levels of the index to find the point of interest.

Singleton Lookups
The simplest sort of seek predicate performs this traversal to find (at most) a single record. This is the case when we search for a single value using a unique index and an equality predicate. It should be readily apparent that this type of search will either find one record, or none at all. This operation is known as a singleton lookup. Given the example table from before, a single-value equality test on the unique key column is an example of a singleton lookup seek. Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index. The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID Lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table). If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup.

Range Scans
The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
    IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
    BEGIN
        DROP TABLE dbo.Example;
    END;
    
    -- Test table is a heap
    -- Non-clustered primary key on 'key_col'
    CREATE TABLE dbo.Example
    (
        key_col INTEGER NOT NULL,
        data INTEGER NOT NULL,
        CONSTRAINT [PK dbo.Example key_col]
            PRIMARY KEY NONCLUSTERED (key_col)
    );
    
    -- Non-unique non-clustered index on the 'data' column
    CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data);
    
    -- Add 100 rows
    INSERT dbo.Example WITH (TABLOCKX) (key_col, data)
    SELECT key_col = V.number, data = V.number
    FROM master.dbo.spt_values AS V
    WHERE V.[type] = N'P'
    AND V.number BETWEEN 1 AND 100;
    
    -- ================
    -- Singleton lookup
    -- ================
    
    -- Single value equality seek in a unique index
    -- Scan count = 0 when STATISTICS IO is ON
    -- Check the XML SHOWPLAN
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 32;
    
    -- ===========
    -- Range Scans
    -- ===========
    
    -- Query 1
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col <= 5 ORDER BY E.key_col ASC;
    
    -- Query 2
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col > 95 ORDER BY E.key_col DESC;
    
    -- Query 3
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 10 AND E.key_col < 15 ORDER BY E.key_col ASC;
    
    -- Query 4
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 20 AND E.key_col < 25 ORDER BY E.key_col DESC;
    
    -- Final query (singleton + range = 2 range scans)
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 10 OR E.key_col BETWEEN 15 AND 20 ORDER BY E.key_col ASC;
    
    -- === TIDY UP ===
    DROP TABLE dbo.Example;
    
    © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi
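    If you want to see the singleton/range distinction for yourself, one simple check (a sketch that assumes the dbo.Example table created by the script above) is to compare the ‘Scan Count’ values reported by STATISTICS IO for the two kinds of seek:
    
    SET STATISTICS IO ON;
    
    -- Singleton lookup (unique index, equality predicate): reports Scan Count 0
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 32;
    
    -- Seek plus range scan: reports Scan Count 1
    SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 10 AND E.key_col < 15;
    
    SET STATISTICS IO OFF;
    
    The Messages tab should show Scan Count 0 for the first query and Scan Count 1 for the second, matching the behaviour described in the post.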

    Read the article

  • Video Card needs additional mounting support

    - by Sean
    We are creating a fairly decent system with CrossFire. The issue we are having is that the video cards are apparently too heavy to be supported purely by the motherboard and case mounts. Whenever we have the case horizontal everything works fine; however, when we bring the case upright the computer stops working. Relevant info: Video Cards (2) - Sapphire 5870 (http://www.newegg.com/Product/Product.aspx?Item=N82E16814102856) MotherBoard - Asus M4A79T Deluxe Case - CoolerMaster 922 HAF To me it looks as if an additional support bracket that could hold up the end of the card is required (this should have been included with the video cards!!!). Does anyone have any experience with this?

    Read the article

  • SQL SERVER – Take the Quiz for a chance to win a Quadcopter Drone – Brain Teasers

    - by Pinal Dave
    It has been a long time since we ran a quiz, so let us get ready for one. The quiz has two parts - you have to get both parts correct to win a Quadcopter with Camera (we will call it a drone). We will be giving away a total of 2 Quadcopters. The quiz is extremely easy and I will ship the drone anywhere in the world where Amazon will ship it. Let us jump directly to the quiz. Please complete all three questions of the contest.
    
    Contest Part 1: Brain Teasers There are two questions for you in this part of the contest. Question: There are two 7s. How will you write a SELECT statement with a single operator that returns a single 7? Hint: SELECT 7(Answer)7 Question: Write down the shortest code that produces 1 without using any numbers in the SELECT statement. Hint: SELECT (Answer)
    
    Contest Part 2: Download and Activate Rapid SQL Question: Download and activate Rapid SQL. Hint: You have to download and activate Rapid SQL. If you do not activate Rapid SQL, you will be disqualified from the contest. Why take the risk - let us start!
    
    That’s it! Just answer the above questions in the comments area, in the following format. Remember: Use the comments area right below the blog to participate in the contest. Answer before June 5, 2014 midnight GMT. The winners will be announced on June 8. The winners will be selected randomly from all the valid answers. All the valid answers will be kept hidden till June 5, 2014. There will be a total of two winners. The contest is open for any country of the world where Amazon ships products. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – Question to You – When to use Function and When to use Stored Procedure

    - by pinaldave
    This week has been a very interesting week. I have asked a few questions to users and have received remarkable participation on the subject. Q1) SQL SERVER – Puzzle – SELECT * vs SELECT COUNT(*) Q2) SQL SERVER – Puzzle – Statistics are not Updated but are Created Once Keeping the same spirit up, I am asking the third question over here. Q3) When do you use a User Defined Function and when do you use a Stored Procedure in your development? Personally, I believe they are two different things - they cannot be compared. It would be like comparing apples and oranges. Each has its own unique use. However, they can often be used interchangeably in real life (i.e., in a production environment), and I have personally seen both of these used interchangeably many times. This is the precise reason for asking this question. When do you use a Function and when do you use a Stored Procedure? What are the pros and cons of each when used instead of the other? If you are going to answer that ‘To avoid repeating code, you use a Function’ - please think harder! A stored procedure can do the same. In SQL Server Denali, even a stored procedure can return a result just like a Function in a SELECT statement; so if you are going to answer with ‘a Function can be used in a SELECT, whereas a Stored Procedure cannot’ - again, think harder! (link). Now, what do you say? I will post the answers to all three questions, with due credit, next week. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Question, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, SQLServer, T SQL, Technology
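    For readers who want a concrete starting point for the discussion, here is a minimal sketch (with hypothetical object names, not from the original post) of the composability difference: a scalar function can be called inline in a SELECT, while a stored procedure is executed as its own statement.
    
    -- A scalar function is composable: it can be called inline in a query
    CREATE FUNCTION dbo.GetTax (@amount MONEY)
    RETURNS MONEY
    AS
    BEGIN
        RETURN @amount * 0.1;  -- assumed flat 10% rate, purely for illustration
    END;
    GO
    
    SELECT OrderID, Amount, dbo.GetTax(Amount) AS Tax
    FROM dbo.Orders;  -- dbo.Orders is a hypothetical table
    GO
    
    -- A stored procedure is executed as a statement of its own
    CREATE PROCEDURE dbo.GetTaxProc (@amount MONEY)
    AS
    BEGIN
        SELECT @amount * 0.1 AS Tax;
    END;
    GO
    
    EXEC dbo.GetTaxProc @amount = 100;
    
    Which of the two is appropriate in a given design is, of course, exactly the question posed above.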

    Read the article

  • ODEE Green Field (Windows) Part 5 - Deployment and Validation

    - by AndyL-Oracle
    And here we are, almost finished with our installation of Oracle Documaker Enterprise Edition ("ODEE") in a Windows green field environment. Let's recap what we've done so far: In part 1, I went over the basic process that I intended to show with installing an ODEE on a green field server, and walked you through the basic installation of an Oracle 11g database. In part 2, I covered the installation of the WebLogic application server. In part 3, I showed you how to install SOA Suite for WebLogic. In part 4, we did the first part of the installation of ODEE itself. What remains after all of that is the deployment of the ODEE components onto the database and application server - so let's get to it!
    
    DATABASE
    
    First, we'll deploy the schemas to the database. The schemas are created during the ODEE installation according to the responses provided during the install process. To deploy the schemas, you'll need to log in to the database server in your green field environment. Open a command line and CD into ODEE_HOME\documaker\database\oracle11g. Run SQLPLUS as SYSDBA and execute dmkr_admin.sql: sqlplus / as sysdba @dmkr_admin.sql Then execute dmkr_asline.sql and dmkr_admin_correspondence_example.sql. If you require additional languages, run the appropriate SQL scripts (e.g. dmkr_asline_es.sql for Spanish).
    
    APPLICATION SERVER
    
    Next, we'll deploy the WebLogic domain and its components - Documaker web services, Documaker Interactive, Documaker dashboard, and more. To deploy the components, you'll need to log in to the application server in your green field environment. 1. Open Windows Explorer and navigate to ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts. 2. Using a text editor such as Notepad++, modify weblogic_installation_properties and set the location of MIDDLEWARE_HOME and ODEE_HOME. If you have used the defaults you’ll probably need to change the E: to C: and that’s it. Save the changes. 3. Continuing in the same directory, use your text editor to modify set_middleware_env.cmd and set the drive and path to MIDDLEWARE_HOME. If you have used the defaults you’ll probably just need to change E: to C: and that’s it. Save the changes. 4. In the same directory, execute wls_create_domain.cmd by double-clicking it. This should run to completion. If it does not, review any errors, correct them, and rerun the script. 5. In the same directory, execute wls_add_correspondence.cmd by double-clicking it - again, this should run to completion. 6. Next, we'll start the AdminServer - this is the main WebLogic domain server. To start it, use Windows Explorer and navigate to MIDDLEWARE_HOME\user_projects\domains\idocumaker_domain. Double-click startWebLogic.cmd and the server startup will begin. Once you see output that indicates that the server status changed to RUNNING you may proceed.  a. Note: if you saw database connection errors, you probably didn’t make sure your database name and connection type match. You can change this manually in the WebLogic Console. Open a browser and navigate to http://localhost:7001/console (replace localhost with the name of your application server host if you aren't opening the browser on the server), and log in with the weblogic credential you provided in the ODEE installation process. b. Once you're logged in, open Services > Data Sources. Select dmkr_admin and click Connection Pool.  c. The end of the URL should match the connection type you chose.
    If you chose ServiceName, the URL should be jdbc:oracle:thin:@//<hostname>:1521/<serviceName>, and if you chose SID, the URL should be jdbc:oracle:thin:@<hostname>:1521:<SIDname>. d. An example serviceName is a fully qualified DNS-style name, e.g. "idmaker.us.oracle.com" (it does not need to actually resolve in DNS). An example SID is just a name, e.g. IDMAKER. e. Save the change and repeat for the data source dmkr_asline.  f. You will also need to make the same changes in the ODEE_HOME/documaker/docfactory/config/context/.bindings file - open the file in a text editor, locate the URL lines, make the appropriate change, then save the file.  7. Back in the ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts directory, execute create_users_groups.cmd. 8. In the same directory, execute create_users_groups_correspondence_example.cmd. 9. Open a browser and navigate to http://localhost:7001/jpsquery. Replace localhost with the name of your application server host if you aren't running the browser on the application server. If you changed the default port for the AdminServer from 7001, use the port you changed it to. You should see output like this: 10. Start the WebLogic managed servers by opening a command prompt and navigating to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/. When you start the servers listed below, you will be prompted to enter the WebLogic credentials to start the server. You can prevent this by providing the credential in the startManagedWebLogic.cmd file for the WLS_USER and WLS_PASS values. Note that the credential will be stored in cleartext. To start the server, type in the command shown. a. Start the JMS Server: ./startManagedWebLogic.cmd jms_server b. Start Dashboard/Documaker Administrator: ./startManagedWebLogic.cmd dmkr_server c. Start Documaker Interactive for Correspondence: ./startManagedWebLogic.cmd idm_server
    
    SOA Composites
    
    If you're planning on testing out the approval process components of BPEL that can be used with Documaker Interactive, then use the following steps to deploy the SOA composites. If you're not going to use BPEL, you can skip to the next section. 1. Stop the servers listed in the previous section (Step 10) in the reverse order that they were started. 2. Run the domain configuration command: navigate to and execute MIDDLEWARE_HOME/wlserver_10.3/common/bin/config.cmd. 3. Select Extend and click Next. 4. Select the iDocumaker Domain and click Next. 5. Select the Oracle SOA Suite – 11.1.1.0 (this may automatically select other components, which is OK). Click Next. 6. View the Configure JDBC resources screen. You should not make any changes. Click Next. 7. Check both connections and click Test Connections. After a successful test, click Next. If the tests fail, something is broken. Go back to Configure JDBC resources and check your service name/SID. 8. Check all schemas. Set a password (it will be the same for all schemas). Enter the database information (service name, host name, port). Click Next. 9. Connections should test successfully. If not, go back and fix any errors. Click Next. 10. Click Next to pass through Optional Configuration. 11. Click Extend. 12. Click Done. 13. Open a terminal window and navigate to/execute: ODEE_HOME/documaker/j2ee/weblogic/oracle11g/bpel/antbuild.cmd 14. Start the WebLogic servers – AdminServer, jms_server, dmkr_server, idm_server. If you forgot how to do this, see the previous section, Step 10.
    Note: if you previously changed the startManagedWebLogic.cmd script for WLS_USER and WLS_PASS you will need to make those changes again. 15. Start the WebLogic server soa_server1: MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/startManagedWebLogic.cmd soa_server1 16. Open a browser to http://localhost:7001/console and log in. 17. Navigate to Services > Data Sources and select DMKR_ASLINE. 18. Click the Targets tab. Check soa_server1, then click Save. Repeat for the DMKR_ADMIN data source. 19. Open a command prompt and navigate to ODEE_HOME/j2ee/weblogic/oracle11g/scripts, then execute deploy_soa.cmd. That's it! (As if that wasn't enough?)
    
    DOCUMAKER
    
    Deploy the sample MRL resources by navigating to and executing ODEE_HOME/documaker/mstrres/dmres/deploysamplemrl.bat. You should see approximately 500 resources deployed into the database. Start the Factory Services: Start > Run > services.msc. Locate the service named "ODDF xxxx", right-click it, and select Start. Note that each Assembly Line has a separate Factory setup, including its own Factory service and Docupresentment service. The services are named for the assembly line and the machine on which they are installed (because you could have multiple machines servicing a single assembly line, this allows for easy scripting to control all the services if you choose to do so). Repeat for the Docupresentment service. Note that each Assembly Line has a separate Docupresentment. Using Windows Explorer, navigate to ODEE_HOME/documaker/mstrres/dmres/input, select one of the XML files, and copy it into ODEE_HOME/documaker/hotdirectory. Note: if you chose a different hot directory during installation, copy the file there instead. Momentarily you should see the XML file disappear! Open a browser and navigate to http://localhost:10001/DocumakerDashboard (previous versions 12.0-12.2 use http://localhost:10001/dashboard) and verify that the job processed successfully. Note that some transactions may fail if you do not have a properly configured email server, and this is OK. You can set up a simple SMTP server (just search the internet for "SMTP developer" and you'll get several to choose from). So... that's it? Where are we at this point? You now have a completely functional ODEE installation, from soup to nuts as they say. You can further expand your installation by doing some of the following activities: clustering WebLogic services, configuring WebLogic for redundancy, configuring Oracle 11g for RAC, adding additional Factory servers for redundancy/processing capacity, setting up a real MRL (instead of the sample resources), testing Documaker Web Services for job submission, and more! I certainly hope you've enjoyed this and find it useful. If you find yourself running into trouble, visit the Oracle Community for Documaker - there is plenty of activity there and you can ask questions. For more concentrated assistance, you can engage an Oracle consultant who is a subject matter expert to assist you. Feel free to email me [andy (dot) little (at) oracle (dot) com] and I can connect you with the appropriate resource to get started. Best of luck! -Andy

    Read the article

  • Create user in Oracle 11g with same privileges as in Oracle 10g XE

    - by Álvaro G. Vicario
    I'm a PHP developer (not a DBA) and I've been working with Oracle 10g XE for a while. I'm used to XE's simplified user management: Go to Administration/ Users/ Create user Assign user name and password Roles: leave the default ones (connect and resource) Privileges: click on "Enable all" to select the 11 possible ones Create This way I get a user that has full access to its data and no access to anything else. This is fine since I only need it to develop my app. When the app is to be deployed, the client's DBAs configure the environment. Now I have to create users in a full Oracle 11g server and I'm completely lost. I have a new concept (profiles) and there're like 20 roles and hundreds of privileges in various categories. What steps do I need to complete in Oracle Enterprise Manager in order to obtain a user with the same privileges I used to assign in XE? ==== UPDATE ==== I think I'd better provide a detailed explanation so I make myself clearer. This is how I create a user in 10g XE: Roles: [X] CONNECT [X] RESOURCE [ ] DBA Direct Assignment System Privileges: [ ] CREATE DATABASE LINK [ ] CREATE MATERIALIZED VIEW [ ] CREATE PROCEDURE [ ] CREATE PUBLIC SYNONYM [ ] CREATE ROLE [ ] CREATE SEQUENCE [ ] CREATE SYNONYM [ ] CREATE TABLE [ ] CREATE TRIGGER [ ] CREATE TYPE [ ] CREATE VIEW I click on Enable All and I'm done. This is what I'm asked when doing the same in 11g: Profile: (*) DEFAULT ( ) WKSYS_PROF ( ) MONITORING_PROFILE Roles: CONNECT: [ ] Admin option [X] Default value Edit List: AQ_ADMINISTRATOR_ROLE AQ_USER_ROLE AUTHENTICATEDUSER CSW_USR_ROLE CTXAPP CWM_USER DATAPUMP_EXP_FULL_DATABASE DATAPUMP_IMP_FULL_DATABASE DBA DELETE_CATALOG_ROLE EJBCLIENT EXECUTE_CATALOG_ROLE EXP_FULL_DATABASE GATHER_SYSTEM_STATISTICS GLOBAL_AQ_USER_ROLE HS_ADMIN_ROLE IMP_FULL_DATABASE JAVADEBUGPRIV JAVAIDPRIV JAVASYSPRIV JAVAUSERPRIV JAVA_ADMIN JAVA_DEPLOY JMXSERVER LOGSTDBY_ADMINISTRATOR MGMT_USER OEM_ADVISOR OEM_MONITOR OLAPI_TRACE_USER OLAP_DBA OLAP_USER OLAP_XS_ADMIN ORDADMIN OWB$CLIENT OWB_DESIGNCENTER_VIEW OWB_USER RECOVERY_CATALOG_OWNER RESOURCE SCHEDULER_ADMIN SELECT_CATALOG_ROLE SPATIAL_CSW_ADMIN SPATIAL_WFS_ADMIN WFS_USR_ROLE WKUSER WM_ADMIN_ROLE XDBADMIN XDB_SET_INVOKER XDB_WEBSERVICES XDB_WEBSERVICES_OVER_HTTP XDB_WEBSERVICES_WITH_PUBLIC System Privileges: <Empty> Edit List: ACCESS_ANY_WORKSPACE ADMINISTER ANY SQL TUNING SET ADMINISTER DATABASE TRIGGER ADMINISTER RESOURCE MANAGER ADMINISTER SQL MANAGEMENT OBJECT ADMINISTER SQL TUNING SET ADVISOR ALTER ANY ASSEMBLY ALTER ANY CLUSTER ALTER ANY CUBE ALTER ANY CUBE DIMENSION ALTER ANY DIMENSION ALTER ANY EDITION ALTER ANY EVALUATION CONTEXT ALTER ANY INDEX ALTER ANY INDEXTYPE ALTER ANY LIBRARY ALTER ANY MATERIALIZED VIEW ALTER ANY MINING MODEL ALTER ANY OPERATOR ALTER ANY OUTLINE ALTER ANY PROCEDURE ALTER ANY ROLE ALTER ANY RULE ALTER ANY RULE SET ALTER ANY SEQUENCE ALTER ANY SQL PROFILE ALTER ANY TABLE ALTER ANY TRIGGER ALTER ANY TYPE ALTER DATABASE ALTER PROFILE ALTER RESOURCE COST ALTER ROLLBACK SEGMENT ALTER SESSION ALTER SYSTEM ALTER TABLESPACE ALTER USER ANALYZE ANY ANALYZE ANY DICTIONARY AUDIT ANY AUDIT SYSTEM BACKUP ANY TABLE BECOME USER CHANGE NOTIFICATION COMMENT ANY MINING MODEL COMMENT ANY TABLE CREATE ANY ASSEMBLY CREATE ANY CLUSTER CREATE ANY CONTEXT CREATE ANY CUBE CREATE ANY CUBE BUILD PROCESS CREATE ANY CUBE DIMENSION CREATE ANY DIMENSION CREATE ANY DIRECTORY CREATE ANY EDITION CREATE ANY EVALUATION CONTEXT CREATE ANY INDEX CREATE ANY INDEXTYPE CREATE ANY JOB CREATE ANY LIBRARY CREATE ANY MATERIALIZED VIEW CREATE ANY MEASURE
    FOLDER CREATE ANY MINING MODEL CREATE ANY OPERATOR CREATE ANY OUTLINE CREATE ANY PROCEDURE CREATE ANY RULE CREATE ANY RULE SET CREATE ANY SEQUENCE CREATE ANY SQL PROFILE CREATE ANY SYNONYM CREATE ANY TABLE CREATE ANY TRIGGER CREATE ANY TYPE CREATE ANY VIEW CREATE ASSEMBLY CREATE CLUSTER CREATE CUBE CREATE CUBE BUILD PROCESS CREATE CUBE DIMENSION CREATE DATABASE LINK CREATE DIMENSION CREATE EVALUATION CONTEXT CREATE EXTERNAL JOB CREATE INDEXTYPE CREATE JOB CREATE LIBRARY CREATE MATERIALIZED VIEW CREATE MEASURE FOLDER CREATE MINING MODEL CREATE OPERATOR CREATE PROCEDURE CREATE PROFILE CREATE PUBLIC DATABASE LINK CREATE PUBLIC SYNONYM CREATE ROLE CREATE ROLLBACK SEGMENT CREATE RULE CREATE RULE SET CREATE SEQUENCE CREATE SESSION CREATE SYNONYM CREATE TABLE CREATE TABLESPACE CREATE TRIGGER CREATE TYPE CREATE USER CREATE VIEW CREATE_ANY_WORKSPACE DEBUG ANY PROCEDURE DEBUG CONNECT SESSION DELETE ANY CUBE DIMENSION DELETE ANY MEASURE FOLDER DELETE ANY TABLE DEQUEUE ANY QUEUE DROP ANY ASSEMBLY DROP ANY CLUSTER DROP ANY CONTEXT DROP ANY CUBE DROP ANY CUBE BUILD PROCESS DROP ANY CUBE DIMENSION DROP ANY DIMENSION DROP ANY DIRECTORY DROP ANY EDITION DROP ANY EVALUATION CONTEXT DROP ANY INDEX DROP ANY INDEXTYPE DROP ANY LIBRARY DROP ANY MATERIALIZED VIEW DROP ANY MEASURE FOLDER DROP ANY MINING MODEL DROP ANY OPERATOR DROP ANY OUTLINE DROP ANY PROCEDURE DROP ANY ROLE DROP ANY RULE DROP ANY RULE SET DROP ANY SEQUENCE DROP ANY SQL PROFILE DROP ANY SYNONYM DROP ANY TABLE DROP ANY TRIGGER DROP ANY TYPE DROP ANY VIEW DROP PROFILE DROP PUBLIC DATABASE LINK DROP PUBLIC SYNONYM DROP ROLLBACK SEGMENT DROP TABLESPACE DROP USER ENQUEUE ANY QUEUE EXECUTE ANY ASSEMBLY EXECUTE ANY CLASS EXECUTE ANY EVALUATION CONTEXT EXECUTE ANY INDEXTYPE EXECUTE ANY LIBRARY EXECUTE ANY OPERATOR EXECUTE ANY PROCEDURE EXECUTE ANY PROGRAM EXECUTE ANY RULE EXECUTE ANY RULE SET EXECUTE ANY TYPE EXECUTE ASSEMBLY EXPORT FULL DATABASE FLASHBACK ANY TABLE FLASHBACK ARCHIVE ADMINISTER FORCE ANY TRANSACTION FORCE TRANSACTION FREEZE_ANY_WORKSPACE GLOBAL QUERY REWRITE GRANT ANY OBJECT PRIVILEGE GRANT ANY PRIVILEGE GRANT ANY ROLE IMPORT FULL DATABASE INSERT ANY CUBE DIMENSION INSERT ANY MEASURE FOLDER INSERT ANY TABLE LOCK ANY TABLE MANAGE ANY FILE GROUP MANAGE ANY QUEUE MANAGE FILE GROUP MANAGE SCHEDULER MANAGE TABLESPACE MERGE ANY VIEW MERGE_ANY_WORKSPACE ON COMMIT REFRESH QUERY REWRITE READ ANY FILE GROUP REMOVE_ANY_WORKSPACE RESTRICTED SESSION RESUMABLE ROLLBACK_ANY_WORKSPACE SELECT ANY CUBE SELECT ANY CUBE DIMENSION SELECT ANY DICTIONARY SELECT ANY MINING MODEL SELECT ANY SEQUENCE SELECT ANY TABLE SELECT ANY TRANSACTION UNDER ANY TABLE UNDER ANY TYPE UNDER ANY VIEW UNLIMITED TABLESPACE UPDATE ANY CUBE UPDATE ANY CUBE BUILD PROCESS UPDATE ANY CUBE DIMENSION UPDATE ANY TABLE Object Privileges: <Empty> Add: Java Class, Job Classes, Queue, Table Column, View Column, Workspace, Function, Snapshot, Java Source, Package, Schedules, Procedure, Programs, Sequence, Synonym, Table, Types, Jobs, View Consumer Group Privileges: <Empty> Default Consumer Group: (*) None Edit List: AUTO_TASK_CONSUMER_GROUP BATCH_GROUP DEFAULT_CONSUMER_GROUP INTERACTIVE_GROUP LOW_GROUP ORA$AUTOTASK_HEALTH_GROUP ORA$AUTOTASK_MEDIUM_GROUP ORA$AUTOTASK_SPACE_GROUP ORA$AUTOTASK_SQL_GROUP ORA$AUTOTASK_STATS_GROUP ORA$AUTOTASK_URGENT_GROUP ORA$DIAGNOSTICS SYS_GROUP And, of course, I wonder what options I should pick.
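    For what it's worth, the XE-style setup described in the question can also be approximated from SQL*Plus rather than through Enterprise Manager - a sketch, assuming a development user named app_dev (a hypothetical name) and the same two roles plus the 11 privileges listed above for 10g XE:
    
    CREATE USER app_dev IDENTIFIED BY some_password
      DEFAULT TABLESPACE users
      QUOTA UNLIMITED ON users;
    
    GRANT CONNECT, RESOURCE TO app_dev;
    
    -- The 11 system privileges XE offers under "Enable all"
    GRANT CREATE DATABASE LINK, CREATE MATERIALIZED VIEW, CREATE PROCEDURE,
          CREATE PUBLIC SYNONYM, CREATE ROLE, CREATE SEQUENCE, CREATE SYNONYM,
          CREATE TABLE, CREATE TRIGGER, CREATE TYPE, CREATE VIEW
       TO app_dev;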

    Read the article

  • Easy BCD Help: Dual boot Win7 and Ubuntu 11.10 -- "Add new Entry" for Ubuntu

    - by Bradley Peterson
    I first had Ubuntu 11.10 installed on a single partition on my 750GB hard drive. I then partitioned the hard drive to 500GB (for Ubuntu) in ext4 format (what it already was from the clean install of Ubuntu) and 250GB for Win7 in NTFS format. Then I installed Win7 onto that 250GB partition. Installation went smoothly and I was successfully booted into Win7 and setting everything up. After I was done doing all the stupid updates from Microsoft, I thought I was done and I wanted to go back to Ubuntu. This is where the problem starts. Of course, I reboot and it goes directly to Win7. I research and find that Win7 has overwritten the Ubuntu bootloader, etc. - I don't fully understand it. I download EasyBCD 2.1.2. In EasyBCD, I select "Add New Entry", select "Linux/BSD", change the type to "GRUB 2" and name it "Ubuntu". Next, I go to "BCD Deployment", select "Install the Windows Vista/7 bootloader to the MBR" and click "Write MBR". I reboot, select "Ubuntu", and the purple screen comes up, but NOTHING HAPPENS. If I hit Ctrl+Alt+Del, it goes to the login menu where it acts normal for about 10-15 seconds, then freezes. It does this repeatedly every time. MY QUESTION: What's wrong here? Why can't I load Ubuntu now? Am I going to have to reinstall Ubuntu with Windows, then set up the bootloader with EasyBCD instead of Ubuntu, THEN Win7? Any and all help is appreciated! -Brad

    Read the article

  • Consuming Hello World pagelet in WebCenter Spaces

    - by astemkov
    Introduction
    
    The goal of this exercise is to show you how you can use the Hello World pagelet that you just created from your web space.
    
    Assumptions
    
    Let's assume the following: Pagelet Producer is running on http://pageletserver.company.com:8889/pagelets/ WebCenter is running on http://webcenter.company.com:8888/webcenter/ You created the Hello_World pagelet as described here. For our exercise we will need a space, so let's log in to WebCenter Portal and create a space called "myspace" using the "Portal Site" template:
    
    Registering Pagelet Producer with WebCenter Portal
    
    In order to use our newly created pagelet from WebCenter Spaces, we first need to register Pagelet Producer: Click the "Administration" link on the WebCenter toolbar Open the "Configuration" tab Click on the "Services" link in the upper-left corner of the page Click on the "Portlet Producers" link on the right hand pane of the screen Click on the "Register" button Select the "Pagelet Producer" radio button and type Producer Name = "MyPageletProducer" Server URL = http://pageletserver.company.com:8889/pagelets/ Click the "Test" button If everything is successful you will see the following screen: Now click "OK". Pagelet Producer is registered:
    
    Inserting Hello World pagelet into WebCenter Space
    
    Now let's insert the Hello World pagelet into the "myspace" page: Let's go back to "myspace", click on the icon in the upper-right corner of the page and select "Edit Page" Click on one of the "Add Content" buttons: Select "Mash-Ups": Select "Pagelet Producers": You will see the MyPageletProducer that we just registered: Click on it. You will see the library "MyLib" that contains our "Hello_World" pagelet. Click on "MyLib" and you will see the "Hello_World" pagelet. Click the "Add" button, and then the "Close" button. Click the "Save" button, and then "Close". Now we see that our "Hello World" pagelet is inserted into the "myspace" page:

    Read the article

  • SQL SERVER – Get Date and Time From Current DateTime – SQL in Sixty Seconds #025 – Video

    - by pinaldave
    This is the 25th video of the SQL in Sixty Seconds series we started a few months ago, yet it seems like we began just a few days ago. The best part of SQL in Sixty Seconds is that one can learn something new in less than sixty seconds. Many of these concepts are not new to everybody, but we all have sixty seconds to refresh our memories. In this video I have touched on a very simple question which I receive very frequently on this blog. Q1) How to get the current date and time? Q2) How to get only the date from a datetime? Q3) How to get only the time from a datetime? I have created a sixty-second video on this subject and hopefully it will help many beginners in the SQL Server field. Here is a script similar to the one I used in the video:
    
    SELECT GETDATE();
    GO
    -- SQL Server 2000/2005
    SELECT CONVERT(VARCHAR(8), GETDATE(), 108) AS HourMinuteSecond,
           CONVERT(VARCHAR(10), GETDATE(), 101) AS DateOnly;
    GO
    -- SQL Server 2008 onwards
    SELECT CONVERT(TIME, GETDATE()) AS HourMinuteSeconds;
    SELECT CONVERT(DATE, GETDATE()) AS DateOnly;
    GO
    
    Related Tips in SQL in Sixty Seconds: Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()} Get Time in Hour:Minute Format from a Datetime – Get Date Part Only from Datetime Get Current System Date Time Get Date Time in Any Format – UDF – User Defined Functions Date and Time Functions – EOMONTH() – A Quick Introduction DATE and TIME in SQL Server 2008 I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material. Image Credit: Movie Gone in 60 Seconds Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
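    A small companion to the script above (a sketch, not from the original video): since style 108 produces hh:mm:ss, trimming it to five characters yields just the hour and minute.
    
    -- Hour:minute only (style 108 is hh:mm:ss; keep the first five characters)
    SELECT CONVERT(VARCHAR(5), GETDATE(), 108) AS HourMinute;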

    Read the article

  • SQL Server "Long running transaction" performance counter: why no workee?

    - by Sleepless
    Please explain to me the following observation: I have the following piece of T-SQL code that I run from SSMS: BEGIN TRAN SELECT COUNT (*) FROM m WHERE m.[x] = 123456 or m.[y] IN (SELECT f.x FROM f) SELECT COUNT (*) FROM m WHERE m.[x] = 123456 or m.[y] IN (SELECT f.x FROM f) COMMIT TRAN The query takes about twenty seconds to run. I have no other user queries running on the server. Under these circumstances, I would expect the performance counter "MSSQL$SQLInstanceName:Transactions\Longest Transaction Running Time" to rise constantly up to a value of 20 and then drop rapidly. Instead, it rises to around 12 within two seconds and then oscillates between 12 and 14 for the duration of the query after which it drops again. According to the MS docs, the counter measures "The length of time (in seconds) since the start of the transaction that has been active longer than any other current transaction." But apparently, it doesn't. What gives?
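    As a cross-check on what the counter reports (a sketch, not part of the original question), you can ask the engine directly how long each active transaction has been open by querying the sys.dm_tran_active_transactions DMV from a second session while the batch runs:
    
    SELECT transaction_id,
           name,
           transaction_begin_time,
           DATEDIFF(SECOND, transaction_begin_time, GETDATE()) AS elapsed_seconds
    FROM sys.dm_tran_active_transactions
    ORDER BY transaction_begin_time;
    
    Comparing elapsed_seconds against the performance counter's value should show whether the counter or the expectation is off.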

    Read the article

  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff, I'm afraid I will never get it right), CBQ is not recommended, and basically HTB is the way to go for most people. Our internal network has three "segments" and I'd like to share bandwidth more or less equally between those (at least in the beginning). Further I must prioritize traffic according to at least three kinds of traffic (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either. The question is, what makes more sense and also guarantees better realtime throughput: Creating one class per segment, each having the same rate (priority doesn't matter for classes that are not leaves, according to the HTB developer) and each of these classes has three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates). Having one class per priority level on top, each having a different rate (again priority won't matter) and each having 3 sub-classes, one per segment, whereas all 3 in the realtime class have highest prio, lowest prio in the bulk class, and so on. I'll try to make this clearer with the following ASCII art image:
    
    Case 1:
    
    root --+--> Segment A
           |      +--> High Prio
           |      +--> Normal Prio
           |      +--> Low Prio
           |
           +--> Segment B
           |      +--> High Prio
           |      +--> Normal Prio
           |      +--> Low Prio
           |
           +--> Segment C
                  +--> High Prio
                  +--> Normal Prio
                  +--> Low Prio
    
    Case 2:
    
    root --+--> High Prio
           |      +--> Segment A
           |      +--> Segment B
           |      +--> Segment C
           |
           +--> Normal Prio
           |      +--> Segment A
           |      +--> Segment B
           |      +--> Segment C
           |
           +--> Low Prio
                  +--> Segment A
                  +--> Segment B
                  +--> Segment C
    
    Case 1 seems like the way most people would do it, but unless I misread the HTB implementation details, Case 2 may offer better prioritizing. The HTB manual says that if a class has hit its rate, it may borrow from its parent, and when borrowing, classes with higher priority always get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree-level are always preferred to those on a higher tree level, regardless of priority. Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone) and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen? Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send; since A-High Prio has the higher priority, it will always be scheduled first, till it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, this class is now served first, till it also hits the rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again and now Segment A-High Prio is going to win again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of traffic spare, as Segment C is not using any of its guaranteed traffic), but again, it has to wait for Segment B-Low Prio to also reach the root level.
    Once that happens, priority is taken into account again and this time Segment A-High Prio will get all the bandwidth left over from Segment C. Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send; again High Prio-Segment A is going to win as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B again to also hit its rate. Once both have hit their rate and both have to borrow, High Prio-Segment A will win again until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which has again plenty of bandwidth left (all bandwidth of Normal Prio is unused at the moment), but it has to wait again until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all bandwidth root has left over. Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left it could borrow. However, in case 2 it seems like the realtime traffic has to wait less than in case 1, since it only has to wait till the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in case 1 that is the rate it has to wait for). Or am I totally wrong here? I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not somehow limited. Starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but when doing so, a single priority class cannot saturate the link on its own any longer, even if all other priority classes are empty; the TBF will prevent that from happening. And this is also sub-optimal, since why wouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment? Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer, this would be such an easy task if I could simply write my own scheduler (which I'm not allowed to do).

    Read the article

  • Configuring Request-Reply in JMSAdapter

    - by [email protected]
    Request-Reply is a new feature in the 11g JMSAdapter that helps you achieve the following: it allows you to combine Request and Reply in a single step (in prior releases of the Oracle SOA Suite, you would need to configure two distinct adapters), and it performs automatic correlation without you needing to configure BPEL "correlation sets". This works seamlessly in Mediator and BPMN as well. In order to configure the JMSAdapter Request-Reply, please follow these steps: 1) Drag and drop a JMSAdapter onto the "External References" swim lane in your composite editor. 2) Enter default values for the first few screens in the JMS Adapter wizard till you hit the screen where the wizard prompts you to enter the operation name. Select "Request-Reply" as the "Operation Type" and Asynchronous as the "Operation Name". 3) Select the Request and Reply queues in the following screens of the wizard. The message will be enqueued in the "Request" queue and the reply will be returned in the "Reply" queue. The reason I have used such a selector is that the back-end system that reads from the request queue and generates the response in the response queue actually generates more than one response, and hence I must use a filter to exclude the unwanted responses. 4) Select the message schema for the request as well as the response. 5) Add an <invoke> activity in BPEL corresponding to the JMS Adapter partner link. Please note that I am setting an additional header as my third-party application requires this. 6) Add a <receive> activity just after the <invoke> and select the "Reply" operation. Please make sure that the "Create Instance" option is unchecked. Your completed BPEL process will look something like this:

    Read the article

  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self-contained HTTP output. An ASP.NET Handler captures the full output, and then shoves the result down the ASP.NET Response object pipeline, writing out the content into the Response.OutputStream and separately sending the HttpHeaders in the Response.Headers collection. The headers turned out to be the problem, and specifically Http Cookies, which for some reason ended up getting stripped out in some scenarios. My handler works like this: Basically the HTTP response from the backend app would return a full set of HTTP headers plus the content. The ASP.NET handler would read the headers one at a time and then dump them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the Http Handler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario. It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically I have a couple of headers including a Set-Cookie header and some output that gets written into the Response object.
    
    using System.Web;
    
    namespace wwThreads
    {
        public class Handler : IHttpHandler
        {
            /* NOTE:
             *
             * Run as a web.config set handler (see entry below)
             *
             * Best way is to look at the HTTP Headers in Fiddler
             * or Chrome/FireBug/IE tools and look for the
             * WWTHREADSID cookie in the outgoing Response headers
             * ( If the cookie is not there you see the problem! )
             */
            public void ProcessRequest(HttpContext context)
            {
                HttpRequest request = context.Request;
                HttpResponse response = context.Response;
    
                // If ClearHeaders is used the Set-Cookie header gets removed!
                // If commented, the header is sent...
                response.ClearHeaders();
    
                response.ClearContent();
    
                // Demonstrate that other headers make it
                response.AppendHeader("RequestId", "asdasdasd");
    
                // This cookie gets removed when ClearHeaders above is called
                // When ClearHeaders is omitted above the cookie renders
                response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");
    
                // *** This always works, even when the explicit
                //     Set-Cookie above fails and ClearHeaders is called
                //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));
    
                response.Write(@"Output was created.<hr/>
                                 Check output with Fiddler or HTTP Proxy to see whether cookie was sent.");
            }
    
            public bool IsReusable
            {
                get { return false; }
            }
        }
    }
    
    In order to see the problem behavior, this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with: <add name=".ck_handler" path="handler.ck" verb="*" type="wwThreads.Handler" preCondition="integratedMode" /> Note: Oddly enough this problem manifests only when configured through web.config, not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller. What's the problem exactly? The code above simulates the more complex code in my live handler that picks up the HTTP response from the backend application and then peels out the headers and sends them one at a time via Response.AppendHeader. One of the headers in my app can be one or more Set-Cookie. I found that the Set-Cookie headers were not making it into the Response headers output.
Here's the Chrome Http Inspector trace: Notice, no Set-Cookie header in the Response headers! Now, running the very same request after removing the call to Response.ClearHeaders() command, the cookie header shows up just fine: As you might expect it took a while to track this down. At first I thought my backend was not sending the headers but after closer checks I found that indeed the headers were set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line:response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); which in my live code is more dynamic ( ie. AppendHeader(token[0],token[1[]) )as it parses through the headers. Bizzaro Land: Response.ClearHeaders() causes Cookie to get stripped Now, here is where it really gets bizarre: The problem occurs only if: Response.ClearHeaders() was called before headers are added It only occurs in Http Handlers declared in web.config Clearly this is an edge of an edge case but of course - knowing my relationship with Mr. Murphy - I ended up running smack into this problem. So in the code above if you remove the call to ClearHeaders(), the cookie gets set!  Add it back in and the cookie is not there. If I run the above code in an ASHX handler it works. If I paste the same code (with a Response.End()) into an ASPX page, or MVC controller it all works. Only in the HttpHandler configured through Web.config does it fail! Cue the Twilight Zone Music. Workarounds As is often the case the fix for this once you know the problem is not too difficult. The difficulty lies in tracking inconsistencies like this down. Luckily there are a few simple workarounds for the Cookie issue. Don't use AppendHeader for Cookies The easiest and obvious solution to this problem is simply not use Response.AppendHeader() to set Cookies. Duh! Under normal circumstances in application level code there's rarely a reason to write out a cookie like this:response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); but rather create the cookie using the Response.Cookies collection:response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue")); Unfortunately, in my case where I dynamically read headers from the original output and then dynamically  write header key value pairs back  programmatically into the Response.Headers collection, I actually don't look at each header specifically so in my case the cookie is just another header. My first thought was to simply trap for the Set-Cookie header and then parse out the cookie and create a Cookie object instead. But given that cookies can have a lot of different options this is not exactly trivial, plus I don't really want to fuck around with cookie values which can be notoriously brittle. Don't use Response.ClearHeaders() The real mystery in all this is why calling Response.ClearHeaders() prevents a cookie value later written with Response.AppendHeader() to fail. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders. There's all sorts of resetting going on but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low level interface directly into IIS, and so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft and the IIS team would have to comment on that. 
In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders/ClearContent was mainly for safety but after reviewing my code there really should never be a reason that headers would be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to manually clear the headers out:private void RemoveHeaders(HttpResponse response) { List<string> headers = new List<string>(); foreach (string header in response.Headers) { headers.Add(header); } foreach (string header in headers) { response.Headers.Remove(header); } response.Cookies.Clear(); } Now I can replace the call the Response.ClearHeaders() and I don't get the funky side-effects from Response.ClearHeaders(). Summary I realize this is a total edge case as this occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher level ASP.NET frameworks or even in ASHX handlers - only web.config defined handlers - which is really, really odd. After all those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. IAC, there are workarounds to this should you run into it, although I bet when you do run into it, it'll likely take a bit of time to find the problem or even this post in a search because it's not easily to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, extreme edge case, but I'm writing this up here as documentation for myself and possibly some others that might have run into this. © Rick Strahl, West Wind Technologies, 2005-2012Posted in ASP.NET   IIS7   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Arch linux selecting packages

    - by allenskd
    Hello, I'm currently having problems with my Arch Linux installation: I forgot to select a few packages, but the guide never mentions which key I need to press to select/deselect packages on the installation screen. I used to know how to select them, but it's been a long time and I don't remember much anymore =/ Alternatively, if there is a way to make pacman recognize the CD as a container of packages, that'd be great.

    Read the article

  • variables used in inner queries

    - by wcpro
    I'm trying to build a query that has something like this:
    
    select id,
           (select top 1 create_date from table2 where table1id = t1.id and status = 'success') [last_success_date],
           (select count(*) from table2 where table1id = t1.id and create_date > [last_success_date]) [failures_since_success]
    from table1 t1
    
    As you can see, [last_success_date] is not within the scope of the second subquery, and I was wondering how I could access that value in the other subqueries without having to rerun it.
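    One common way to make that value visible to the other subqueries is to compute it once in an APPLY (or a derived table) and then reference its alias - a sketch against the hypothetical tables from the question (note the original TOP 1 had no ORDER BY, so an explicit one is assumed here):
    
    SELECT t1.id,
           ls.last_success_date,
           (SELECT COUNT(*)
            FROM table2
            WHERE table2.table1id = t1.id
              AND table2.create_date > ls.last_success_date) AS failures_since_success
    FROM table1 AS t1
    OUTER APPLY (SELECT TOP (1) create_date AS last_success_date
                 FROM table2
                 WHERE table2.table1id = t1.id
                   AND table2.status = 'success'
                 ORDER BY create_date DESC) AS ls;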

    Read the article

  • Creating a new project for Team Foundation Server Basic.

    - by Enrique Lima
    We have installed and configured TFS, and we have connected to it using Visual Studio.  Now it is time to get a project created. From Team Explorer, we will right-click the servername\Collection item in the tree and select New Team Project.  This will open the New Team Project dialog.  Provide a name, then click Next.   The next step is to select a Project Template.  By default you will have 2 available (but there are many downloadable options).  It is important to understand what each template brings, since it determines the lifecycle management options we will live with. Once selected, click Next. Now we are at the point of specifying where our code will be stored, in the Source Code Settings part of the wizard.  Since we are starting new, we will select an empty folder. Click Next. Next we get a Summary view of the options selected. Click Finish. Once the template is downloaded, applied, and our choices processed, we have completed the project creation.   This should be our final product …

    Read the article

  • Change Data Capture

    - by Ricardo Peres
    There's an hidden gem in SQL Server 2008: Change Data Capture (CDC). Using CDC we get full audit capabilities with absolutely no implementation code: we can see all changes made to a specific table, including the old and new values! You can only use CDC in SQL Server 2008 Standard or Enterprise, Express edition is not supported. Here are the steps you need to take, just remember SQL Agent must be running: use SomeDatabase; -- first create a table CREATE TABLE Author ( ID INT NOT NULL PRIMARY KEY IDENTITY(1, 1), Name NVARCHAR(20) NOT NULL, EMail NVARCHAR(50) NOT NULL, Birthday DATE NOT NULL ) -- enable CDC at the DB level EXEC sys.sp_cdc_enable_db -- check CDC is enabled for the current DB SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'SomeDatabase' -- enable CDC for table Author, all columns exec sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'Author', @role_name = null -- insert values into table Author insert into Author (Name, EMail, Birthday, Username) values ('Bla', 'bla@bla', 1990-10-10, 'bla') -- check CDC data for table Author -- __$operation: 1 = DELETE, 2 = INSERT, 3 = BEFORE UPDATE 4 = AFTER UPDATE -- __$start_lsn: operation timestamp select * from cdc.dbo_author_CT -- update table Author update Author set EMail = '[email protected]' where Name = 'Bla' -- check CDC data for table Author select * from cdc.dbo_author_CT -- delete from table Author delete from Author -- check CDC data for table Author select * from cdc.dbo_author_CT -- disable CDC for table Author -- this removes all CDC data, so be carefull exec sys.sp_cdc_disable_table @source_schema = 'dbo', @source_name = 'Author', @capture_instance = 'dbo_Author' -- disable CDC for the entire DB -- this removes all CDC data, so be carefull exec sys.sp_cdc_disable_db SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/2.0.320/scripts/clipboard.swf'; SyntaxHighlighter.all();

    Read the article

< Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >