Search Results

Search found 7651 results on 307 pages for 'execution plan'.


  • The Kids Are Alright. With Facebook and SMS. But Not Twitter

    - by ultan o'broin
    I delivered a lecture to business and technology freshmen (late teens, I reckon) in Trinity College Dublin recently. I spoke about user experience in enterprise applications, trends that UX pros need to be aware of such as social media, community support, and mobile and tablet platforms, and a bunch of nuances around those areas (data and device security, privacy, reputation, branding, and so on). It was all fairly high-level stuff given the audience, and I included lots of colorful screenshots. Irish-related examples helped to get the message across.

    During the lecture I did a quick poll. “How many students here use Twitter?” Answer: None. “How many use Facebook?” All (pretty much). So what do these guys like to use instead of Twitter? Easy - text messaging (or SMS if you like). They all had phones. Perhaps I should not have been so surprised about Twitter, but it’s always great to have research validated by some guerrilla UX research on the street. There’s already quite a bit of research about teen uptake (or lack thereof) of Twitter, telling us young adults don’t tweet. Twitter is seen as something for, er, older people. Affordable devices and data plans that allow students to text really quickly are also popular (BlackBerry, for example). Younger people just luuurve to text each other. A lot. Facebook versus Twitter for younger folks? Well, we know the story. No contest.

    I would love to engage more with students like these. I’ll plan for it. It will also be interesting to see if Twitter becomes more important to them over time. There were a few other interesting observations about the lack of uptake of Foursquare, Gowalla and mobile apps like that. I don’t think there’s huge uptake of these kinds of apps in Ireland anyway, but maybe students just have different priorities? I’ll return to that another day.

    Technorati Tags: Gowalla,FourSquare,Twitter,UX,user experience,user assistance,Trinity College Dublin

    Read the article

  • Oracle Releases New Mainframe Re-Hosting in Oracle Tuxedo 11g

    - by Jason Williamson
    I'm excited to say that we've released our next generation of re-hosting in 11g. In fact, I'm preparing hands-on labs now for our Systems Integrators in Italy in a couple of weeks, and targeting Latin America next month. If you are an SI or a rehosting firm and are looking to become an Oracle Partner, or want a better understanding of Tuxedo and how to use the workbench for rehosting, drop me a line.

    Oracle Tuxedo Application Runtime for CICS and Batch 11g provides a CICS API emulation and Batch environment that exploits the full range of Oracle Tuxedo's capabilities. Re-hosted applications run in a multi-node, grid environment with centralized production control. Also, enterprise integration of CICS application services benefits from an open and SOA-enabled framework. Key features include:
    - CICS Application Runtime: Runs IBM CICS applications unchanged in an application grid, which enables the distribution of large workloads across multiple processors and nodes. This simplifies CICS administration and can scale to over 100,000 users and over 50,000 transactions per second.
    - 3270 Terminal Server: Protects business users from change through support for tn3270 terminal emulation.
    - Distributed CICS Resource Management: Simplifies deployment and administration by allowing customers to run CICS regions in a distributed configuration.
    - Batch Application Runtime: Provides robust IBM JES-like job management that enables local or remote job submissions. In addition, distributed batch initiators can enable parallelization of jobs and support fail-over, shortening the batch window and helping to meet stringent SLAs.
    - Batch Execution Environment: Helps to run IBM batch unchanged and also supports JCL functionality and all common batch utilities.

    Oracle Tuxedo Application Rehosting Workbench 11g provides a set of automated migration tools integrated around a central repository. The tools provide high precision, which results in very low error rates and the ability to handle large applications. This enables less expensive, low-risk migration projects. Key capabilities include:
    - Workbench Repository and Cataloguer: Ensures integrity of the migrated application assets through full dependency checking. The Cataloguer generates and maintains all relevant meta-data on source and target components.
    - File Migrator: Supports reliable migration of datasets and flat files to ISAM or Oracle Database 11g through automated migration utilities for data unloading, reloading and validation. It also generates logical access functions to shield developers from data repository changes.
    - DB2 Migrator: Similarly, this tool automates the migration of DB2 schema and data to Oracle Database 11g.
    - COBOL Migrator: Supports migration of IBM mainframe COBOL assets (OLTP and Batch) to open systems, and adapts programs for compiler dialects and data access variations.
    - JCL Migrator: Supports migration of IBM JCL jobs to a Tuxedo ART environment, maintaining the flow and characteristics of batch jobs.

    Read the article

  • SQLAuthority News – SQL Server 2012 Upgrade Technical Guide – A Comprehensive Whitepaper – (454 pages – 9 MB)

    - by pinaldave
    Microsoft has just released the SQL Server 2012 Upgrade Technical Guide. This guide is very comprehensive and covers the subject of upgrading in depth. It is indeed a helpful, detailed white paper. Even writing a summary of this white paper would take over 100 pages. This further proves that SQL Server 2012 is quite an important release from Microsoft.

    This white paper discusses how to upgrade from SQL Server 2008/R2 to SQL Server 2012. I love how it starts with the most interesting and basic discussion of upgrade strategies: 1) in-place upgrade, 2) side-by-side upgrade, 3) one-server, and 4) two-server. This whitepaper is not just pure theory but is also an excellent source of tips and tricks. Here is an example of a good tip from the paper: “If you want to upgrade just one database from a legacy instance of SQL Server and not upgrade the other databases on the server, use the side-by-side upgrade method instead of the in-place method.” There are so many bits of trivia, tips, and tricks that compiling a complete list of them in a short period of time seems humanly impossible.

    My friend Vinod Kumar, a SQL Server expert, wrote a very interesting article on the SQL Server 2012 upgrade earlier. In that article, Vinod addressed the most interesting and practical questions related to upgrades. He started with the fundamentals of taking a backup before the upgrade and ended with fail-safe strategies for after the upgrade is over. He covered the concepts end to end in his blog posts, in simple words and extremely precise statements. A successful upgrade follows a cycle of planning, documenting the process, testing, refining the process, testing again, planning the upgrade window, execution, verifying the upgrade, and opening for business. If you are at Vinod’s blog post, I suggest you go all the way down and collect the gold mine of most important links. I have bookmarked the post by blogging about it, and I suggest that you bookmark it as well in whatever way you prefer. Vinod Kumar’s blog post on SQL Server 2012 Upgrade Technical Guide

    The SQL Server 2012 Upgrade Technical Guide is a detailed resource that’s also available online for free. Each chapter was carefully crafted and explained in detail. Before downloading the guide, be aware that it is 9 MB and 454 pages. Here’s the list of chapters:
    Chapter 1: Upgrade Planning and Deployment
    Chapter 2: Management Tools
    Chapter 3: Relational Databases
    Chapter 4: High Availability
    Chapter 5: Database Security
    Chapter 6: Full-Text Search
    Chapter 7: Service Broker
    Chapter 8: SQL Server Express
    Chapter 9: SQL Server Data Tools
    Chapter 10: Transact-SQL Queries
    Chapter 11: Spatial Data
    Chapter 12: XML and XQuery
    Chapter 13: CLR
    Chapter 14: SQL Server Management Objects
    Chapter 15: Business Intelligence Tools
    Chapter 16: Analysis Services
    Chapter 17: Integration Services
    Chapter 18: Reporting Services
    Chapter 19: Data Mining
    Chapter 20: Other Microsoft Applications and Platforms
    Appendix 1: Version and Edition Upgrade Paths
    Appendix 2: SQL Server 2012: Upgrade Planning Checklist

    Download SQL Server 2012 Upgrade Technical Guide [454 pages and 9 MB]

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, DBA, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, SQLServer, T SQL, Technology
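
    As a small practical companion to the upgrade-strategy discussion above, the checks below are the kind of thing a DBA typically runs before and after an upgrade. This is a minimal illustrative sketch of my own, not an excerpt from the whitepaper, and the database name AdventureWorks is just a placeholder:

        -- Confirm the instance version and edition before and after the upgrade
        SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
               SERVERPROPERTY('ProductLevel')   AS ProductLevel,
               SERVERPROPERTY('Edition')        AS Edition;

        -- Both in-place and side-by-side upgrades leave each database at its old
        -- compatibility level; review it and raise it only after testing
        SELECT name, compatibility_level
        FROM sys.databases;

        -- Example: move a database to the SQL Server 2012 compatibility level
        ALTER DATABASE AdventureWorks SET COMPATIBILITY_LEVEL = 110;

    Raising the compatibility level is a separate step from the engine upgrade itself, so it is usually left until the application has been retested against the new version.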

    Read the article

  • Not Dead, Just Busy

    - by MOSSLover
    So I didn’t die in a freak smelting accident yet, but I have been dealing with a lot of different things.  I had to take a bit of a break to deal with the cat death issue.  I am not fully recovered, because well it just happened a few months ago.  It kind of sucked.  Plus the apartment feels a lot bigger. Then you have the whole New York Comic Con thing where I had to plan some cosplay costumes.  I have been trying to find time to hang out with friends and have a social life.  That plus I built an entire presentation for iOS development for New York Code Camp.  I am also planning a couple MS Community dinners (namely one a week from Tuesday) plus a give camp.  I am also planning a vacation around SPS UK plus I will be at SPC.  Life is just incredibly hectic and when you factor in dating to the mix it’s gotten insane to the point where some day I just have to go dark.  Hence the lack of blogging.  I am just trying to keep up with everything and everyone without losing myself. If you guys will be at SPC or SPS UK I will be at both places this year.  Stop by the Planet Technologies booth and see me or I’ll be around somewhere.  I am really sorry if I don’t remember you from an event or if you are someone following me on twitter.  I am trying to get better at the mnemonic memory devices, but I think things broke down around the 47th event I attended or spoke at or something to that nature.  If anyone wants to talk to Cathy, Lori, or I about Women in SharePoint definitely find us at the event.  Anyway good night and good luck guys.  I promise to check back at least once before the year ends.  In the meantime twitter stalking is always possible.  Sometimes I even respond back. Technorati Tags: SPC,SPS UK,NYCC,NYC Code Camp,MOSSLover

    Read the article

  • Windows CE: Changing Static IP Address

    - by Bruce Eitman
    A customer contacted me recently and asked me how to change a static IP address at runtime. Of course this was not something that I knew how to do off the top of my head, but with a little bit of research I figured out how to do it. It turns out that the challenge is to request that the adapter update itself with the new IP address. Otherwise, changing the IP address is just a matter of changing the address in the registry for the adapter. The registry entry is something like:

        [HKEY_LOCAL_MACHINE\Comm\LAN90001\Parms\TcpIp]
           "EnableDHCP"=dword:0
           "IpAddress"="192.168.0.100"
           "DefaultGateway"="192.168.0.1"
           "Subnetmask"="255.255.255.0"

    Where LAN90001 would be replaced with your adapter name. I have written quite a few articles about how to modify the registry, including a registry editor that you could use. Requesting that the adapter update itself is a matter of getting a handle to the NDIS driver and then asking it to rebind the adapter. The code is:

        #include <windows.h>
        #include "winioctl.h"
        #include "ntddndis.h"

        void RebindAdapter( TCHAR *adaptername )
        {
            HANDLE hNdis;
            BOOL fResult = FALSE;
            DWORD count;

            // Make this function easier to use - hide the need to have two null characters.
            int length = wcslen(adaptername);
            int AdapterSize = (length + 2) * sizeof( TCHAR );
            TCHAR *Adapter = (TCHAR *)malloc(AdapterSize);
            wcscpy( Adapter, adaptername );
            Adapter[ length ]     = TEXT('\0');
            Adapter[ length + 1 ] = TEXT('\0');

            hNdis = CreateFile(DD_NDIS_DEVICE_NAME,
                        GENERIC_READ | GENERIC_WRITE,
                        FILE_SHARE_READ | FILE_SHARE_WRITE,
                        NULL,
                        OPEN_ALWAYS,
                        0,
                        NULL);

            if (INVALID_HANDLE_VALUE != hNdis)
            {
                fResult = DeviceIoControl(hNdis,
                              IOCTL_NDIS_REBIND_ADAPTER,
                              Adapter,
                              AdapterSize,
                              NULL,
                              0,
                              &count,
                              NULL);
                if( !fResult )
                {
                    RETAILMSG( 1, (TEXT("DeviceIoControl failed %d\n"), GetLastError() ));
                }
                CloseHandle(hNdis);
            }
            else
            {
                RETAILMSG( 1, (TEXT("Failed to open NDIS Handle\n")));
            }

            free( Adapter );
        }

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPWSTR lpCmdLine, int nCmdShow)
        {
            RebindAdapter( TEXT("LAN90001") );
            return 0;
        }

    If you don’t want to write any code, but instead plan to use a registry editor to change the IP address, there is a command-line utility to do the same thing. NDISConfig.exe can be used:

        ndisconfig adapter rebind LAN90001

    Copyright © 2012 – Bruce Eitman All Rights Reserved

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 25 (sys.dm_db_missing_index_details)

    - by Tamarick Hill
    The sys.dm_db_missing_index_details Dynamic Management View is used to return information about missing indexes on your SQL Server instances. These are indexes the optimizer has identified as ones it would have liked to use but that did not exist. You may also see these same indexes indicated in other tools such as query execution plans or the Database Engine Tuning Advisor. Let’s query this DMV so we can review the information it provides. I do not have any missing index information for my AdventureWorks2012 database, but for the purposes of illustrating the result set of this DMV, I will present the results from my msdb database.

        SELECT * FROM sys.dm_db_missing_index_details

    The first column presented is the index_handle, which uniquely identifies a particular missing index. The next two columns represent the database_id and the object_id for the particular table in question. Next is the ‘equality_columns’ column, which gives you a comma-separated list of columns that would be beneficial to the optimizer for equality operations. By equality operation I mean any query that uses a filter or join condition such as WHERE A = B. The next column, ‘inequality_columns’, gives you a comma-separated list of columns that would be beneficial to the optimizer for inequality operations. An inequality operation is anything other than A = B. For example, “WHERE A != B”, “WHERE A > B”, “WHERE A < B”, and “WHERE A <> B” would all qualify as inequality. Next is the ‘included_columns’ column, which lists all columns that would be beneficial to the optimizer for purposes of providing a covering index and preventing key/bookmark lookups. Last is the ‘statement’ column, which lists the name of the table where the index is missing.

    This DMV can help you identify potential indexes that could be added to improve the performance of your system. However, I will advise you not to just take the output of this DMV and create an index for everything you see. Everything listed here should be analyzed and then tested on a Development or Test system before implementing it in a Production environment. For more information on this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms345434.aspx Follow me on Twitter @PrimeTimeDBA
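
    To put the column descriptions above to work, the sketch below joins this DMV to its companion DMVs to rank the suggestions by a rough benefit estimate. This is my own illustrative query, not one from the article, and the weighting formula is a common heuristic rather than an official measure, so treat the output as input to the analysis and testing the author recommends:

        -- Rank missing-index suggestions by a rough estimate of their benefit.
        SELECT TOP (25)
               mid.[statement]          AS table_name,
               mid.equality_columns,
               mid.inequality_columns,
               mid.included_columns,
               migs.user_seeks,
               migs.user_scans,
               (migs.user_seeks + migs.user_scans)
                   * migs.avg_total_user_cost
                   * (migs.avg_user_impact / 100.0) AS estimated_benefit
        FROM sys.dm_db_missing_index_details AS mid
        JOIN sys.dm_db_missing_index_groups AS mig
            ON mig.index_handle = mid.index_handle
        JOIN sys.dm_db_missing_index_group_stats AS migs
            ON migs.group_handle = mig.index_group_handle
        ORDER BY estimated_benefit DESC;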

    Read the article

  • Unification of TPL TaskScheduler and RX IScheduler

    - by JoshReuben
        using System;
        using System.Collections.Generic;
        using System.Reactive.Concurrency;
        using System.Security;
        using System.Threading;
        using System.Threading.Tasks;
        using System.Windows.Threading;

        namespace TPLRXSchedulerIntegration
        {
            public class MyScheduler : TaskScheduler, IScheduler
            {
                private readonly Dispatcher _dispatcher;
                private readonly DispatcherScheduler _rxDispatcherScheduler;
                //private readonly TaskScheduler _tplDispatcherScheduler;
                private readonly SynchronizationContext _synchronizationContext;

                public MyScheduler(Dispatcher dispatcher)
                {
                    _dispatcher = dispatcher;
                    _rxDispatcherScheduler = new DispatcherScheduler(dispatcher);
                    //_tplDispatcherScheduler = FromCurrentSynchronizationContext();
                    _synchronizationContext = SynchronizationContext.Current;
                }

                #region RX

                public DateTimeOffset Now
                {
                    get { return _rxDispatcherScheduler.Now; }
                }

                public IDisposable Schedule<TState>(TState state, DateTimeOffset dueTime, Func<IScheduler, TState, IDisposable> action)
                {
                    return _rxDispatcherScheduler.Schedule(state, dueTime, action);
                }

                public IDisposable Schedule<TState>(TState state, TimeSpan dueTime, Func<IScheduler, TState, IDisposable> action)
                {
                    return _rxDispatcherScheduler.Schedule(state, dueTime, action);
                }

                public IDisposable Schedule<TState>(TState state, Func<IScheduler, TState, IDisposable> action)
                {
                    return _rxDispatcherScheduler.Schedule(state, action);
                }

                #endregion

                #region TPL

                /// Simply posts the tasks to be executed on the associated SynchronizationContext
                [SecurityCritical]
                protected override void QueueTask(Task task)
                {
                    _dispatcher.BeginInvoke((Action)(() => TryExecuteTask(task)));
                    //TryExecuteTaskInline(task, false);
                    //task.Start(_tplDispatcherScheduler);
                    //m_synchronizationContext.Post(s_postCallback, (object)task);
                }

                /// The task will be executed inline only if the call happens within the associated SynchronizationContext
                [SecurityCritical]
                protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
                {
                    if (SynchronizationContext.Current != _synchronizationContext)
                    {
                        SynchronizationContext.SetSynchronizationContext(_synchronizationContext);
                    }
                    return TryExecuteTask(task);
                }

                // not implemented
                [SecurityCritical]
                protected override IEnumerable<Task> GetScheduledTasks()
                {
                    return null;
                }

                /// Implements the MaximumConcurrencyLevel property for this scheduler class.
                /// By default it returns 1, because a <see cref="T:System.Threading.SynchronizationContext"/> based
                /// scheduler only supports execution on a single thread.
                public override Int32 MaximumConcurrencyLevel
                {
                    get
                    {
                        return 1;
                    }
                }

                //// preallocated SendOrPostCallback delegate
                //private static SendOrPostCallback s_postCallback = new SendOrPostCallback(PostCallback);

                //// this is where the actual task invocation occurs
                //private static void PostCallback(object obj)
                //{
                //    Task task = (Task) obj;
                //    // calling ExecuteEntry with double execute check enabled because a user implemented SynchronizationContext could be buggy
                //    task.ExecuteEntry(true);
                //}

                #endregion
            }
        }

    What Design Pattern did I use here?

    Read the article

  • SQL SERVER – Three Puzzling Questions – Need Your Answer

    - by pinaldave
    Last week I had asked three questions on my blog, and I got a very good response to them. I am planning to write a summary post for each of the three questions next week. Before I write those summary posts and give credit to all the valid answers, I wanted to bring the questions to the notice of all of you this week.

    Why SELECT * throws an error but SELECT COUNT(*) does not
    This is indeed a very interesting question, as not many realize that SQL Server demonstrates this kind of behavior out of the box. Once you run both pieces of code and read the explanation, it makes total sense why SQL Server behaves the way it does. There is also a Connect item associated with it. Also read the very first comment by Rob Farley; it shares some very interesting detail.

    Statistics are not Updated but are Created Once
    This puzzle has multiple right answers, and I am glad to see many of the correct answers in the comments to this blog post. Statistics are very important; they really help the SQL Server engine come up with an optimal execution plan. DBAs quite often ignore statistics, thinking they do not need to be updated because they are automatically maintained if the proper database settings are configured (auto update and auto create). Well, in this question we have a scenario where, even though auto create and auto update statistics are ON, the statistics are not updated. There are multiple solutions, but what would be your solution in this case?

    When to use Function and When to use Stored Procedure
    This question is a rather open-ended one – there is no right or wrong answer. Every developer has used functions and stored procedures. Here is the chance to justify when to use a stored procedure and when to use a function. I want to acknowledge that they can sometimes be used interchangeably, but there are a few cases where one should not do that, and a few cases where one is better than the other. Let us discuss this here. Your opinion matters.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
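
    For readers who want to poke at the statistics puzzle themselves, the commands below show how to check when statistics on a table were last updated and how to refresh them manually. This is a generic sketch of mine, not the answer to the puzzle, and the table name dbo.SalesOrder is just a placeholder:

        -- When were the statistics on a table last updated?
        SELECT s.name AS stats_name,
               STATS_DATE(s.object_id, s.stats_id) AS last_updated
        FROM sys.stats AS s
        WHERE s.object_id = OBJECT_ID('dbo.SalesOrder');

        -- Force an update regardless of the auto update threshold
        UPDATE STATISTICS dbo.SalesOrder WITH FULLSCAN;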

    Read the article

  • Move Window Buttons Back to the Right in Ubuntu 10.04

    - by Trevor Bekolay
    One of the more controversial changes in the Ubuntu 10.04 beta is the Mac OS-inspired change to have window buttons on the left side. We’ll show you how to move the buttons back to the right.

    Before
    While the change may or may not persist through to the April 29 release of Ubuntu 10.04, in the beta version the maximize, minimize, and close buttons appear in the top left of a window.

    How to move the window buttons
    The window button locations are dictated by a configuration file. We’ll use the graphical program gconf-editor to change this configuration file. Press Alt+F2 to bring up the Run Application dialog box, enter “gconf-editor” in the text field, and click on Run. The Configuration Editor should pop up. The key that we want to edit is in apps/metacity/general. Click on the + button next to the “apps” folder, then beside “metacity” in the list of folders expanded for apps, and then click on the “general” folder. The button layout can be changed by changing the “button_layout” key. Double-click button_layout to edit it. Change the text in the Value text field to:

        menu:maximize,minimize,close

    Click OK and the change will occur immediately, changing the location of the window buttons in the Configuration Editor. Note that this ordering of the window buttons is slightly different than the typical order; in previous versions of Ubuntu and in Windows, the minimize button is to the left of the maximize button. You can change the button_layout string to reflect that ordering, but using the default Ubuntu 10.04 theme, it looks a bit strange. If you plan to change the theme, or even just the graphics used for the window buttons, then this ordering may be more natural to you.

    After
    After this change, all of your windows will have the maximize, minimize, and close buttons on the right. What do you think of Ubuntu 10.04’s visual change? Let us know in the comments!

    Read the article

  • Computer / Software Engineering vs Other Engineering Disciplines [closed]

    - by Mohammad Yaseen
    Since this was a rather specific question, I have tried my best to present this question in a format which fits the style of this site. Please comment if it can be improved further. I have to choose the Engineering discipline on 6th Nov. My interest is in Robotics, hardware-level programming, Artificial Intelligence and back-end programming. I am currently working as a freelance developer using mainly PHP and occasionally working with GWT.I am somewhat familiar with C# and Python too. I am not super good at programming but I do like it. I am thinking to choose Computer and Information Systems Engineering as this is what I love but all the eggheads of my city are going to Mechanical Engineering and when I ask one of them Why are you choosing this? They say It's my interest and for job and the money. Basically I am confused between CIS and Mechanical Engineering, specifically the job market for both. Since this is a programmers' site I think following questions will be relevant . I am asking this because I want to take advice from professionals in this field before diving too deep . Are you happy with your job / work and pay. Are you satisfied with the work environment and career growth Do you feel OK (or great?) about the near and/or distant future of your industry. Why should a person choose Computer if he has other choices i.e what this industry has to offer in particular which other fields of work don't This industry is subject to rapid changes and you have to learn continuously throughout your entire career. Does this learning and constant hard work pay off ? In my country there is no hardware manufacturing. So most of CIS graduates (like Software Engineers) work in Software Houses. What is the scenario in your country. Is a degree titled 'Software' necessary or companies will take Computer Engineers too if they have relevant experience. I am asking this because I plan to move abroad for work. This is going to be something which I'll do for the rest of my life so I am a bit confused about the right choice. You can view the course outline for both programmes below. Computer and Information Systems Engineering. Mechanical Engineering

    Read the article

  • ROA on top of SOA

    - by Vaibhav Pujari
    I already have a stable Service Oriented Architecture for my application which exposes services as API calls. (the verbs) Now, I need to build a Resource Oriented Architecture to expose a RESTful API to interact with the application objects. (the nouns) What are the best practices to reuse the existing services: - without any persistence inside my new code. - without putting unnecessary logic into the REST layer i.e. it should ideally just leverage the services provided by SOA API. I want this layer to be as thin as possible. - without modifying the existing SOA API - allow easy extension of the REST API i.e. it should be easy to add more resources without changing the (yet to be written) core code. (I want to make resource names and their associated actions configurable so more contributors can easily add resources without a need to understand my module) Any advices/suggestions how to achieve this? Edit: Adding more info My Stack: My existing stacks is in Java. But since I plan to just use the services, I don't think that should affect the design of new REST code. I am planning to implement the new REST code in PHP. How well the services map to resources? Some services are mapped well i.e. there are services for creating, updating application objects. But for other application objects, there are no direct services available. More importantly, there are actions beyond just create, update etc. that apply to application objects. And I would like to provide some way for these actions to be exposed through REST. Since these are verbs, how do I deal with them? Where exactly I need help? I would appreciate any help towards high level design to accomplish the task along-with making the framework extendible. For instance, tomorrow there are some new services added to my SOA layer, I want to make sure it is easy for a fresh developer to write a REST call by simply registering a new resource (in a config file/db) and write code for connecting it with SOA calls. Just like plugin.

    Read the article

  • A Technique for Performing Cross-host Upgrades to FMW 11gR1

    - by reza.shafii
    The main tool used for the upgrade of iAS 10g mid-tier (data not stored in 10g meta-data repository schemas) environments to Fusion Middleware (FMW) 11gR1 is the FMW Upgrade Assistant (UA). This tool performs what we call an out-of-place upgrade which in a nut-shell means the following: Upgrade is performed by pointing the UA to a 10g source topology as well as an 11g destination topology. The destination topology must be created, using the standard FMW 11g installation and configuration process, prior to the execution of the UA. The UA carries over all of the required changes from the source environment to the destination. This approach has a number of advantages rooted in the fact that the source environment - which is presumably working well and serving its needs - is not disturbed during the upgrade process as the UA only performs read-only operations on it. The UA today can only perform such out-of-place upgrades when the source and destination topologies reside on the same machine. This can sometimes be an issue when the host on which the iAS 10g environment is installed is running at full capacity and installing new hardware for the purpose of the upgrade (in most cases what would be needed is extra memory) is completely infeasible. In such cases, upgrade across a different host is still possible by using the following technique: Backup your source environment and restore it on to a target machine. The backup and restore procedures for the iAS 10.1.2 components are described within this section of the release's Administration Guide. As described in the docs, the Oracle Application Server Backup and Recovery Tool provides capabilities for backing up the installation on one machine and restoring it on another which is exactly what you want to do for the purpose of cross host upgrade. Ensure that the restored environment on your target host is fully functional. Go through the upgrade steps on the target machine to perform the out-of-place upgrade using the UA. Although this process does add another big step to the overall upgrade process, it does make it possible to perform a cross-host upgrade to 11gR1 when necessary. The easiest approach would of course be to find a way of ensuring that the required hardware capacity for upgrade is available on the original 10g host. Using techniques such as scheduling the upgrade at low traffic times and/or temporarily stopping other processes running on the machine to clear up some memory might provide you the sufficient memory needed to perform the out-of-place upgrade and save you the need for using the backup/restore technique I have described in this post.

    Read the article

  • ASP.Net MVC - how to post values to the server that are not in an input element

    - by David Carter
    Problem
    As was mentioned in a previous blog, I am building a web page that allows the user to select dates in a calendar and then shows the dates in an unordered list. The problem now is that those dates need to be sent to the server on page submit so that they can be saved to the database. If I were storing the dates in an input element, say a textbox, that wouldn't be an issue, but because they are in an html element whose contents are not posted to the server, an alternative strategy needs to be developed.

    Solution
    The approach that I took to solve this problem is as follows:

    1. Place a hidden input field on the form

        <input id="hiddenDates" name="hiddenDates" type="hidden" value="" />

    ASP.Net MVC has an Html helper with a method called Hidden() that will do this for you: @Html.Hidden("hiddenDates").

    2. Copy the values from the html element to the hidden input field before submitting the form

    The following javascript is added to the page:

        $(function () {
            $('#formCreate').submit(function () {
                PopulateHiddenDates();
            });
        });

        function PopulateHiddenDates() {
            var dateValues = '';
            $($('#dateList').children('li')).each(function(index) {
                dateValues += $(this).attr("id") + ",";
            });
            $('#hiddenDates').val(dateValues);
        }

    I'm using jQuery to bind to the form submit event so that my method to populate the hidden field gets called before the form is submitted. The dateList element is an unordered list, and by using the jQuery each function I can iterate through all the <li> items that it contains, get each item's id attribute (to which I have assigned the value of the date in milliseconds), and write them to the hidden field as a comma-delimited string.

    3. Process the dates on the server

        [HttpPost]
        public ActionResult Create(string hiddenDates, int utcOffset)
        {
            List<DateTime> dates = GetDates(hiddenDates, utcOffset);
            return View();
        }

        private List<DateTime> GetDates(string hiddenDates, int utcOffset)
        {
            List<DateTime> dates = new List<DateTime>();
            var values = hiddenDates.Split(",".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
            foreach (var item in values)
            {
                DateTime newDate = new DateTime(1970, 1, 1).AddMilliseconds(double.Parse(item)).AddMinutes(utcOffset * -1);
                dates.Add(newDate);
            }
            return dates;
        }

    By declaring a parameter with the same name as the hidden field, ASP.Net will take care of finding the corresponding entry in the form collection posted back to the server and binding it to the hiddenDates parameter! Excellent! I now have the dates the user selected and I can save them to the database. I have also used the same technique to pass back a utcOffset so that I know what timezone the user is in and can show the dates correctly to users in other timezones if necessary (this isn't strictly necessary at the moment, but I plan to introduce times later). Saving multiple dates from an unordered list - DONE!

    Read the article

  • Simple tips to design a Customer Journey Map

    - by Isabel F. Peñuelas
    “A model can abstract to a level that is comprehensible to humans, without getting lost in details.” -The Unified Modeling Language Reference Manual.

    Inception using Post-it, Storyboard, Lego or Mindmapping Techniques
    The first step in a Customer Experience project is to describe customer interactions by creating a customer journey map. Modeling is never easy, so to succeed in this effort it is very convenient that your CX team has some “abstract thinking” skills. Besides, it is very helpful to consult a Business Service Design offered by an Interactive Agency to lead your inception process. Initially, you may start with a free discussion using post-it cards, storyboards, or even Lego or any other brainstorming technique you like. This will help you get your mind into the path followed by the customer to purchase your product or to consume any business service you actually offer to your customers, or plan to offer in the near future. (from www.servicedesigntools.org) Colorful mind maps are very useful to document and share meeting ideas. Some mind-mapping software providers such as ThinkBuzan offer trial versions, and you will find more mind-mapping options in this post by Mashable. Finally, to produce a quick one, I do recommend Wise, an entirely online mind-mapping service. In my view, the best results in terms of communication will always come from an artistic hand-made drawing. Customer Experience Mind Map Example

    Making your first Customer Journey Map
    To add some more formalization to your thoughts, there is a wide offering of tools for designing Customer Journey Maps. A customer map can be represented as an oriented graph in which one step follows another. The one below is the simplest Customer Journey you can draw: nothing more than a couple of pictures, numbers and lines to design the sequence of customer steps in the purchase process. Very simple Customer Journey for Social Mobile Shopping. There are a lot of much more sophisticated Customer Journey templates available on the Web, in a variety of styles – for example this one with a focus on underlining the emotional experience, or this other worksheet template. Representing different interaction devices on the vertical axis, and touchpoints / requirements and existing gaps horizontally, is today’s most common format for Customer Journeys.

    From Customer Journey Maps to CX Technology Adoption Plans
    Once you have your map ready, you can start to identify the IT infrastructure requirements for your CX project. By analyzing customer problems and improvement opportunities with maps, you will then identify the technology gaps and the new investment requirements in your IT infrastructure. Drilling down step by step from the more abstract to the more concrete is the best guarantee of making the right IT investment decisions. And remember to keep your initial customer journey map safe in your pocket in every one of your CX project meetings – that’s your map to success!

    Read the article

  • Introducing Glimpse – Firebug for your server

    - by Neil Davidson
    Here at Red Gate, we spend every waking hour trying to wow .NET and SQL developers with great products.  Every so often, though, we find something out in the wild which knocks our socks off by taking “ingeniously simple” to a whole new level.  That’s what a little community led by developers Nik Molnar and Anthony van der Hoorn has done with the open source tool Glimpse. Glimpse describes itself as ‘Firebug for the server.’  You drop the NuGet package into your ASP.NET project, and then — like magic* — your web pages will bare every detail of their execution.  Even by our high standards, it was trivial to get running: if you can use NuGet, you’re already there. You get all that lovely detail without changing any code. Our feelings go beyond respect for the developers who designed and wrote Glimpse; we’re thrilled that Nik and Anthony have come to work for Red Gate full-time. They’re going to stay in control of the project and keep doing open source development work on Glimpse.  In the medium term, we’re hoping to make paid-for products which plug into the free open source framework, especially in areas like performance profiling where we already have some deep technology.  First, though, Glimpse needs to get from beta to a v1. Given the breakneck pace of new development, this should only be a month or so away. Supporting an open source project is a first for Red Gate, so we’re going to be working with Nik and Anthony, with the Glimpse community and even with other vendors to figure out what ‘great’ looks like from the a user perspective.  Only one thing is certain: this technology deserves a wider audience than the 40,000 people who have already downloaded it, so please have a look and tell us what you think. You can hear more about what the Glimpse developers think on the Glimpse blog, and there are plenty more technical facts over at our product manager’s blog. If you have any questions or queries, please tweet with the #glimpse hashtag or contact the Glimpse team directly on [email protected]. [*That’s ”magic” in the Arthur C. Clarke “sufficiently advanced technology” sense, of course] Neil Davidson co-founder and Joint CEO Red Gate Software http://twitter.com/neildavidson    

    Read the article

  • How to become a solid python web developer [closed]

    - by Estarius
    Possible Duplicate: How do I learn Python from zero to web development? I have started Python recently with the goal to become a solid developer to make a web application eventually. However, as time goes by I am wondering if I am being optimal about how I will achieve my goal. I would compare it to a game for example, to be better you must spend time playing and trying new things... However, if you just log in and sit in the lobby chatting you are most likely not progressing. So far, this is my plan (feel free to comment or judge it): Review basic programmation concepts Start coding slowly in Python Once comfortable in Python, learn about web development in Python Learn about those things we heard about: SQLAlchemy, MVC, TDD, Git, Agile (Group project) To achieve these things, I started the Learn python the hard way exercises, which I am doing at the rate of 5 per days. I also started to read Think Python at the same time and planning to move on with Dive into python. As far as my research goes, these documentations along with Python documentation is usually what is the most recommended to learn Python. I consider this to get my point 1 and 2 done. While learning Python is really great, my goal remains to do quality web development. I know there are books about Django etc. however I would like to become comfortable with any Python web development. This means without Framework and with Framework... Any framework, then be able to choose the one which best fits our needs. For this I would like to know if some people have suggestions. Should I just get a book on Django and it should apply to everything ? What would be the best method to go from Python to Web Python and not end up creating crappy code which would turn into nightmares for other programmers ? Then finally, those "things we hear about". While I understand what they all do basically, I am fairly sure that like everything, there are good and wrong ways of making use of them. Should I go through at least a whole book on each before starting to use them or keep it at their respective online documentation ? Are there some kind of documentation which links their use to Python ? Also, from looking at Django and Pyramid they seems to use something else than MVC, while the Django model looks similar, the Pyramid one seems to cut a whole part of it... Is learning MVC still worth it ? Sorry for the wall of text, Thanks in advance !

    Read the article

  • Is hiring a "chief intern" a good idea?

    - by dukeofgaming
    I'm starting an internship program for our software department and I was wondering about creating a position ("chief intern", intern supervisor, or whatever one should call it) with the following responsibilities: Train interns Coach interns Manage projects and tasks for interns Supervise intern's work in terms of rhythm and quality Act as a liaison between the main team's needs and interns performance/aspirations Evaluate and facilitate intern's progress when they want to grab a higher-level domain-specific task (at this point, a main dev team member can do mentoring) Get freely involved in the main team's software development tasks so that he himself can grow, and have full mentorship from the main dev team. I'm thinking that an apprentice-level engineer (below Jr., or Jr.; but being a graduate and working full-time) can handle this for a while (he will be trained by the main dev team first), until one of two things happen: He/she decides to move on to the main dev team by recommending an appropriate replacement (or me finding another one as a new hire) Keep leading the interns while still being able to grow to Jr. Eng., Eng., Sr. Eng I know the notion of a "chief intern" is common within the medical world, but I don't really know about that in the software world (I was a freelancer for most of my university years). A side-intention to this is also that, if this ends up being a higher rotation position (organically) because the intern supervisor wants to join the main dev team, this could help interns that aspire this position emerge as leaders. My main intention for this, though, is removing distractions from the main team but without making the interns suffer the lack of attention, which could lead to boredom and little intern retention. Is this "chief intern" idea common (or good at least)?, are there any obvious risks to it that I might not be seeing? Edit: I have a draft plan for the kind of work the interns would be doing: Are R&D mini-projects a good activity for interns? Edit #2: My intention is not keeping them isolated, but having someone focus on giving attention to them when we cannot. Edit #3: I'm now convince it is a good idea, but I will take the organic approach to hiring someone in such position: do it myself until I cannot. This way I'll know better what to expect from a person I hire for this role in the future, as well as what works and what doesn't with interns.

    Read the article

  • ASP.Net or WPF(C#)?

    - by Rachel
    Our team is divided on this and I wanted to get some third-party opinions. We are building an application and cannot decide if we want to use .Net WPF Desktop Application with a WCF server, or ASP.Net web app using jQuery. I thought I'd ask the question here, with some specs, and see what the pros/cons of using either side would be. I have my own favorite and feel I am biased. Ideally we want to build the initial release of the software as fast as we can, then slow down and take time to build in the additional features/components we want later on. Above all we want the software to be fast. Users go through records all day long and delays in loading records or refreshing screens kills their productivity. Application Details: I'm estimating around 100 different screens for initial version, with plans for a lot of additional screens being added on later after the initial release. We are looking to use two-way communication for reminder and event systems Currently has to support around 100 users, although we've been told to allow for growth up to 500 users We have multiple locations Items to consider (maybe not initially in some cases but in future releases): Room for additional components to be added after initial release (there are a lot of of these... perhaps work here than the initial application) Keyboard navigation Performance is a must Production Speed to initial version Low maintenance overhead Future support Softphone/Scanner integration Our Developers: We have 1 programmer who has been learning WPF the past few months and was the one who suggested we use WPF for this. We have a 2nd programmer who is familiar with ASP.Net and who may help with the project in the future, although he will not be working on it much up until the initial release since his time is spent maintaining our current software. There is me, who has worked with both and am comfortable in either We have an outside company doing the project management, and they are an ASP.Net company. We plan on hiring 1-2 others, however we need to know what direction we are going in first Environment: General users are on Windows 2003 server with Terminal Services. They connect using WYSE thin-clients over an RDP connection. Admin staff has their own PCs with XP or higher. Users are allowed to specify their own resolution although they are limited to using IE as the web browser. Other locations connects to our network over a MPLS connection Based on that, what would you choose and why? I am asking here instead of SO because I am looking for opinions and not answers

    Read the article

  • SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

    - by pinaldave
    When I decided to start writing about this wait type, the very first question that came to my mind was, “What does ‘OLEDB’ stand for?” A quick search on Wikipedia tells me that OLEDB means Object Linking and Embedding Database. (How many of you knew this?) Anyway, I found it very interesting that this wait type was one of the top 10 wait types in many of the systems I have come across in my performance tuning experience.

    Books On-Line:
    OLEDB occurs when SQL Server calls the SQL Server Native Client OLE DB Provider. This wait type is not used for synchronization. Instead, it indicates the duration of calls to the OLE DB provider.

    OLEDB Explanation:
    This wait type primarily occurs when a Linked Server query or Remote Query has been executed. The most common case in which this wait type is visible is during the execution of a Linked Server query. When SQL Server retrieves data from the remote server, it uses the OLEDB API to fetch the data. It is possible that the remote system is not quick enough or that the connection between the two is not fast enough, leading SQL Server to wait for the results to return from the remote (or external) server. This is when the OLEDB wait type occurs.

    Reducing OLEDB wait:
    - Check the Linked Server configuration.
    - Check disk-related Perfmon counters:
      - Average Disk sec/Read (a consistently higher value than 4-8 milliseconds is not good)
      - Average Disk sec/Write (a consistently higher value than 4-8 milliseconds is not good)
      - Average Disk Read/Write Queue Length (a consistently higher value than your benchmark is not good)

    At this point in time, I am not able to think of any more ways of reducing this wait type. Do you have any opinion about this subject? Please share it here and I will share your comment with the rest of the Community, and of course, with due credit unto you. Please read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
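
    As a quick way to check whether this wait type is significant on your own server, the query below reads the aggregate wait statistics that SQL Server keeps since the last restart (or since the wait stats were last cleared). This is a generic illustrative query of mine rather than a script from the article:

        -- How much accumulated wait time is attributed to OLEDB?
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms,
               wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'OLEDB';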

    Read the article

  • Looking for best practice for version numbering of dependent software components

    - by bit-pirate
    We are trying to decide on a good way to do version numbering for software components, which are depending on each other. Let's be more specific: Software component A is a firmware running on an embedded device and component B is its respective driver for a normal PC (Linux/Windows machine). They are communicating with each other using a custom protocol. Since, our product is also targeted at developers, we will offer stable and unstable (experimental) versions of both components (the firmware is closed-source, while the driver is open-source). Our biggest difficulty is how to handle API changes in the communication protocol. While we were implementing a compatibility check in the driver - it checks if the firmware version is compatible to the driver's version - we started to discuss multiple ways of version numbering. We came up with one solution, but we also felt like reinventing the wheel. That is why I'd like to get some feedback from the programmer/software developer community, since we think this is a common problem. So here is our solution: We plan to follow the widely used major.minor.patch version numbering and to use even/odd minor numbers for the stable/unstable versions. If we introduce changes in the API, we will increase the minor number. This convention will lead to the following example situation: Current stable branch is 1.2.1 and unstable is 1.3.7. Now, a new patch for unstable changes the API, what will cause the new unstable version number to become 1.5.0. Once, the unstable branch is considered stable, let's say in 1.5.3, we will release it as 1.4.0. I would be happy about an answer to any of the related questions below: Can you suggest a best practice for handling the issues described above? Do you think our "custom" convention is good? What changes would you apply to the described convention? Thanks a lot for your feedback! PS: Since I'm new here, I can't create new tags (e.g. best-practice). So, I'm wondering if best-pactice is just misspelled or I don't get its meaning.

    Read the article

  • Kickstarter "last minute cold feet"

    - by mm24
    today I scheduled the publication of a video on kickstarter requesting approximately 5.000 $ in order to complete the iPhone shooter game I started 1 year ago after quitting my job. I invested more than 20.000$ in the game so far (for artwork, music, legal and accountant expenses) and I am now getting cold feet about my decision of publishing the video. The game is "nearly finished", in other words: the game mechanics are working but I still have some bugs to fix. Once I will have finished this (I hope will take me 1 or 2 weeks) I plan to start working on the actual level balancing (e.g. deciding the order of appearence of enemies for each level and balancing the number of hitpoints and strenght of bullets that the enemies have). Reasons for not publishing the video are: fear that the concept can be copied easily: the game is a shooter game set in a different environment (its a pretty cool one, believe me :)) and I am worried that someone might copy* the idea (I know, its the usual "I am worried story.."). A shooter game is one of the easiest game to implement and hence there will be hundreds game developer able to copy it by just adapting their existing code and changing graphics (not as straightforward). It took me one year to develop this because I was inexperienced plus there are approximately 6/7 months of work from the illustrator and there are 8 unique music tracks composed. The soundtrack of the video is the soundtrack of the game wich is not yet published and has not been deposited to a music society. I did create legally valid timestamps for the tracks and I am considering uploading the album on iTunes before publishing the video so I can have a certain publication date. But overall I am a bit scared and worried because I have never done this before and even the simple act of publishing an album requires me to read a long contract from the "aggregator company") which, even if I do have contracts with the musicians do worry me as I am not a U.S. resident and I am not familiar with the U.S. law system Reasons for publishing the video are: I almost run out of money (but this is not a real reason as I should have enough for one more month of development time) ...I kind of need extra money as, even if I do have money for 1 month of development I do not have money for marketing and for other expenses (e.g. accountant) It will create a fan base I could get some useful feedback from a wider range of beta testers It might create some pre-release buzz in case some blogger or game magazine likes the concept Anyone has had similar experiences? Is there a real risk that someone will copy the concept and implement it in a couple of months? Will the Kickstarter campaing be a good pre-release exposure for the gmae? Any refrences of similar projects/situations? Is it realistic that someone like ROVIO will copy the idea straight away?

    Read the article

  • SQLOS and Cloud Infrastructure sessions at PASS Summit 2012

    - by SQLOS Team
    The SQL Pass Summit 2012, the largest yet, is in full swing. Here's a summary of the sessions this week on cloud infrastructure and SQLOS topics. Some of these were today, and you can catch the recordings. One more session takes place on Friday covering SQL Server solution patterns in Windows Azure VMs... Also, catch Thursday's keynote with Quentin Clark which will feature a cool IaaS demo!   SQL Server in Windows Azure VM Sessions CLD-309-A SQLCAT: Best Practices and Lessons Learned on SQL Server in an Azure VM Steve Howard, Arvind Ranasaria - Wednesday 11/6 10:15 This session looked at some best practices to optimize Networking, Memory, Disk IO and high availability based on lessons learned during SQLCat work with customer deployments. Well worth catching the recording.   SQL Server in Azure VM patterns: Hybrid Disaster Recovery, data movement and BI Guy Bowerman, Peter Saddow, Michael Washam, Ross LoForte - Friday 11/9 9:45 Rm 613 [Note: In the guides this has an outdated title.] This session has a focus on SQL Server Azure VM solutions. Starting with the basics and then going deeper into: - New features in the Microsoft Assessment and Planning Toolkit 8.0 to help plan and size SQL VM migrations.- A Look at a Windows Azure VM SQL Server app making use of load balancing and SQL Server high availability features.- A BI case study running SQL BI components in Azure VMs and making use of Windows 8 tiles.- A training class in a VM case study.   SQLOS Sessions DBA-500-HD Inside SQLOS 2012 (half-day session) Bob Ward - Wednesday 11/6 1:30pm Bob Ward from CSS applies his wealth of experience to look at the internals of SQLOS and what's changed in the various SQL 2012 components, including memory, resource governor, scheduler.   DBA-403-M: SQLCAT: Memory Manager Changes in SQL Server 2012 Gus Apostol, Jerome Halmans - 1:30pm Covers the redesigned SQLOS memory manager in SQL Server 2012 including the new page allocator for any size pages (and all that implies), DMVs, demo's. Not sure why this was placed at the same time as the SQLOS half-day session, but since it's recorded it's available for catch-up.   - Guy   Originally posted at http://blogs.msdn.com/b/sqlosteam/
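
    For anyone following up on the memory manager sessions, one change you can see for yourself in SQL Server 2012 is the unified any-size page allocator: the memory clerk DMV now reports a single pages_kb column in place of the earlier single_pages_kb and multi_pages_kb columns. A small illustrative query of my own, not taken from the session materials:

        -- Top memory clerks in SQL Server 2012, using the unified pages_kb column
        SELECT TOP (10)
               type,
               name,
               pages_kb
        FROM sys.dm_os_memory_clerks
        ORDER BY pages_kb DESC;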

    Read the article

  • Create Your CRM Style

    - by Ruth
    Company branding can create a sense of spirit, belonging, familiarity, and fun. CRM On Demand has long offered company branding options, but now, with Release 17, those options have become quicker, easier, and more flexible. Themes (also known as Skins) allow you to customize the appearance of the CRM On Demand application for your entire company or for individual roles. Users may also select the theme that works best for them. You can create a new theme in 5 minutes or less, but if you're anything like me, you may enjoy tinkering with it for a while longer.

    Before you begin tinkering, I recommend spending a few moments coming up with a design plan. If you have specific colors or logos you want for your theme, gather those first...that will move the process along much faster. If you want to match the color of an existing Web site or application, you can use tools like Pixie to match the HEX/HTML color values. Logos must be in JPEG, JPG, PNG, or GIF format. Header logos must be approximately 70 pixels high by 1680 pixels wide. Footer logos must be no more than 200 pixels wide. And, of course, you must have permission to use the images that you upload for your theme.

    Creating the theme itself is the simple part. Here are the steps (note: you must have the Manage Themes privilege to create custom themes):
    1. Click the Admin global link.
    2. Navigate to Application Customization > Themes.
    3. Click New. (You may also choose to copy and edit an existing theme.)
    4. Enter information for the following fields:
       - Theme Name - Enter a name for your new theme.
       - Show Default Help Link - Online help holds valuable information for all users, so I recommend selecting this check box.
       - Show Default Training and Support Link - The Training and Support Center holds valuable information for all users, so I recommend selecting this check box.
       - Description - Enter a description for your new theme.
    5. Click Save.

    Once you click Save, the Theme Detail page opens. From there, you can design your theme. The preview shows the Home, Detail, and List pages with the new theme applied. For more detailed information about themes, click the Help link from any page in CRM On Demand Release 17, then search or browse to find the Creating New Themes page (Administering CRM On Demand > Application Customization > Creating New Themes). Click the Show Me link on that Help page to access the Creating Custom Themes quick guide, which shows how each of the page elements is defined.
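
    If you want to sanity-check a logo before uploading it, the constraints above are easy to verify with a small script. The sketch below is a hypothetical helper, not Oracle-provided tooling; it assumes Python with the Pillow imaging library and treats the quoted header dimensions as upper limits.

        # Hypothetical pre-flight check for a CRM On Demand header logo.
        # Assumes the Pillow library; the dimension limits mirror the ones above.
        from PIL import Image

        ALLOWED_FORMATS = {"JPEG", "PNG", "GIF"}  # .jpg files report as JPEG

        def check_header_logo(path, max_height=70, max_width=1680):
            """Return a list of problems; an empty list means the logo looks OK."""
            problems = []
            with Image.open(path) as img:
                width, height = img.size
                if img.format not in ALLOWED_FORMATS:
                    problems.append(f"unsupported format: {img.format}")
                if height > max_height:
                    problems.append(f"height {height}px exceeds {max_height}px")
                if width > max_width:
                    problems.append(f"width {width}px exceeds {max_width}px")
            return problems

        if __name__ == "__main__":
            issues = check_header_logo("my_company_header.png")  # placeholder path
            print("Logo OK" if not issues else "Fix before upload: " + "; ".join(issues))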

    Read the article

  • Oracle Launches New Oracle Database 12c Administrator Certifications

    - by Brandye Barrington
    Today Oracle University announces the release of new Oracle Database 12c Administrator certifications. The new Oracle Database 12c certifications emphasize the foundational and advanced skills needed by Database Administrators and will prepare DBAs to leverage powerful new management and consolidation capabilities, resulting in an even more valuable credential for customers and partners.

    ORACLE CERTIFIED ASSOCIATE (OCA)
    The objectives of the Oracle Certified Associate (OCA) for Oracle Database 12c certification measure IT professionals' mastery of day-to-day administration skills and their ability to manage the challenges they're likely to encounter on the job. This credential focuses on SQL skills, operational administration of the Oracle Database (including performance and space management), and installing, patching and upgrading the Oracle Database. Earning the OCA credential requires successful completion of two exams: 1Z0-061 - Oracle Database 12c: SQL Fundamentals and 1Z0-062 - Oracle Database 12c: Installation and Administration. The OCA certification track also allows several alternate exams to be substituted for 1Z0-061.

    ORACLE CERTIFIED PROFESSIONAL (OCP)
    Building on the competencies in the Oracle Database 12c OCA certification, the Oracle Certified Professional (OCP) for Oracle Database 12c certification covers the advanced knowledge and skills required of top-performing database administrators. The OCP credential focuses on developing and implementing backup and recovery strategies, designing consolidation strategies that exploit multitenant container and pluggable databases, and a thorough understanding of how CDBs and PDBs fit into the DBaaS cloud-computing model. Today, Oracle is releasing 1Z0-060 - Upgrade to Oracle Database 12c, which allows Oracle Certified Professionals with credentials in Oracle 9i, Oracle Database 10g or Oracle Database 11g to upgrade to Oracle Database 12c with a single exam. The upgrade exam focuses on designing consolidation strategies that exploit multitenant container and pluggable databases, implementing Oracle Database 12c's feature-rich ILM support, optimizing SQL execution using dynamic swapping of subplans, and implementing real-time data redaction within databases, as well as exploiting many additional performance, backup and recovery, security and partitioning enhancements. The exam also includes a thorough review of core DBA skills. Visit the OCP certification track for more details on the new upgrade exam as well as alternate certification paths.

    ORACLE CERTIFIED MASTER (OCM)
    The Oracle Certified Master (OCM) for Oracle Database 12c - a very challenging and elite top-level certification - certifies the most highly skilled and experienced database experts. Further information on the 12c OCM level will be announced as exam development concludes.

    To date, more than 1.6 million Oracle certifications have been granted worldwide. Explore these certification tracks, exam requirements and objectives, and start toward earning your exciting new Oracle Database 12c certification credentials from Oracle.

    Read the article

  • Why You Should Attend MySQL Connect, and Register Now

    - by Bertrand Matthelié
    MySQL Connect is taking place on September 29 and 30 in San Francisco. The early bird discount, which saves you US$500, only runs for a few more days, until July 13. Are you still wondering if you should sign up? Here are 10 reasons why you definitely should:

    1. Learn from other companies how they tackled challenges similar to the ones you're facing. Find out what they learned along the way, and how you can save time, money and a lot of trouble by avoiding the same mistakes and applying the best practices they've developed. You'll get the chance to hear from organizations including PayPal, Verizon, Twitter, Facebook, Ticketmaster, Ning, Mozilla, CERN, Yahoo! and more!
    2. Don't miss this unique opportunity to meet the engineers developing and supporting the MySQL products in a single location. You'll be able to ask them all your questions, which can be a huge time and money saver.
    3. Acquire detailed knowledge about InnoDB, the MySQL Optimizer, high availability strategies, improving performance and scalability, enhancing security and numerous other topics. You'll hear it straight "from the horse's mouth" as well as from other MySQL experts in the ecosystem.
    4. Get a better understanding of Oracle's MySQL strategy and the MySQL roadmap, so you can better plan where to use the MySQL database and MySQL Cluster for your next web, cloud-based and other applications.
    5. Get hands-on experience improving performance with the MySQL Performance Schema, using MySQL Utilities, MySQL Cluster and a lot more, with eight different hands-on labs.
    6. Express your ideas, engage in discussions and help influence the MySQL roadmap during Birds-of-a-Feather sessions on replication, backup, query optimization and other topics.
    7. Meet partners and learn about third-party tools that could be useful in your architecture.
    8. Immerse yourself in the MySQL universe and hang out with MySQL experts for two days. The discussions and the relationships you build can be priceless and help you execute your next projects better and faster.
    9. Register now to save US$500 by taking advantage of the early bird discount running until July 13. We'll have parallel tracks, so you should consider sending a few team members to make the most of the event. Are you attending or planning to attend Oracle OpenWorld or JavaOne? You can add MySQL Connect to your registration for only US$100!
    10. Finally, it's always a lot of fun to attend a MySQL conference. The passion and the energy are contagious... and you'll likely get plenty of new ideas.

    You will find all information about the program in the MySQL Connect Content Catalog. We look forward to seeing you there! You can also read interviews with Tomas Ulin and Ronald Bradford about MySQL Connect. Sponsorship and exhibit opportunities are still available for the conference. You will find more information here.
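
    As a small preview of the Performance Schema material covered in the hands-on labs, the sketch below pulls the most expensive statement digests from a server. It is illustrative only: connection details are placeholders, and the digest summary table assumes MySQL 5.6 or later with performance_schema enabled.

        # Minimal sketch: top statement digests by total wait time.
        # Host, user and password are placeholder assumptions.
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="app", password="secret")
        cursor = conn.cursor()

        cursor.execute("""
            SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
            FROM performance_schema.events_statements_summary_by_digest
            ORDER BY SUM_TIMER_WAIT DESC
            LIMIT 5
        """)

        print("Top 5 statement digests by total wait time:")
        for digest_text, count_star, sum_timer_wait in cursor:
            # SUM_TIMER_WAIT is reported in picoseconds.
            print(f"{count_star:>8} executions  {sum_timer_wait:>18} ps  {(digest_text or '')[:60]}")

        cursor.close()
        conn.close()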

    Read the article
