Search Results

Search found 1870 results on 75 pages for 'steve wright'.


  • Should OpenID clients accept adding WWW to the domain?

    - by Steve Clay
    For a long time I've used OpenID delegation on my site: http://example.org/ delegated to http://example.openid-provider.com/, so I logged into OpenID-consuming sites using the former as my ID. Recently I added www. to my site's canonical domain, so http://example.org/ now redirects to http://www.example.org/. Should I be able to continue logging into existing OpenID accounts using http://example.org/? StackExchange sites say "yes": I can use either URL. At least one other site doesn't recognize my existing account. Who's "right" (per the spec), and is there anything I can fix on my end?

    Read the article

  • Philly.NET Code Camp

    - by Steve Michelotti
    This Saturday I will be at the Philly.NET Code Camp presenting C# 4.0. The code camp is currently registered to capacity (800 attendees), but you will be able to view certain presentations on a Live Meeting simulcast (and later on Channel 9). You can tune in at 3:30 PM Eastern time to view my presentation. The attendee URL is here.

    Read the article

  • Autoscaling in a modern world… Part 1

    - by Steve Loethen
    It has been a while since I have had time to sit down and blog. I need to make sure I take the time. It helps me to focus on technology and not let the administrivia keep me from doing the things I love. I have been focusing on the cloud for the last couple of years, specifically the PaaS platform from Microsoft called Azure. Time to dig in. I wanted to explore autoscaling. Autoscaling is not a native part of Azure. The platform has the needed connection points: you can write code that looks at the health and performance of your application components and reacts to needed scaling changes. But that means you have to write all the code. Luckily, an add-on to the Enterprise Library provides a lot of code that gets you a long way toward being able to autoscale without having to start from scratch. The tool set is primarily composed of an Autoscaler object that you need to host. This object, when hosted and configured, looks at the performance criteria you specify and adjusts your application based on your needs. Sounds perfect. I started with a set of HOLs (hands-on labs) that gave me a good basis to understand the mechanics. I worked through labs 1 and 2 just to get the feel, but let's start our saga at the end of lab 3. Lab 3 ends with a web application hosted in Azure and a console app running on premises. The web app has a few buttons on it. One set adds messages to a queue; another removes them. A second set of buttons drives processor utilization to 100%. If you want to guess, a safe bet is that the Autoscaler is configured to react to a queue that has filled up or to high CPU usage. We will continue our saga in the next post…
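    Before then, for the curious, here is a minimal sketch (mine, not from the HOLs) of what hosting that Autoscaler object in a console app looks like; it assumes the Enterprise Library Autoscaling Application Block (WASABi) is referenced and that its rules and service information are already wired up in app.config:

        using System;
        using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
        using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;

        class AutoscalerHost
        {
            static void Main()
            {
                // Resolve the Autoscaler from the EntLib container; it picks up
                // the rules store and service information from configuration.
                Autoscaler scaler =
                    EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();

                scaler.Start(); // begin evaluating rules against the target roles
                Console.WriteLine("Autoscaler running. Press Enter to stop.");
                Console.ReadLine();
                scaler.Stop();  // stop rule evaluation before exiting
            }
        }

    The Start and Stop calls are essentially all the hosting environment owes the block, which is why it moves so easily from a console app to a worker role later in this series.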

    Read the article

  • New WordPress posts generate 404 errors

    - by Steve
    I had a working installation of WordPress, and I recently encountered an issue where, when I tried to log in to the back end, the browser would redirect to the login URL of the previous domain WordPress was installed on. I fixed this by reinstalling WordPress, and I can now log in to the back end, but any new posts I make, and the old posts I have, generate 404 errors. Additionally, if I try to navigate to any category page, I again receive a 404 error. I have looked at the wp_posts table of my database, and the GUID fields each contain the correct domain name and URL structure. What should I be checking here? Site in question.

    Read the article

  • Autoscaling in a modern world… Part 4

    - by Steve Loethen
    Now that I have the rules and services XML files in the cloud, it is time to sever the bonds of earth and live totally in the cloud. I have to host the Autoscaling object in Azure as well, point it to the rules, tell it about the management certs, and get out of the way. A couple of questions. Where to host? The most obvious place to me was a worker role: a simple, single-purpose worker role, doing nothing but watching my app. Here are the steps I used.

    1) Created a project. A separate project from my web site. I wanted to be able to run the web in the cloud and the autoscaler locally for debugging purposes. Seemed like the easiest way.
    2) Added the WASABi block to the project.
    3) Configured the settings. I used the same settings used for the console app. It points to the same web role and uses the same rules file.
    4) Made sure the certificate needed to manage the role is added to the cert store in the sky ("LocalMachine" and "My" are the default locations).

    I ran the worker role in the local fabric. It worked. I then published to the cloud, and verified it worked again. Here is what my code looked like.

        // 'scaler' is a field on the worker role class
        private Autoscaler scaler;

        public override bool OnStart()
        {
            Trace.WriteLine("Set Default Connection Limit", "Information");
            // Set the maximum number of concurrent connections
            ServicePointManager.DefaultConnectionLimit = 12;

            Trace.WriteLine("Set up configuration change code", "Information");
            CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
                configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)));

            Trace.WriteLine("Get current diagnostic configuration", "Information");
            DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();

            Trace.WriteLine("Set diagnostic buffer size", "Information");
            dmc.Logs.BufferQuotaInMB = 4;

            Trace.WriteLine("Set log transfer period", "Information");
            dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

            Trace.WriteLine("Set log verbosity", "Information");
            // Set log filter to verbose
            dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

            Trace.WriteLine("Start the diagnostic monitor", "Information");
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);

            Trace.WriteLine("Get the current Autoscaler from the EntLib container", "Information");
            scaler = EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();

            Trace.WriteLine("Start the autoscaler", "Information");
            scaler.Start();

            Trace.WriteLine("Call the base class OnStart", "Information");
            return base.OnStart();
        }

        public override void OnStop()
        {
            Trace.WriteLine("Stop the Autoscaler", "Information");
            scaler.Stop();
        }

    I did have to turn on some basic logging for WASABi, which I will cover in the next post. That let me figure out that I hadn't done the certificate step.

    Read the article

  • Game physics presentation by Richard Lord, some questions

    - by Steve
    I've been implementing (in XNA) the examples in this physics presentation by Richard Lord, where he discusses various integration techniques. Bearing in mind that I am a newcomer to game physics (and physics in general), I have some questions. Fifteen slides in, he shows ActionScript code for a gravity example and an animation showing a bouncing ball. The ball bounces higher and higher until it is out of control. I implemented the same in C#/XNA, but my ball appeared to bounce at a constant height. The same applies to the next example, where the ball bounces lower and lower. After some experimentation I found that if I switched to a fixed timestep, and then on the first iteration of Update() set the time variable equal to the elapsed milliseconds (16.6667), I would see the same behaviour. Doing this essentially set the framerate, velocity and acceleration to zero for the first update and introduced errors(?) into the algorithm, causing the ball's velocity to increase (or decrease) over time. I think! My question is: does this make the integration method used poor? Or is it demonstrating that the method is poor when used with a variable timestep, because you can't pass in a valid value for the first round of calculations (since you cannot know the framerate in advance)? I will continue my research into physics, but can anyone suggest a good method to get my feet wet? I would like to experiment with variable timesteps, acceleration that changes over time, and probably friction. Would the Time Corrected Verlet be OK for this?
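    For anyone wanting to experiment along the same lines, here is a minimal sketch (my own, not code from the presentation) of a Time Corrected Verlet step in C#; the names and single-axis state are assumptions for illustration:

        // Time Corrected Verlet: x1 = x + (x - x0) * (dt / dt0) + a * dt * dt
        struct VerletState
        {
            public float Position;         // x
            public float PreviousPosition; // x0
            public float PreviousDt;       // dt0; seed with something sensible
                                           // (e.g. 1f / 60f) to avoid a bogus first step
        }

        static void Step(ref VerletState s, float acceleration, float dt)
        {
            float next = s.Position
                + (s.Position - s.PreviousPosition) * (dt / s.PreviousDt)
                + acceleration * dt * dt;

            s.PreviousPosition = s.Position;
            s.PreviousDt = dt;
            s.Position = next;
        }

    Note how the correction term scales the implied velocity by dt / dt0, which is exactly the part that misbehaves if the first frame's timing values are zero or garbage.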

    Read the article

  • Attracting publishers to an in-house affiliate program

    - by Steve
    The cost to enter affiliate networks can be prohibitive for a (cash-strapped) small business, particularly if it is simply testing the waters of an affiliate program. If such a company wanted to run an affiliate program in-house using off-the-shelf software, what methods would it use to attract publishers? Is it simply a case of SEO or SEM, attempting to attract publishers to the page on its website which outlines the affiliate program? Are there directories to submit one's affiliate program to?

    Read the article

  • Autoscaling in a modern world… last chapter

    - by Steve Loethen
    As we all know as coders, things like logging are never important. Our code will work right the first time. So, you can understand my surprise when, the first time I deployed the autoscaling worker role to the actual Azure fabric, it did not scale. I mean, it worked on my machine. How dare the datacenter argue with that. So, how did I track down the problem? (It turns out it was not so much code as the lack of the right certificate.) When I ran it locally in the developer fabric, I was able to see a wealth of information: lots of periodic status info every time the autoscaler came around to check on my rules and decide whether or not to act. But that information was not making it to Azure storage. The diagnostics were not being transferred to where I could easily see and use them to track down why things were not being cooperative. After a bit of digging, I discovered the problem. You need to add a bit of extra configuration to get the correct information stored for you. I added the following to my app.config:

    Code Snippet

        <system.diagnostics>
          <sources>
            <source name="Autoscaling General" switchName="SourceSwitch"
                    switchType="System.Diagnostics.SourceSwitch">
              <listeners>
                <add name="AzureDiag" />
                <remove name="Default" />
              </listeners>
            </source>
            <source name="Autoscaling Updates" switchName="SourceSwitch"
                    switchType="System.Diagnostics.SourceSwitch">
              <listeners>
                <add name="AzureDiag" />
                <remove name="Default" />
              </listeners>
            </source>
          </sources>
          <switches>
            <add name="SourceSwitch"
                 value="Verbose, Information, Warning, Error, Critical" />
          </switches>
          <sharedListeners>
            <add name="AzureDiag"
                 type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          </sharedListeners>
          <trace>
            <listeners>
              <add name="AzureDiagnostics"
                   type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
                <filter type="" />
              </add>
            </listeners>
          </trace>
        </system.diagnostics>

    Suddenly all the rich tracing info I needed was filling up my storage account. After a few cycles of attempting to scale, I identified the cert problem, uploaded a correct certificate, and away it went. I hope this was helpful.

    Read the article

  • .BasePermissions enumeration options in Microsoft.SharePoint.SPRoleDefinition

    - by steve schofield
    I was trying to add a new permission level to SharePoint 2007 and needed to know all the enumeration options. Here is the list, straight from the error message: "Specify one of the following enumeration values and try again. The possible enumeration values are EmptyMask, ViewListItems, AddListItems, EditListItems, DeleteListItems, ApproveItems, OpenItems, ViewVersions, DeleteVersions, CancelCheckout, ManagePersonalViews, ManageLists, ViewFormPages, Open, ViewPages, AddAndCustomizePages, ApplyThemeAndBorder, ApplyStyleSheets, ViewUsageData, CreateSSCSite, ManageSubwebs, CreateGroups, ManagePermissions, BrowseDirectories, BrowseUserInfo, AddDelPrivateWebParts, UpdatePersonalWebParts, ManageWeb, UseClientIntegration, UseRemoteAPIs, ManageAlerts, CreateAlerts, EditMyUserInfo, EnumeratePermissions, FullMask."
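    For context, here is a minimal sketch of how these values get used when adding a permission level; it assumes the SharePoint 2007 server object model (Microsoft.SharePoint.dll), and the site URL and role name are placeholders, not from the original post:

        using System;
        using Microsoft.SharePoint;

        class AddPermissionLevel
        {
            static void Main()
            {
                using (SPSite site = new SPSite("http://server/sites/demo")) // placeholder URL
                using (SPWeb web = site.OpenWeb())
                {
                    SPRoleDefinition roleDef = new SPRoleDefinition();
                    roleDef.Name = "View and Open Only"; // placeholder name
                    roleDef.Description = "Can view and open items, nothing more.";

                    // Combine enumeration values from the list above with bitwise OR.
                    roleDef.BasePermissions = SPBasePermissions.ViewListItems
                                            | SPBasePermissions.OpenItems
                                            | SPBasePermissions.ViewPages
                                            | SPBasePermissions.Open;

                    web.RoleDefinitions.Add(roleDef);
                }
            }
        }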

    Read the article

  • Do You Want "Normal?" Good luck!

    - by steve.diamond
    Much has been written about "The New Normal." One thing is for sure: whatever THAT is, economically speaking we won't be experiencing it anytime soon. Sure, we're well beyond the "no floor" perception of 18 months ago, which is certainly comforting, but ask any senior executive and they'll tell you of the constant rigor necessary to continually adapt to an ever-changing macro environment. This brings me to a suggestion that you tune in to a Deloitte webinar titled "The New Normal: Embrace Complexity or Seek to Simplify." It features the perspectives on this very topic of Jessica Blume, a principal at Deloitte, and Kirk Mosher, VP of CRM Marketing at Oracle.

    Read the article

  • Total Cloud Control keeps getting better! Oracle Launch Webcast: Total Cloud Control for Systems

    - by Anand Akela
    Join Oracle Vice President of Systems Management Steve Wilson and a panel of Oracle executives to find out how your enterprise cloud can achieve 10x improved performance and 12x operational agility. Only Oracle Enterprise Manager Ops Center 12c allows you to:

      - Accelerate mission-critical cloud deployment
      - Unleash the power of Solaris 11, the first cloud OS
      - Simplify Oracle engineered systems management

    You'll also get a chance to have your questions answered by Oracle product experts and dive deeper into the technology by viewing our demos that trace the steps companies like yours take as they transition to a private cloud environment.

    Featured speaker: Steve Wilson, Vice President, Systems Management, Oracle, with a special announcement by John Fowler, Executive Vice President, Systems, Oracle.

    Agenda:
      - 9:00 a.m. PT: Keynote: Total Cloud Control for Systems
      - 9:45 a.m. PT: Panel Discussion with Oracle Hardware, Software, and Support Executives
      - 10:15 a.m. PT: Demo Series: A Step-by-Step Journey to Enterprise Clouds

    Stay connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Autoscaling in a modern world… Part 2

    - by Steve Loethen
    When we last left off, we had a web application spinning away in the cloud, and a local console application watching it and reacting to changes in demand. Reactions that were specified by a set of rules. Let's talk about those rules. Constraints: the first set of rules this application answered to were the constraints. Here is what they looked like:

        <constraintRules>
          <rule name="default" enabled="true" rank="1" description="The default constraint rule">
            <actions>
              <range min="1" max="4" target="AutoscalingApplicationRole"/>
            </actions>
          </rule>
        </constraintRules>

    Pretty basic. We have one role, the "AutoscalingApplicationRole", and we have decided to have it live within a range of 1 to 4. This rule does not adjust anything itself; instead, it sets limits on what other rules can do. It has a rank, so you can specify other sets of constraints, perhaps based on time or date, to allow for deviations from this set. But for now, let's keep it simple. In the real world, you would probably use the minimum to set a lower-end SLA. A common value might be 2, to prevent the reactive rules from ever taking you down to 1 role. The maximum is often used to keep a rule from driving the cost up, setting an upper limit to prevent you waking up one morning and finding a bill for hundreds of instances you didn't expect. So, here we have the range we want our application to live inside. This is good for our investigation and testing.

    Next, let's take a look at the reactive rules. These rules are what you use to react (hence "reactive rules") to changing demands on your application. The HOL has two simple rules: one that looks at a queue depth, and one that looks at a performance counter that reports CPU utilization. The XML in the rules file looks like this:

        <reactiveRules>
          <rule name="ScaleUp" rank="10" description="Scale up the web role" enabled="true">
            <when>
              <any>
                <greaterOrEqual operand="Length_05_holqueue" than="10"/>
                <greaterOrEqual operand="CPU_05_holwebrole" than="65"/>
              </any>
            </when>
            <actions>
              <scale target="AutoscalingApplicationRole" by="1"/>
            </actions>
          </rule>
          <rule name="ScaleDown" rank="10" description="Scale down the web role" enabled="true">
            <when>
              <all>
                <less operand="Length_05_holqueue" than="5"/>
                <less operand="CPU_05_holwebrole" than="40"/>
              </all>
            </when>
            <actions>
              <scale target="AutoscalingApplicationRole" by="-1"/>
            </actions>
          </rule>
        </reactiveRules>
        <operands>
          <performanceCounter alias="CPU_05_holwebrole"
                              performanceCounterName="\Processor(_Total)\% Processor Time"
                              source="AutoscalingApplicationRole"
                              timespan="00:05:00" aggregate="Average"/>
          <queueLength alias="Length_05_holqueue" queue="hol-queue"
                       timespan="00:05:00" aggregate="Average"/>
        </operands>

    These rules are currently contained in a file called rules.xml, in the root of the console application. The console app starts up, grabs the rules, and starts watching the two operands. When it detects that a rule has been satisfied, it performs the desired action (here, scale up or down by 1). But I want to host the autoscaler in the cloud. For my first trick, I will move the rules (and another file called services.xml) to Azure blob storage; a sketch of that upload follows below. Look for part 3.
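    As a preview of that move, here is a minimal sketch (mine, with the account credentials and container name as placeholders) of pushing the two files to blob storage using the Azure StorageClient library of that era:

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class RulesUploader
        {
            static void Main()
            {
                // Placeholder connection string; substitute your storage account.
                CloudStorageAccount account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=YOURACCOUNT;AccountKey=YOURKEY");

                CloudBlobClient client = account.CreateCloudBlobClient();

                // The autoscaler's configuration must point at this same container.
                CloudBlobContainer container = client.GetContainerReference("autoscaling");
                container.CreateIfNotExist();

                // Upload the rule set and the service information files.
                container.GetBlobReference("rules.xml").UploadFile("rules.xml");
                container.GetBlobReference("services.xml").UploadFile("services.xml");
            }
        }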

    Read the article

  • Oracle OpenWorld Healthcare Integration Session Highlights Challenges & Solutions

    - by Bruce Tierney
    Today's session, co-presented by Steve Schenks, Integration Architect from Ascension Health, and Oracle's Sundar Shenbagam and Suresh Sharma (apparently your initials must be SS to present during this session), offered interesting insights in many different areas, including Steve's description of the challenges with their previous environment: disparate hardware and software is an issue common across healthcare and most other industries. Larry Ellison spoke on this topic during Sunday's keynote address. In the last part of the session, Suresh is planning to go over some of the best practices and lessons learned in implementing successful healthcare applications, and will discuss the different options for modeling Sequencing (FIFO) use cases, one of the most common use cases in the provider market. The session was "Implementing Successful Healthcare Applications with Oracle SOA Suite" - Session # CON8546. For more information about this session, please contact Senior Principal Product Manager Suresh Sharma.

    Read the article

  • Awesome new feature for HCC

    - by Steve Tunstall
    I've talked about HCC (Hybrid Columnar Compression) before. This is Oracle's built-in compression feature, free of charge in 11gR2, that allows a CRAZY amount of compression on historical data inside an Oracle database. It only works if the database is being stored on a ZFSSA, Exadata or Axiom. You can read all about it in this whitepaper, which shows the huge value of HCC when used with the ZFSSA: http://www.oracle.com/technetwork/articles/servers-storage-admin/perf-hybrid-columnar-compression-1689701.html Now, even better, Oracle has announced a great new feature in Oracle 12c called "Automatic Data Optimization". This allows one to set up HCC to AUTOMATICALLY compress data AS IT AGES. So this is now ILM (Information Lifecycle Management) all built into the Oracle database. It's free, for crying out loud. It just needs to be sitting on Oracle storage, such as the ZFSSA, Exadata or Axiom. Read about ADO here: http://www.oracle.com/technetwork/database/automatic-data-optimization-wp-12c-1896120.pdf?ssSourceSiteId=ocomen

    Read the article

  • A Generic, IDisposable WCF Service Client

    - by Steve Wilkes
    WCF clients need to be cleaned up properly, but as they're usually auto-generated they don't implement IDisposable. I've been doing a fair bit of WCF work recently, so I wrote a generic WCF client wrapper which effectively gives me a disposable service client. The ServiceClientWrapper is constructed using a WebServiceConfig instance, which contains a Binding, an EndpointAddress, and whether the client should ignore SSL certificate errors - pretty useful during testing! The Binding can be created based on configuration data or entirely programmatically - that's not the client's concern. Here's the service client code:

        using System;
        using System.Net;
        using System.Net.Security;
        using System.ServiceModel;

        public class ServiceClientWrapper<TService, TChannel> : IDisposable
            where TService : ClientBase<TChannel>
            where TChannel : class
        {
            private readonly WebServiceConfig _config;
            private TService _serviceClient;

            public ServiceClientWrapper(WebServiceConfig config)
            {
                this._config = config;
            }

            public TService CreateServiceClient()
            {
                this.DisposeExistingServiceClientIfRequired();

                if (this._config.IgnoreSslErrors)
                {
                    ServicePointManager.ServerCertificateValidationCallback =
                        (obj, certificate, chain, errors) => true;
                }
                else
                {
                    ServicePointManager.ServerCertificateValidationCallback =
                        (obj, certificate, chain, errors) => errors == SslPolicyErrors.None;
                }

                this._serviceClient = (TService)Activator.CreateInstance(
                    typeof(TService),
                    this._config.Binding,
                    this._config.Endpoint);

                if (this._config.ClientCertificate != null)
                {
                    this._serviceClient.ClientCredentials.ClientCertificate.Certificate =
                        this._config.ClientCertificate;
                }

                return this._serviceClient;
            }

            public void Dispose()
            {
                this.DisposeExistingServiceClientIfRequired();
            }

            private void DisposeExistingServiceClientIfRequired()
            {
                if (this._serviceClient != null)
                {
                    try
                    {
                        if (this._serviceClient.State == CommunicationState.Faulted)
                        {
                            this._serviceClient.Abort();
                        }
                        else
                        {
                            this._serviceClient.Close();
                        }
                    }
                    catch
                    {
                        this._serviceClient.Abort();
                    }

                    this._serviceClient = null;
                }
            }
        }

    A client for a particular service can then be created something like this:

        public class ManagementServiceClientWrapper :
            ServiceClientWrapper<ManagementServiceClient, IManagementService>
        {
            public ManagementServiceClientWrapper(WebServiceConfig config)
                : base(config)
            {
            }
        }

    ...where ManagementServiceClient is the auto-generated client class and IManagementService is the auto-generated WCF channel interface - and used like this:

        using (var serviceClientWrapper = new ManagementServiceClientWrapper(config))
        {
            serviceClientWrapper.CreateServiceClient().CallService();
        }

    The underlying WCF client created by CreateServiceClient() will be disposed after the using, and hey presto - a disposable WCF service client.

    Read the article

  • Ubuntu USB boot failure

    - by Steve
    When trying to boot from a boot USB drive I got the message "vesamenu.c32: not a COM32R image." I was trying to boot a fairly new Toshiba laptop with a USB boot drive created from Ubuntu 10.04.2 LTS. I re-created the USB drive with 11.04 and it booted fine. These were both 32-bit versions, even though the laptop is 64-bit. I was trying to create a generic boot USB that would work on everything I might try it on. What is the consensus on this idea? Any solution to the above error? Thanks from a noobe.

    Read the article

  • Culture Shmulture?

    - by steve.diamond
    I've been thinking about "Customer Experience Management" lately. Here at Oracle, we arguably have the most complete suite of applications for managing the customer experience across, and in the context of, multiple channels -- from marketing to loyalty to contact center to self-service to analytics offerings, and more. And stay tuned, because in coming months let's just say we'll have even more to talk about on this front. But that said............

    Last weekend my wife and I stayed at one of the premier hotel chains on the planet. I won't name them, but we all know the short list. It could have been the St. Regis or the Ritz-Carlton or the Four Seasons or the Park Hyatt or.... This stay, at this particular hotel, was simply outstanding. Within a chain known for providing "above and beyond" levels of service, this particular hotel, under this particular manager, exceeded expectations on so many fronts.

    For example, at the spa we mentioned to the two attendants that my wife is seven months pregnant and that we had previously had a lot of trouble conceiving. We then went to our room. Ten minutes later we heard a knock at the door and received a plate of chocolate-covered strawberries with a heartfelt note and an inspiring quote, signed by the two spa attendants. The following day we arranged to have a bellhop drive us to the beach. Although they had a pre-arranged beach shuttle service with time limits, etc., he greeted us by saying, "I'm yours for the day until 4 p.m. Whatever you want to do is fine by me, as long as it's legal!" The morning that we left we arranged to have a taxi drive us to the airport -- a nearly 40-mile drive. What showed up was a private coach complete with a navy-blue-suited driver dude. And we were charged the taxi fare price. And there were many other awesome exchanges I won't mention here, although I did email the GM of this hotel two nights ago and expressed our effusive praise and gratitude.

    I'd submit that this hotel chain would have a definitive advantage using even more Oracle software to manage and optimize its customer interactions (yes, they are a customer). But WITHOUT the culture -- that management team -- and that instillation of aligned values across all employees of exemplifying 'the golden rule,' I wonder how much technology really matters in providing a distinctively positive and memorable customer experience.

    Lest you think I'm alone in these pontifications, have you read Paul Greenberg's blog lately? Have you seen one of his most recent posts? Now this SPECIFIC post is NOT about customer service per se. But it is about people. So yes, please think long and hard about the technology you seek to deploy. But never forget who will be interacting with your systems, and your customers.

    Read the article

  • C# 4 Named Parameters for Overload Resolution

    - by Steve Michelotti
    C# 4 is getting a new feature called named parameters. Although this is a stand-alone feature, it is often used in conjunction with optional parameters. Last week when I was giving a presentation on C# 4, I got a question about a scenario regarding overload resolution that I had not considered before, which yielded interesting results. Before I describe the scenario, a little background first. Named parameters is a well-documented feature that works like this: suppose you have a method defined like this:

        void DoWork(int num, string message = "Hello")
        {
            Console.WriteLine("Inside DoWork() - num: {0}, message: {1}", num, message);
        }

    This enables you to call the method with any of these:

        DoWork(21);
        DoWork(num: 21);
        DoWork(21, "abc");
        DoWork(num: 21, message: "abc");

    and the corresponding results will be:

        Inside DoWork() - num: 21, message: Hello
        Inside DoWork() - num: 21, message: Hello
        Inside DoWork() - num: 21, message: abc
        Inside DoWork() - num: 21, message: abc

    This is all pretty straightforward and well documented. What is slightly more interesting is how resolution is handled with method overloads. Suppose we had a second overload for DoWork() that looked like this:

        void DoWork(object num)
        {
            Console.WriteLine("Inside second overload: " + num);
        }

    The first rule applied for method overload resolution in this case is that it looks for the most strongly-typed match first. Hence, since the second overload has System.Object as the parameter rather than Int32, this second overload will never be called for any of the 4 method calls above. But suppose the method overload looked like this:

        void DoWork(int num)
        {
            Console.WriteLine("Inside second overload: " + num);
        }

    In this case, both overloads have the first parameter as Int32, so they both fulfill the first rule equally. In this case the overload with the optional parameters will be ignored if the parameters are not specified. Therefore, the same 4 method calls from above would result in:

        Inside second overload: 21
        Inside second overload: 21
        Inside DoWork() - num: 21, message: abc
        Inside DoWork() - num: 21, message: abc

    Even all this is pretty well documented. However, we can now consider the very interesting scenario I was presented with. The question was: what happens if you change the parameter name in one of the overloads? For example, what happens if you change the parameter *name* for the second overload like this:

        void DoWork(int num2)
        {
            Console.WriteLine("Inside second overload: " + num2);
        }

    In this case, the first 2 method calls will yield *different* results:

        DoWork(21);
        DoWork(num: 21);

    results in:

        Inside second overload: 21
        Inside DoWork() - num: 21, message: Hello

    We know the first method call will go to the second overload because of normal method overload resolution rules, which ignore the optional parameters. But for the second call, even though all the same rules apply, the compiler will allow you to specify a named parameter which, in effect, overrides the typical rules and directs the call to the first overload. Keep in mind this would only work if the method overloads had different parameter names for the same types (which in itself is weird). But it is a situation I had not considered before, and it is one in which you should be aware of the rules that the C# 4 compiler applies.

    Read the article

  • Fixing Chrome’s AJAX Request Caching Bug

    - by Steve Wilkes
    I recently had to make a set of web pages restore their state when the user arrived on them after clicking the browser's back button. The pages in question had various content loaded in response to user actions, which meant I had to manually get them back into a valid state after the page loaded. I got hold of the page's data in a JavaScript ViewModel using a jQuery ajax call, then iterated over the properties, filling in the fields as I went. I built in the ability to describe dependencies between inputs to make sure fields were filled in in the correct order and at the correct time, and that all worked nicely. To make sure the browser didn't cache the AJAX call results I used jQuery's cache: false option, and ASP.NET MVC's OutputCache attribute for good measure. That all worked perfectly… except in Chrome. Chrome insisted on retrieving the data from its cache. cache: false adds a random query string parameter to make the browser think it's a unique request - it made no difference. I made the AJAX call a POST - it made no difference. Eventually what I had to do was add a random token to the URL (not the query string) and use MVC routing to deliver the request to the correct action. The project had a single Controller for all AJAX requests, so this route:

        routes.MapRoute(
            name: "NonCachedAjaxActions",
            url: "AjaxCalls/{cacheDisablingToken}/{action}",
            defaults: new { controller = "AjaxCalls" },
            constraints: new { cacheDisablingToken = "[0-9]+" });

    ...and this amendment to the ajax call:

        function loadPageData(url) {
            // Insert a timestamp segment before the URL's action segment;
            // url.substring(indexOfFinalUrlSeparator) starts with the "/",
            // so the timestamp is preceded by a separator of its own:
            var indexOfFinalUrlSeparator = url.lastIndexOf("/");
            var uniqueUrl = url.substring(0, indexOfFinalUrlSeparator) +
                "/" + new Date().getTime() +
                url.substring(indexOfFinalUrlSeparator);

            // Call the now-unique action URL:
            $.ajax(uniqueUrl, { cache: false, success: completePageDataLoad });
        }

    ...did the trick.

    Read the article

  • Part 1 - 12c Database and WLS - Overview

    - by Steve Felts
    The download of the Oracle 12c database became available on June 25, 2013. There are some big new features in the 12c database, and WebLogic Server will take advantage of them. Immediately, we will support using the 12c database and drivers with WLS 10.3.6 and 12.1.1. When the next version of WLS ships, additional functionality will be supported (those rows in the table below with all "No" values will get a "Yes"). The following table maps the Oracle 12c Database features supported with various combinations of currently available WLS releases, 11g and 12c drivers, and 11g and 12c databases (every column assumes WebLogic Server 10.3.6/12.1.1):

    Feature | 11g drivers + 11gR2 DB | 11g drivers + 12c DB | 12c drivers + 11gR2 DB | 12c drivers + 12c DB
    JDBC replay | No | No | No | Yes (Active GridLink only in 10.3.6, add generic in 12.1.1)
    Multi Tenant Database | No | Yes (except set container) | No | Yes (except set container)
    Dynamic switching between Tenants | No | No | No | No
    Database Resident Connection Pooling (DRCP) | No | No | No | No
    Oracle Notification Service (ONS) auto configuration | No | No | No | No
    Global Data Services (GDS) | No | Yes (Active GridLink only) | No | Yes (Active GridLink only)
    JDBC 4.1 (using ojdbc7.jar files & JDK 7) | No | No | Yes | Yes

    The My Oracle Support (MOS) document covering this is "WebLogic Server 12.1.1 and 10.3.6 Support for Oracle 12c Database [ID 1564509.1]" at the link https://support.oracle.com/epmos/faces/DocumentDisplay?id=1564509.1. The following documents are also key references:

    12c Oracle Database Developer Guide: http://docs.oracle.com/cd/E16655_01/appdev.121/e17620/toc.htm
    12c Oracle Database Administrator's Guide: http://docs.oracle.com/cd/E16655_01/server.121/e17636/toc.htm

    I plan to write some related blog articles, not to duplicate existing product documentation, but to introduce the features, provide some examples, and tie together some information to make it easier to understand. How do you get started with 12c? The easiest way is to point your data source at a 12c database. The only change on the WLS side is to update the URL in your data source (assuming that you are not just upgrading your database). You can continue to use the 11.2.0.3 driver jar files that shipped with WLS 10.3.6 or 12.1.1. You shouldn't see any changes in your application. You can take advantage of enhancements on the database side that don't affect the mid-tier. On the WLS side, you can take advantage of using Global Data Services or connecting to a tenant in a multi-tenant database transparently.

    If you want to use the 12c client jar files, it's a bit of work, because they aren't shipped with WLS and you can't just drop in ojdbc6.jar as in the old days. You need to use a matched set of jar files, and they need to come before existing jar files in the CLASSPATH. The MOS article is written from the standpoint that you need to get the jar files directly: download almost 1G and install over a 600M footprint to get 15 jar files. Assuming that you have the database installed and can get access to the installation (or can ask the DBA), you need to copy the 15 jar files to each machine with a WLS installation and get them in your CLASSPATH. You can play with setting the PRE_CLASSPATH, but the more practical approach may be to just update WL_HOME/common/bin/commEnv.sh directly.

    There's a change in the transaction completion behavior (read the MOS note), so if you think you might run into that, you will want to set -Doracle.jdbc.autoCommitSpecCompliant=false. Also, if you are running with Active GridLink, you must set -Doracle.ucp.PreWLS1212Compatible=true (how's that for telling you that this is fixed in WLS 12.1.2?). Once you get the configuration out of the way, you can start using the new ojdbc7.jar in place of ojdbc6.jar to get the new JDBC 4.1 APIs. You can also start using Application Continuity. This feature is also known as JDBC Replay because, when a connection fails, you get a new one with all JDBC operations up to the failure point automatically replayed. As you might expect, there are some limitations, but it's an interesting feature. Obviously I'm going to focus on the 12c database features that we can leverage in WLS data sources. You will need to read other sources or the product documentation to get all of the new features.

    Read the article
