Search Results

Search found 2174 results on 87 pages for 'steve jones'.


  • BasePermissions enumeration options in Microsoft.SharePoint.SPRoleDefinition

    - by steve schofield
    I was trying to add a new permission level to SharePoint 2007 and needed to know all the enumeration options. Here is the list, straight from the error message: 'Specify one of the following enumeration values and try again. The possible enumeration values are "EmptyMask, ViewListItems, AddListItems, EditListItems, DeleteListItems, ApproveItems, OpenItems, ViewVersions, DeleteVersions, CancelCheckout, ManagePersonalViews, ManageLists, ViewFormPages, Open, ViewPages, AddAndCustomizePages, ApplyThemeAndBorder, ApplyStyleSheets, ViewUsageData, CreateSSCSite, ManageSubwebs, CreateGroups, ManagePermissions, BrowseDirectories, BrowseUserInfo, AddDelPrivateWebParts, UpdatePersonalWebParts, ManageWeb, UseClientIntegration, UseRemoteAPIs, ManageAlerts, CreateAlerts, EditMyUserInfo, EnumeratePermissions, FullMask".'
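    For reference, creating a custom permission level with these values through the SharePoint 2007 server object model looks roughly like this (a minimal sketch: it assumes an already-open SPWeb named "web", and the role name and permission mask are illustrative):

        using Microsoft.SharePoint;

        // Sketch: define a custom permission level on an existing SPWeb.
        // "web" is assumed to be an open SPWeb; name and mask are illustrative.
        SPRoleDefinition roleDef = new SPRoleDefinition();
        roleDef.Name = "Contribute Without Delete";
        roleDef.Description = "Add and edit list items, but never delete them.";
        roleDef.BasePermissions = SPBasePermissions.ViewListItems
            | SPBasePermissions.AddListItems
            | SPBasePermissions.EditListItems
            | SPBasePermissions.ViewPages
            | SPBasePermissions.Open;
        web.RoleDefinitions.Add(roleDef);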

    Read the article

  • Do You Want "Normal?" Good luck!

    - by steve.diamond
    Much has been written about "The New Normal." One thing is for sure: whatever THAT is, economically speaking we won't be experiencing it anytime soon. Sure, we're well beyond the "no floor" perception of 18 months ago--which is certainly comforting, but ask any senior executive and they'll tell you of the constant rigor necessary to continually adapt to an ever-changing macro environment. This brings me to a suggestion that you tune in to a Deloitte Webinar titled, "The New Normal: Embrace Complexity or Seek to Simplify." It features the perspectives on this very topic of Jessica Blume, a principal at Deloitte; and Kirk Mosher, VP of CRM Marketing at Oracle.

    Read the article

  • Autoscaling in a modern world…. Part 2

    - by Steve Loethen
    When we last left off, we had a web application spinning away in the cloud, and a local console application watching it and reacting to changes in demand - reactions that were specified by a set of rules. Let's talk about those rules.

    Constraints. The first set of rules this application answered to were the constraints. Here is what they looked like:

        <constraintRules>
          <rule name="default" enabled="true" rank="1" description="The default constraint rule">
            <actions>
              <range min="1" max="4" target="AutoscalingApplicationRole"/>
            </actions>
          </rule>
        </constraintRules>

    Pretty basic. We have one role, the "AutoscalingApplicationRole", and we have decided to have it live within a range of 1 to 4. This rule does not adjust anything, but instead sets limits on what other rules can do. It has a rank, so you can specify other sets of constraints, perhaps based on time or date, to allow for deviations from this set. But for now, let's keep it simple. In the real world, you would probably use the minimum to set a lower-end SLA. A common value might be 2, to prevent the reactive rules from ever taking you down to 1 role. The maximum is often used to keep a rule from driving the cost up, setting an upper limit to prevent you waking up one morning and finding a bill for hundreds of instances you didn't expect. So, here we have the range we want our application to live inside. This is good for our investigation and testing.

    Next, let's take a look at the reactive rules. These rules are what you use to react (hence, reactive rules) to changing demands on your application. The HOL has two simple rules: one that looks at a queue depth, and one that looks at a performance counter that reports CPU utilization. The XML in the rules file looks like this:

        <reactiveRules>
          <rule name="ScaleUp" rank="10" description="Scale Up the web role" enabled="true">
            <when>
              <any>
                <greaterOrEqual operand="Length_05_holqueue" than="10"/>
                <greaterOrEqual operand="CPU_05_holwebrole" than="65"/>
              </any>
            </when>
            <actions>
              <scale target="AutoscalingApplicationRole" by="1"/>
            </actions>
          </rule>
          <rule name="ScaleDown" rank="10" description="Scale down the web role" enabled="true">
            <when>
              <all>
                <less operand="Length_05_holqueue" than="5"/>
                <less operand="CPU_05_holwebrole" than="40"/>
              </all>
            </when>
            <actions>
              <scale target="AutoscalingApplicationRole" by="-1"/>
            </actions>
          </rule>
        </reactiveRules>
        <operands>
          <performanceCounter alias="CPU_05_holwebrole" performanceCounterName="\Processor(_Total)\% Processor Time" source="AutoscalingApplicationRole" timespan="00:05:00" aggregate="Average"/>
          <queueLength alias="Length_05_holqueue" queue="hol-queue" timespan="00:05:00" aggregate="Average"/>
        </operands>

    These rules are currently contained in a file called rules.xml that is in the root of the console application. The console app starts up, grabs the rules, and starts watching the two operands. When it detects that a rule has been satisfied, it performs the desired action (here, scale up or down by 1), as sketched below. But I want to host the autoscaler in the cloud. For my first trick, I will move the rules (and another file called services.xml) to Azure blob storage. Look for part 3.
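    Conceptually, the watcher loop in the console app boils down to something like this. To be clear, this is just a sketch of the idea, not the autoscaling block's actual API, and GetAverageQueueLength, GetAverageCpu, and ScaleRole are hypothetical helpers:

        using System;
        using System.Threading;

        // Conceptual sketch of the rule-evaluation loop; the real work is
        // done by the autoscaling block. The three helpers are hypothetical.
        while (true)
        {
            double queueLength = GetAverageQueueLength("hol-queue", TimeSpan.FromMinutes(5));
            double cpu = GetAverageCpu("AutoscalingApplicationRole", TimeSpan.FromMinutes(5));

            if (queueLength >= 10 || cpu >= 65)        // "any" of the ScaleUp operands
            {
                ScaleRole("AutoscalingApplicationRole", +1);   // clamped to the 1-4 constraint
            }
            else if (queueLength < 5 && cpu < 40)      // "all" of the ScaleDown operands
            {
                ScaleRole("AutoscalingApplicationRole", -1);
            }

            Thread.Sleep(TimeSpan.FromMinutes(1));     // then re-evaluate
        }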

    Read the article

  • Total Cloud Control keeps getting better ! Oracle Launch Webcast : Total Cloud Control for Systems

    - by Anand Akela
    Total Cloud Control Keeps Getting Better. Join Oracle Vice President of Systems Management Steve Wilson and a panel of Oracle executives to find out how your enterprise cloud can achieve 10x improved performance and 12x operational agility. Only Oracle Enterprise Manager Ops Center 12c allows you to:

        Accelerate mission-critical cloud deployment
        Unleash the power of Solaris 11, the first cloud OS
        Simplify Oracle engineered systems management

    You'll also get a chance to have your questions answered by Oracle product experts and dive deeper into the technology by viewing our demos that trace the steps companies like yours take as they transition to a private cloud environment.

    Featured Speaker: Steve Wilson, Vice President, Systems Management, Oracle
    With a special announcement by: John Fowler, Executive Vice President, Systems, Oracle

    Agenda:
        9:00 a.m. PT - Keynote: Total Cloud Control for Systems
        9:45 a.m. PT - Panel Discussion with Oracle Hardware, Software, and Support Executives
        10:15 a.m. PT - Demo Series: A Step-by-Step Journey to Enterprise Clouds

    Stay connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Oracle OpenWorld Healthcare Integration Session Highlights Challenges & Solutions

    - by Bruce Tierney
    In today's session, co-presented by Steve Schenks, Integration Architect from Ascension Health, and Oracle's Sundar Shenbagam and Suresh Sharma (apparently your initials must be SS to present during this session), there were interesting insights in many different areas, including Steve's description of the challenges with their previous environment: disparate hardware and software is an issue common across healthcare and most other industries. Larry Ellison spoke on this topic during Sunday's keynote address. In the last part of the session, Suresh is planning to go over some of the best practices and lessons learned in implementing successful healthcare applications, and will discuss the different options for modeling FIFO sequencing use cases (one of the most common use cases in the provider market). The session was "Implementing Successful Healthcare Applications with Oracle SOA Suite" – Session # CON8546. For more information about this session, please contact Senior Principal Product Manager Suresh Sharma.

    Read the article

  • Awesome new feature for HCC

    - by Steve Tunstall
    I've talked about HCC (Hybrid Columnar Compression) before. This is Oracle's built-in compression feature, free of charge in 11gR2, that allows a CRAZY amount of compression on historical data inside an Oracle database. It only works if the database is being stored on a ZFSSA, Exadata or Axiom. You can read all about it in this whitepaper, which shows the huge value of HCC when used with the ZFSSA: http://www.oracle.com/technetwork/articles/servers-storage-admin/perf-hybrid-columnar-compression-1689701.html Now, even better, Oracle has announced a great new feature in Oracle 12c called "Automatic Data Optimization" (ADO). This allows one to set up HCC to AUTOMATICALLY compress data AS IT AGES. So this is now ILM all built into the Oracle database. It's free, for crying out loud. It just needs to be sitting on Oracle storage, such as the ZFSSA, Exadata or Axiom. Read about ADO here: http://www.oracle.com/technetwork/database/automatic-data-optimization-wp-12c-1896120.pdf?ssSourceSiteId=ocomen

    Read the article

  • A Generic, IDisposable WCF Service Client

    - by Steve Wilkes
    WCF clients need to be cleaned up properly, but as they're usually auto-generated they don't implement IDisposable. I've been doing a fair bit of WCF work recently, so I wrote a generic WCF client wrapper which effectively gives me a disposable service client. The ServiceClientWrapper is constructed using a WebServiceConfig instance, which contains a Binding, an EndpointAddress, and whether the client should ignore SSL certificate errors - pretty useful during testing! The Binding can be created based on configuration data or entirely programmatically - that's not the client's concern. Here's the service client code:

        using System;
        using System.Net;
        using System.Net.Security;
        using System.ServiceModel;

        public class ServiceClientWrapper<TService, TChannel> : IDisposable
            where TService : ClientBase<TChannel>
            where TChannel : class
        {
            private readonly WebServiceConfig _config;
            private TService _serviceClient;

            public ServiceClientWrapper(WebServiceConfig config)
            {
                this._config = config;
            }

            public TService CreateServiceClient()
            {
                this.DisposeExistingServiceClientIfRequired();

                if (this._config.IgnoreSslErrors)
                {
                    ServicePointManager.ServerCertificateValidationCallback =
                        (obj, certificate, chain, errors) => true;
                }
                else
                {
                    ServicePointManager.ServerCertificateValidationCallback =
                        (obj, certificate, chain, errors) => errors == SslPolicyErrors.None;
                }

                this._serviceClient = (TService)Activator.CreateInstance(
                    typeof(TService),
                    this._config.Binding,
                    this._config.Endpoint);

                if (this._config.ClientCertificate != null)
                {
                    this._serviceClient.ClientCredentials.ClientCertificate.Certificate =
                        this._config.ClientCertificate;
                }

                return this._serviceClient;
            }

            public void Dispose()
            {
                this.DisposeExistingServiceClientIfRequired();
            }

            private void DisposeExistingServiceClientIfRequired()
            {
                if (this._serviceClient != null)
                {
                    try
                    {
                        if (this._serviceClient.State == CommunicationState.Faulted)
                        {
                            this._serviceClient.Abort();
                        }
                        else
                        {
                            this._serviceClient.Close();
                        }
                    }
                    catch
                    {
                        this._serviceClient.Abort();
                    }

                    this._serviceClient = null;
                }
            }
        }

    A client for a particular service can then be created something like this:

        public class ManagementServiceClientWrapper :
            ServiceClientWrapper<ManagementServiceClient, IManagementService>
        {
            public ManagementServiceClientWrapper(WebServiceConfig config)
                : base(config)
            {
            }
        }

    ...where ManagementServiceClient is the auto-generated client class, and IManagementService is the auto-generated WCF channel class - and used like this:

        using (var serviceClientWrapper = new ManagementServiceClientWrapper(config))
        {
            serviceClientWrapper.CreateServiceClient().CallService();
        }

    The underlying WCF client created by CreateServiceClient() will be disposed after the using block, and hey presto - a disposable WCF service client.

    Read the article

  • Ubuntu USB boot failure

    - by Steve
    When trying to boot from a boot USB drive I got the message, "Vesamenu.c32: Not a com32r image." I was trying to boot a fairly new Toshiba laptop with a USB boot drive created with Ubuntu 10.04.2 LTS. I re-created the USB drive with 11.04 and it booted fine. These were both 32-bit versions even though the laptop is 64-bit. I was trying to create a generic boot USB that would work on everything I might try it on. What is the consensus on this idea? Any solution to the above error? Thanks from a noobe.

    Read the article

  • Culture Shmulture?

    - by steve.diamond
    I've been thinking about "Customer Experience Management" lately. Here at Oracle, we arguably have the most complete suite of applications for managing the customer experience across and in the context of multiple channels -- from marketing to loyalty to contact center to self-service to analytics offerings, and more. And stay tuned, because in coming months let's just say we'll have even more to talk about on this front. But that said... Last weekend my wife and I stayed at one of the premier hotel chains on the planet. I won't name them, but we all know the short list. It could have been the St. Regis or the Ritz Carlton or Four Seasons or Park Hyatt or... This stay, at this particular hotel, was simply outstanding. Within a chain known for providing "above and beyond" levels of service, this particular hotel, under this particular manager, exceeded expectations on so many fronts. For example, at the spa we mentioned to the two attendants that my wife is seven months pregnant and that we had previously had a lot of trouble conceiving. We then went to our room. Ten minutes later we heard a knock at the door and received a plate of chocolate covered strawberries with a heartfelt note and an inspiring quote, signed by the two spa attendants. The following day we arranged to have a bellhop drive us to the beach. Although they had a pre-arranged beach shuttle service with time limits, etc., he greeted us by saying, "I'm yours for the day until 4 p.m. Whatever you want to do is fine by me, as long as it's legal!" The morning that we left we arranged to have a taxi drive us to the airport--a nearly 40 mile drive. What showed up was a private coach complete with navy blue suited driver dude. And we were charged the taxi fare price. And there were many other awesome exchanges I won't mention here, although I did email the GM of this hotel two nights ago and expressed our effusive praise and gratitude. I'd submit that this hotel chain would have a definitive advantage using even more Oracle software to manage and optimize its customer interactions (yes, they are a customer). But WITHOUT the culture--that management team--and that instillation of aligned values across all employees of exemplifying 'the golden rule,' I wonder how much technology really matters in providing a distinctively positive and memorable customer experience. Lest you think I'm alone in these pontifications, have you read Paul Greenberg's blog lately? Have you seen one of his most recent posts? Now this SPECIFIC post is NOT about customer service per se. But it is about people. So yes, please think long and hard about the technology you seek to deploy. But never forget who will be interacting with your systems, and your customers.

    Read the article

  • C# 4 Named Parameters for Overload Resolution

    - by Steve Michelotti
    C# 4 is getting a new feature called named parameters. Although this is a stand-alone feature, it is often used in conjunction with optional parameters. Last week when I was giving a presentation on C# 4, I got a question about a scenario regarding overload resolution that I had not considered before, which yielded interesting results. Before I describe the scenario, a little background first. Named parameters is a well-documented feature that works like this: suppose you have a method defined like this:

        void DoWork(int num, string message = "Hello")
        {
            Console.WriteLine("Inside DoWork() - num: {0}, message: {1}", num, message);
        }

    This enables you to call the method with any of these:

        DoWork(21);
        DoWork(num: 21);
        DoWork(21, "abc");
        DoWork(num: 21, message: "abc");

    and the corresponding results will be:

        Inside DoWork() - num: 21, message: Hello
        Inside DoWork() - num: 21, message: Hello
        Inside DoWork() - num: 21, message: abc
        Inside DoWork() - num: 21, message: abc

    This is all pretty straightforward and well-documented. What is slightly more interesting is how resolution is handled with method overloads. Suppose we had a second overload for DoWork() that looked like this:

        void DoWork(object num)
        {
            Console.WriteLine("Inside second overload: " + num);
        }

    The first rule applied for method overload resolution in this case is that it looks for the most strongly-typed match first. Hence, since the second overload has System.Object as the parameter rather than Int32, this second overload will never be called for any of the 4 method calls above. But suppose the method overload looked like this:

        void DoWork(int num)
        {
            Console.WriteLine("Inside second overload: " + num);
        }

    In this case, both overloads have the first parameter as Int32, so they both fulfill the first rule equally. In this case the overload with the optional parameters will be ignored if the parameters are not specified. Therefore, the same 4 method calls from above would result in:

        Inside second overload: 21
        Inside second overload: 21
        Inside DoWork() - num: 21, message: abc
        Inside DoWork() - num: 21, message: abc

    Even all this is pretty well documented. However, we can now consider the very interesting scenario I was presented with. The question was what happens if you change the parameter name in one of the overloads. For example, what happens if you change the parameter *name* for the second overload like this:

        void DoWork(int num2)
        {
            Console.WriteLine("Inside second overload: " + num2);
        }

    In this case, the first 2 method calls will yield *different* results:

        DoWork(21);
        DoWork(num: 21);

    results in:

        Inside second overload: 21
        Inside DoWork() - num: 21, message: Hello

    We know the first method call will go to the second overload because of normal method overload resolution rules which ignore the optional parameters. But for the second call, even though all the same rules apply, the compiler will allow you to specify a named parameter which, in effect, overrides the typical rules and directs the call to the first overload. Keep in mind this would only work if the method overloads had different parameter names for the same types (which in itself is weird). But it is a situation I had not considered before, and it is one in which you should be aware of the rules that the C# 4 compiler applies.

    Read the article

  • Fixing Chrome’s AJAX Request Caching Bug

    - by Steve Wilkes
    I recently had to make a set of web pages restore their state when the user arrived on them after clicking the browser's back button. The pages in question had various content loaded in response to user actions, which meant I had to manually get them back into a valid state after the page loaded. I got hold of the page's data in a JavaScript ViewModel using a jQuery ajax call, then iterated over the properties, filling in the fields as I went. I built in the ability to describe dependencies between inputs to make sure fields were filled in in the correct order and at the correct time, and that all worked nicely. To make sure the browser didn't cache the AJAX call results I used jQuery's cache: false option, and ASP.NET MVC's OutputCache attribute for good measure. That all worked perfectly... except in Chrome. Chrome insisted on retrieving the data from its cache. cache: false adds a random query string parameter to make the browser think it's a unique request - it made no difference. I made the AJAX call a POST - it made no difference. Eventually what I had to do was add a random token to the URL (not the query string) and use MVC routing to deliver the request to the correct action. The project had a single Controller for all AJAX requests, so this route:

        routes.MapRoute(
            name: "NonCachedAjaxActions",
            url: "AjaxCalls/{cacheDisablingToken}/{action}",
            defaults: new { controller = "AjaxCalls" },
            constraints: new { cacheDisablingToken = "[0-9]+" });

    ...and this amendment to the ajax call:

        function loadPageData(url) {
            // Insert a timestamp before the URL's action segment; note the slash
            // placement, so /AjaxCalls/action becomes /AjaxCalls/{timestamp}/action:
            var indexOfFinalUrlSeparator = url.lastIndexOf("/");
            var uniqueUrl = url.substring(0, indexOfFinalUrlSeparator) +
                "/" + new Date().getTime() +
                url.substring(indexOfFinalUrlSeparator);

            // Call the now-unique action URL:
            $.ajax(uniqueUrl, { cache: false, success: completePageDataLoad });
        }

    ...did the trick.

    Read the article

  • Part 1 - 12c Database and WLS - Overview

    - by Steve Felts
    The download of the Oracle 12c database became available on June 25, 2013. There are some big new features in the 12c database, and WebLogic Server will take advantage of them. Immediately, we will support using the 12c database and drivers with WLS 10.3.6 and 12.1.1. When the next version of WLS ships, additional functionality will be supported (those rows in the table below with all "No" values will get a "Yes"). The following table maps the Oracle 12c Database features supported with various combinations of currently available WLS releases, 11g and 12c drivers, and 11g and 12c databases. All four combinations use WebLogic Server 10.3.6/12.1.1:

        (1) 11g drivers, 11gR2 DB
        (2) 11g drivers, 12c DB
        (3) 12c drivers, 11gR2 DB
        (4) 12c drivers, 12c DB

        Feature: (1) / (2) / (3) / (4)
        - JDBC replay: No / No / No / Yes (Active GridLink only in 10.3.6; generic data sources added in 12.1.1)
        - Multi Tenant Database: No / Yes (except set container) / No / Yes (except set container)
        - Dynamic switching between Tenants: No / No / No / No
        - Database Resident Connection Pooling (DRCP): No / No / No / No
        - Oracle Notification Service (ONS) auto configuration: No / No / No / No
        - Global Data Services (GDS): No / Yes (Active GridLink only) / No / Yes (Active GridLink only)
        - JDBC 4.1 (using ojdbc7.jar files & JDK 7): No / No / Yes / Yes

    The My Oracle Support (MOS) document covering this is "WebLogic Server 12.1.1 and 10.3.6 Support for Oracle 12c Database [ID 1564509.1]" at the link https://support.oracle.com/epmos/faces/DocumentDisplay?id=1564509.1. The following documents are also key references:

        12c Oracle Database Developer Guide: http://docs.oracle.com/cd/E16655_01/appdev.121/e17620/toc.htm
        12c Oracle Database Administrator's Guide: http://docs.oracle.com/cd/E16655_01/server.121/e17636/toc.htm

    I plan to write some related blog articles, not to duplicate existing product documentation, but to introduce the features, provide some examples, and tie together some information to make it easier to understand. How do you get started with 12c? The easiest way is to point your data source at a 12c database. The only change on the WLS side is to update the URL in your data source (assuming that you are not just upgrading your database). You can continue to use the 11.2.0.3 driver jar files that shipped with WLS 10.3.6 or 12.1.1. You shouldn't see any changes in your application. You can take advantage of enhancements on the database side that don't affect the mid-tier. On the WLS side, you can take advantage of using Global Data Services or connecting to a tenant in a multi-tenant database transparently.

    If you want to use the 12c client jar files, it's a bit of work because they aren't shipped with WLS and you can't just drop in ojdbc6.jar as in the old days. You need to use a matched set of jar files, and they need to come before existing jar files in the CLASSPATH. The MOS article is written from the standpoint that you need to get the jar files directly - download almost 1G and install over 600M footprint to get 15 jar files. Assuming that you have the database installed and you can get access to the installation (or ask the DBA), you need to copy the 15 jar files to each machine with a WLS installation and get them in your CLASSPATH. You can play with setting the PRE_CLASSPATH, but the more practical approach may be to just update WL_HOME/common/bin/commEnv.sh directly.

    There's a change in the transaction completion behavior (read the MOS article), so if you think you might run into that, you will want to set -Doracle.jdbc.autoCommitSpecCompliant=false. Also, if you are running with Active GridLink, you must set -Doracle.ucp.PreWLS1212Compatible=true (how's that for telling you that this is fixed in WLS 12.1.2). Once you get the configuration out of the way, you can start using the new ojdbc7.jar in place of the ojdbc6.jar to get the new JDBC 4.1 APIs. You can also start using Application Continuity. This feature is also known as JDBC Replay because when a connection fails you get a new one with all JDBC operations up to the failure point automatically replayed. As you might expect, there are some limitations, but it's an interesting feature. Obviously I'm going to focus on the 12c database features that we can leverage in WLS data sources. You will need to read other sources or the product documentation to get all of the new features.

    Read the article

  • How can I start the right way from the beginning in learning web development?

    - by Steve
    Well, I know I have to learn many things, such as HTML, JavaScript, CSS, PHP, ASP.NET, SQL, etc. What I don't know is the right order. For example, if I start learning ASP.NET before I learn HTML and CSS, will I find myself in the near future wishing I had started with something else first, so that I don't need to come back and learn it later? You guys who have experience in web development know, now that you've reached where you are, what the right start would have been. So, can you tell me how?

    Read the article

  • Andrejus Keeps Cranking Out the Tips

    - by steve.muench
    I've been working on designing and implementing a new set of features for an upcoming Oracle ADF release for the past several months (and several more to come) and this has put a real limit on the time I have available for helping external customers and internal teams, as well as the time I have for blogging. Thankfully, the ADF community at large has many other voices they can listen to for tips and techniques. One I wanted to highlight specifically today is Andrejus Baranovskis' blog, which is steadily building up a high-quality archive of downloadable ADF examples based on his real-world consulting experiences, each with a corresponding explanatory article. If you're not already subscribed to Andrejus' blog, I highly recommend it. Keep up the great work, Andrejus!

    Read the article

  • MVC Portable Areas – Deploying Static Files

    - by Steve Michelotti
    This is the second post in a series related to build and deployment considerations as I've been exploring MVC Portable Areas:

        #1 – Using Web Application Project to build portable areas
        #2 – Conventions for deploying portable area static files
        #3 – Portable area static files as embedded resources

    As I've been digging more into portable areas, one of the things I've liked best is the deployment story, which enables my *.aspx and *.ascx pages to be compiled into the assembly as embedded resources rather than having to maintain all those files separately. In traditional web forms, that was always the thing that prevented developers from utilizing *.ascx user controls across projects (see this post for using portable areas in web forms). However, though the aspx pages are embedded, the supporting static files (e.g., images, css, javascript) are *not*. Most of the demos available online today tend to brush over this issue and focus solely on the aspx side of things. But to create truly robust portable areas, it's important to have a good story for these supporting files as well. I've been working with two different approaches so far (of course I'd really like to hear if other people are using alternatives).

    Scenario

    For the approaches below, the scenario really isn't that important. It could be something as trivial as this partial view:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <img src="<%: Url.Content("~/images/arrow.gif") %>" /> Hello World!

    The point is that there needs to be careful consideration for *any* scenario that links to an external file such as an image, *.css, *.js, etc. In the example shown above, it uses the Url.Content() method to convert to a relative path. But this method won't necessarily work depending on how you deploy your portable area. One approach to address this issue is to build your portable area project with MSDeploy/WebDeploy so that it is packaged properly before incorporating it into the host application. All of the *.cs files are removed and the project is ready for xcopy deployment - however, I do *not* need the "Views" folder since all of the markup has been compiled into the assembly as embedded resources. Now in the host application we create a folder called "Modules" and deploy any portable areas as sub-folders under that. At this point we can add a simple assembly reference to the Widget1.dll sitting in the Modules\Widget1\bin folder, and render the portable area in a view like any other portable area. However, the view then renders a broken image: it couldn't find arrow.gif because it looked for /images/arrow.gif when the file was *actually* located at /Modules/Widget1/images/arrow.gif. One solution is to make the physical location of the portable area configurable from the perspective of the host, like this:

        <appSettings>
            <add key="Widget1" value="Modules\Widget1"/>
        </appSettings>

    Using the <appSettings> section is a little cheesy, but it could be better formalized into its own section. In fact, if you were willing to rely on conventions (e.g., "Modules\{areaName}"), then the config could be eliminated completely.

    With this config in place, we can create our own HTML helper method called Url.AreaContent() that "wraps" the OOTB Url.Content() method while simply pre-pending the area location path:

        public static string AreaContent(this UrlHelper urlHelper, string contentPath)
        {
            var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];
            var areaPath = (string)ConfigurationManager.AppSettings[areaName];

            return urlHelper.Content("~/" + areaPath + "/" + contentPath);
        }

    With these two items in place, we just change our Url.Content() call to Url.AreaContent() like this:

        <img src="<%: Url.AreaContent("/images/arrow.gif") %>" /> Hello World!

    and the arrow.gif now renders correctly. Since we're just using our own Url.AreaContent() rather than the built-in Url.Content(), this solution works for images, *.css, *.js, or any externally referenced files. Additionally, any images referenced inside a css file will work provided it's a relative reference and not an absolute reference. An alternative to this approach is to build the static files into the assembly as embedded resources themselves. I'll explore this in another post (linked at the top).

    Read the article

  • how to set different wallpapers in ubuntu workspaces

    - by Steve
    I'm having an issue trying to customize Ubuntu workspaces in the GNOME environment. Assuming the default four workspaces (aka desktops), how can one have a different wallpaper for each one? When I go to an individual workspace to set its wallpaper, all of the workspaces use it. So if I set:

        wallpaper B on workspace 2
        wallpaper C on workspace 3

    what will happen is that all the workspaces will default to the last wallpaper set, no matter which workspace it was set in. What's even weirder is that the very first wallpaper set upon using it for the very first time is what shows up when I call up the Workspaces tool. Even though once I settle upon a workspace, no matter which one, the original wallpaper disappears and the last wallpaper set is the one that always shows up.

    Read the article

  • Traffic estimation for a multiplayer flash game

    - by Steve Addington
    Hey, I want to know if my rough traffic estimations are right. It would be for a pretty simple realtime Flash game in the style of haxball (but not as a soccer game). Here's a video of it: http://www.youtube.com/watch?v=z_xBdFg1RcI So here comes my estimation; I don't know if it's realistic, and I hope someone can help me. Consider the packet attached as a typical one, sent every 200 ms; it's 148 bytes + 64 bytes of header, which makes around a 200-byte packet. The server will receive 200 bytes x 6 players x 5 times a sec = 6000 bytes/s = 5.86 KB/s = 46.9 kbit/s, plus it has to send all that back to the players, so at this point we are at 94 kbit/s. The server receives all the information, performs the definitive calculation, and sends the new positions to all players in a bigger packet of around 900 bytes, which has to be delivered to the other 6. That makes 900 bytes x 6 players x 5 times a sec = 27000 bytes/s = 26 KB/s = 211 kbit/s. Overall that would be 26 KB per second, which is like 130 MB of traffic per hour for a 6-player room. But somehow I think the numbers are too high? That would be really much traffic for such a simple game. Did I calculate something wrong?
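    For what it's worth, the per-direction arithmetic in the question checks out; here is a quick sanity check (a throwaway C# sketch using the question's packet sizes) that also sums both directions, landing around 32 KB/s, or roughly 113 MB per hour:

        using System;

        // Back-of-the-envelope check of the traffic estimate from the question.
        class TrafficEstimate
        {
            static void Main()
            {
                const int players = 6;
                const int packetsPerSecond = 5;     // one packet every 200 ms
                const int clientPacketBytes = 200;  // ~148 payload + 64 header, rounded
                const int serverPacketBytes = 900;  // consolidated server update

                int inbound = clientPacketBytes * players * packetsPerSecond;   // 6,000 B/s
                int outbound = serverPacketBytes * players * packetsPerSecond;  // 27,000 B/s

                double totalKBps = (inbound + outbound) / 1024.0;
                double mbPerHour = (inbound + outbound) * 3600.0 / (1024.0 * 1024.0);

                Console.WriteLine("in: {0} B/s, out: {1} B/s", inbound, outbound);
                Console.WriteLine("total: {0:F1} KB/s = {1:F0} MB/hour per 6-player room",
                    totalKBps, mbPerHour);
            }
        }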

    Read the article

  • Kendo UI Mobile with Knockout for Master-Detail Views

    - by Steve Michelotti
    Lately I've been playing with Kendo UI Mobile to build iPhone apps. It's similar to jQuery Mobile in that they are both HTML5/JavaScript based frameworks for building mobile apps. The primary thing that drew me to investigate Kendo UI was its innate ability to adaptively render a native looking app based on detecting the device it's currently running on. In other words, it will render to look like a native iPhone app if it's running on an iPhone, and it will render to look like a native Droid app if it's running on a Droid. This is in contrast to jQuery Mobile, which looks the same on all devices and, therefore, can never quite look native for whatever device it's running on. My first impressions of Kendo UI were great. Using HTML5 data-* attributes to define "roles" for UI elements is easy, the rendering looked great, and the basic navigation was simple and intuitive. However, I ran into major confusion when trying to figure out how to "correctly" build master-detail views. Since I was already very familiar with KnockoutJS, I set out to use that framework in conjunction with Kendo UI Mobile to build the following simple scenario: a simple "Task Manager" application where the first screen just shows a list of tasks, and clicking on a specific task navigates to a detail screen that shows all details of the task that was selected.

    Basic navigation between views in Kendo UI is simple. The href of an <a> tag just needs to specify a hash tag followed by the ID of the view to navigate to, as shown in this jsFiddle (notice the href of the <a> tag matches the id of the second view). That is all well and good, but the problem I encountered was: how to pass data between the views? Specifically, I need the detail view to display all the details of whichever task was selected. If I was doing this with my typical technique with KnockoutJS, I know exactly what I would do. First I would create a view model that had my collection of tasks and a property for the currently selected task, like this:

        function ViewModel() {
            var self = this;
            self.tasks = ko.observableArray(data);
            self.selectedTask = ko.observable(null);
        }

    Then I would bind my list of tasks to the unordered list - I would attach a "click" handler to each item (each <li> in the unordered list) so that it would set the "selectedTask" on the view model. The problem I found is that this approach simply wouldn't work for Kendo UI Mobile. It completely ignored the click handlers that I was trying to attach to the <a> tags - it just wanted to look at the href (at least that's what I observed). But if I can't intercept this, then *how* can I pass data or any context to the next view? The only thing I was able to find in the Kendo documentation is that you can pass query string arguments on the view name you're specifying in the href. This enabled me to do the following:

        1. Specify the task ID in each href - something like this: <a href="#taskDetail?id=3"></a>
        2. Attach an "init method" (via the "data-show" attribute on the details view) that runs whenever the view is activated
        3. Inside this "init method", grab the task ID passed from the query string to look up the item from my view model's list of tasks in order to set the selected task

    I was able to get all that working with about 20 lines of JavaScript, as shown in this jsFiddle. If you click on the Results tab, you can navigate between views and see that the detail screen is correctly binding to the selected item.

    With all that done, I was very happy to get it working with the behavior I wanted. However, I have no idea if that is the "correct" way to do it or if there is a "better" way to do it. I know that Kendo UI comes with its own data binding framework, but my preference is to be able to use (the well-documented) KnockoutJS since I'm already familiar with that framework rather than having to learn yet another new framework. While I think my solution above is probably "acceptable", there are still a couple of things that bug me about it. First, it seems odd that I have to loop through my items to *find* my selected item based on the ID that was passed on the query string - normally with Knockout I can just refer directly to my selected item from where it was used. Second, it didn't feel exactly right that I had to rely on the "data-show" method of the details view to set my context - normally with Knockout, I could just attach a click handler to the <a> tag that was actually clicked by the user in order to set the "selected item." I'm not sure if I'm being too picky. I know there are many people that have *way* more expertise in Kendo UI compared to me - I'd be curious to know if there are better ways to achieve the same results.

    Read the article

  • Teach Your Kid to Code (&hellip;and Vote early!)

    - by Steve Michelotti
    Next Tuesday I will be at the CMAP main meeting presenting Teach Your Kid to Code. Next Tuesday is of course Election Day so you have to make sure you vote early in order to get over to CMAP for the 7:00PM presentation. I will be co-presenting this talk with my 5th grade son. Here is the abstract: Have you ever wanted a way to teach your kid to code? For that matter, have you ever wanted to simply be able to explain to your kid what you do for a living? Putting things in a context that a kid can understand is not as easy as it sounds. If you are someone curious about these concepts, this is a “can’t miss” presentation that will be co-presented by Justin Michelotti (5th grader) and his father. Bring your kid with you to CMAP for this fun and educational session. We will show tools you may not have been aware of like SmallBasic and Kodu – we’ll even throw in a little Visual Studio and Windows 8! Concepts such as variables, conditionals, loops, and functions will be covered while we introduce object oriented concepts without any of the confusing words. Kids are not required for entry! I promise this will be an entertaining presentation! We hope to see you (and your kids) there. Click here for details.
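    To give a flavor of the level the talk pitches at, here is the kind of first program we have in mind (an illustrative snippet of my own, not taken from the session materials), touching all four concepts:

        using System;

        // Variables, conditionals, loops, and functions -- all in one tiny program.
        class CountingGame
        {
            static void Main()
            {
                int favoriteNumber = 7;                 // a variable

                for (int i = 1; i <= 10; i++)           // a loop
                {
                    if (i == favoriteNumber)            // a conditional
                    {
                        Cheer(i);                       // calling a function
                    }
                    else
                    {
                        Console.WriteLine(i);
                    }
                }
            }

            static void Cheer(int number)               // a function
            {
                Console.WriteLine(number + " is my favorite number!");
            }
        }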

    Read the article

  • Oracle ERP Cloud Solution Defines Revenue Recognition Software Market

    - by Steve Dalton
    Revenue is a fundamental yardstick of a company's performance, and one of the most important metrics for investors in the capital markets. So it's no surprise that the accounting standard boards have devoted significant resources to this topic, with a key goal of ensuring that companies use a consistent method of recognizing revenue. Due to the myriad of revenue-generating transactions, and the divergent ways organizations recognize revenue today, the IASB and FASB have been working for 12 years on a common set of accounting standards that apply to all industries in virtually all countries. Through their joint efforts, on May 28, 2014 the FASB and IASB released the IFRS 15 / ASU 2014-9 (Revenue from Contracts with Customers) converged accounting standard. This standard applies to revenue in all public companies, but heavily impacts organizations in any industry that might have complex sales contracts with multiple distinct deliverables (obligations). For example, an auto dealer who bundles free service with the sale of a car can only recognize the service revenue once the owner of the car brings it in for work. Similarly, high-tech companies that bundle software licenses, consulting, and support services on a sales contract will recognize bundled service revenue once the services are delivered. Now all companies need to review their revenue for hidden bundling and implicit obligations.

    Numerous time-consuming and judgmental activities must be performed to properly recognize revenue for complex sales contracts. To illustrate: after the contract is identified, organizations must identify and examine the distinct deliverables, determine the estimated selling price (ESP) for each deliverable, then allocate the total contract price to each deliverable based on the ESPs. In terms of accounting, organizations must determine whether the goods or services have been delivered or performed to the customer's satisfaction, then either book revenue in the current period or record a liability for the obligation if revenue will be recognized in a future accounting period.

    Oracle Revenue Management Cloud was architected and developed so organizations can simplify and streamline revenue recognition. Among other capabilities, the solution uses business rules to efficiently identify and examine contracts, intelligently calculate and allocate deliverable prices based on prescribed inputs, and accurately recognize revenue for each deliverable based on customer satisfaction. "Oracle works very closely with our customers, the Big 4 accounting firms, and the accounting standard boards to deliver an adaptive, comprehensive, new generation revenue recognition solution," said Rondy Ng, Senior Vice President, Applications Development. "With the recently announced IFRS 15 / ASU 2014-9, Oracle is ready to support customer adoption of the new standard with our Revenue Management Cloud," said Rondy.

    Oracle Revenue Management Cloud, an integral part of Oracle Financials Cloud, helps organizations comply with accounting standards, provides them with confidence that reported revenue is materially accurate, and simplifies the accounting process for revenue recognition. Stay tuned to this blog for regular updates on Oracle Revenue Management Cloud. We also invite you to review our new oracle.com ERP pages @ oracle.com/erp. We will be updating these pages very soon with more information about Oracle Revenue Management Cloud.
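    To make the allocation step described above concrete, here is an illustrative sketch of relative-ESP allocation across a contract's deliverables. The numbers and code are mine, for illustration only, and are not Oracle Revenue Management Cloud's implementation:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustrative relative-selling-price allocation: each deliverable
        // receives a share of the contract price proportional to its ESP.
        class RevenueAllocation
        {
            static void Main()
            {
                decimal contractPrice = 1200m;  // hypothetical bundled contract

                var esps = new Dictionary<string, decimal>  // hypothetical ESPs
                {
                    { "Software license", 1000m },
                    { "Consulting",        300m },
                    { "Support (1 year)",  200m },
                };

                decimal totalEsp = esps.Values.Sum();  // 1,500

                foreach (var deliverable in esps)
                {
                    decimal allocated = contractPrice * deliverable.Value / totalEsp;
                    Console.WriteLine("{0}: {1:F2}", deliverable.Key, allocated);
                }
                // License: 800.00, Consulting: 240.00, Support: 160.00 --
                // each amount is then recognized as that obligation is satisfied.
            }
        }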

    Read the article
