Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Using Asset Groups

    - by Owen Allen
    I got a question about putting assets in groups: "I'm planning on installing some agents manually on existing systems, and I want to have them put in a specific asset group once they're discovered. I don't see any way to tell the install script to put the asset in a group. How can I add the assets to a group, either through the UI or the CLI?" There are a few ways. In the CLI, you can use groups mode, and use this command to add an asset to a group: attach -n| --gear <asset name> -g| --group <group> You can also use -U| --uuid <UUID> to specify the asset if you have multiple assets with the same name. In the UI, you have a couple of options. You can select an asset and click Add Asset to Group to add it to a group you select. Alternatively, if you're trying to make a group for assets with a specific characteristic, you can specify rules that will automatically add assets to a group based on that characteristic.
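    For illustration, two hypothetical invocations in groups mode, based only on the flags quoted above (the asset and group names are made up):

        attach --gear webserver01 --group Production-Web
        attach --uuid <UUID> --group Production-Web

    The first form identifies the asset by name; the second uses the UUID when several assets share the same name.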

    Read the article

  • A Case for Oracle Fusion Middleware by Lucas Jellema

    - by JuergenKress
    An in-depth look at the interaction of people, processes, and technologies in the transition to a service-oriented architecture. Author's Note This article presents a profile of a fictitious organization, NOPERU. The story of NOPERU as told in this article is actually a collage of the events at some dozen organizations that I have been involved with over the past few years. None of these organizations sport all the characteristics of NOPERU - but all of them have gone through or are going through a similar transition as described here and all aspects of this article were taken from real life at one or usually many of these organizations. Background NOPERU (National Organization for Permits for Emissions and Resource Usage) is a public organization that continues to transform in terms of its business, organization and technology. Changing business requirements; new interaction channels; and increasing demands for more flexibility, faster throughput and lower costs drive these transformations, while technological evolution and new architecture patterns enable the change. NOPERU chose Oracle Fusion Middleware as the technology platform to implement the new architecture and required applications. This article takes a close look at NOPERU's journey from its origins in the early 1990s as a largely paper-based entity with regional databases and client-server Oracle Forms applications. Its upcoming business objectives are introduced: what is required of the organization and what the higher goals behind these requirements are. The architecture roadmap is described at a high level as well as drilled down to a service-oriented design. Based on the architecture roadmap and the business requirements, NOPERU went through a technology selection to determine the technology stack with which the future would be realized in terms of IT. The article discusses that selection and details the projects subsequently planned (and executed to date). The new architecture and technology as well as the introduction of an Agile development method have had substantial consequences for the IT organization, the processes and individual staff members. The approach NOPERU has adopted with regard to the people and the organization is portrayed. Finally, the article discusses many conclusions that NOPERU has drawn that may benefit itself and other organizations. Introducing NOPERU NOPERU is a national organization charged with issuing permits for excessive emissions (e.g., carbon dioxide) and disproportionate usage of such resources as energy or water. Anyone - whether a commercial enterprise, government agency or private person - who emits or consumes more than what is considered "fair usage" requires such a permit. When someone builds an outdoor heated swimming pool, for example, or open-air terrace heating, such a permit needs to be obtained. When a company installs new, energy-intensive equipment, such as water boilers or deep freezers, it too needs to get a NOPERU permit. Government-sponsored projects at every level that involve consumption of large quantities of fresh water or production of high volumes of emissions must turn to NOPERU for a permit. Without the required license, any interested party can get a court to immediately put a stop to the disputed activity. Read the full article here. 
SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Oracle VM Templates Enable Oracle Real Application Clusters Deployment Up to 10 Times Faster than VMware vSphere

    - by Angela Poth
    Oracle VM - Quantifying the Value of Application-Driven Deployment. The recently published Evaluator Group report, "Oracle VM - Quantifying the Value of Application-Driven Virtualization," validates Oracle's ease of use and time savings over VMware vSphere 5. The report found that users can deploy Oracle Real Application Clusters up to 10 times faster with Oracle VM templates than with VMware vSphere, and Oracle E-Business Suite nearly 7 times faster. "Oracle VM's application-driven architecture was built to enable rapid deployment of enterprise applications, with simplified integrated lifecycle management, to fully support Oracle applications. Our testing has demonstrated 90 percent improvement deploying Oracle Real Application Clusters 11g R2 using Oracle VM Templates and a similar improvement of 85 percent for the Oracle E-Business Suite 12.1.1. This clearly shows that Templates can provide real value to customers and should become an essential component for enabling rapid enterprise application deployment and management," said Leah Schoeb, Senior Partner, Evaluator Group. Read the press release and get a copy of the Evaluator Group report: Oracle VM - Quantifying the Value of Application-Driven Virtualization

    Read the article

  • Webinar, June 27: Application Intelligence and Connected Devices

    - by terrencebarr
    Oracle and Beecham have recently conducted a market survey on the use of connected devices for M2M & Internet of Things (IoT) applications and new trends. On June 27, 9 am ET, the first session in this webinar series addresses intelligence in connected devices. Join Peter Utzschneider from Oracle and Robin Duke-Woolley of Beecham Research as they discuss the findings from this survey and the implications for the M2M & IoT connected devices market: What are the key business drivers of your connected devices program? To what extent do you expect the intelligence required for M2M & IoT applications to change? Would these changes occur at the network edge, at the data center, or both? What are the impacts of these changes on ISVs and device manufacturers? What are the opportunities for other M2M & IoT players? To attend, please register for free. Cheers, – Terrence

    Read the article

  • Oracle CRM is ready for the Apple iPad!

    - by divya.malik
    Here is some exciting news to report from Oracle headquarters today. For all you Apple and Oracle CRM fans, we just announced Oracle CRM support for the Apple iPad. This is great news for anyone seeking a richer CRM user experience on the Apple iPad. Oracle's Siebel CRM can support a rich graphical user interface on Apple's iPad using Oracle's recently released server-based REST (Representational State Transfer, a simple way of providing APIs over HTTP) interface to access the Siebel metadata. In the words of SVP Anthony Lye, "Siebel CRM support for the Apple iPad is yet another example of Oracle's dedication to give customers the cutting-edge CRM options on the latest devices so they can grow their business and increase productivity." For more details on this integration, please read the press release. Here is a demo created by Oracle CRM Principal Product Manager Raj Aggarwal.

    Read the article

  • Latest version available for download: APEX 4.2 is here!

    - by britta wolf
    APEX 4.2 has been available for download since October 12, 2012. Install it quickly and then try out the new features right away, for example the simple, declarative creation of APEX applications for mobile devices, or HTML5 charts. Beyond that there are further improvements as well: the Excel upload for end users has been enhanced, for instance, and you can now place 200 (instead of 100) items on a page.

    Read the article

  • REMINDER: ATG Live Webcast Nov. 15: Best Practices for Using EBS SDK for Java with Oracle ADF

    - by Bill Sawyer
    Thursday, November 15th is your chance to join Sara Woodhull and Juan Camilo Ruiz as they discuss Best Practices for Using EBS SDK for Java with Oracle ADF. You can find the complete event details at ATG Live Webcast: Best Practices for Using EBS SDK for Java with Oracle ADF.
    Date: Thursday, November 15, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Sara Woodhull, Principal Product Manager, E-Business Suite ATG; Juan Camilo Ruiz, Principal Product Manager, ADF
    Webcast Registration Link (Preregistration is optional but encouraged)
    To hear the audio feed:
    Domestic Participant Dial-In Number: 877-697-8128
    International Participant Dial-In Number: 706-634-9568
    Additional International Dial-In Numbers Link
    Dial-In Passcode: 103192
    To see the presentation, the Direct Access Web Conference details are:
    Website URL: https://ouweb.webex.com
    Meeting Number: 591862924
    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Workaround: XNA 4 importing only part of 3d model from FBX

    - by Vitus
    Recently I ran into a problem importing 3D models from FBX files: models are sometimes imported only partially. That is, when you draw a 3D model loaded from an FBX file and processed by the content pipeline, you get only part of its meshes. "Sometimes" means this happens only for some files. The results of my research are below. For example, I had a 10 MB binary FBX file with a model; when I loaded it, the resulting Model instance contained only part of the meshes. Because models from other files imported normally, I assumed this was a "bad format" file. When you add an FBX file to your XNA Content project and build it, the imported file is processed by the XNA Fbx Importer & Processor. On MSDN I found that FbxImporter is designed to work with the 2006.11 version of the FBX format, while my file was in the FBX 2012 format. OK, so I needed to convert it to the 2006 format, which can be done with Autodesk FBX Converter 2012.1. I tried converting it to other FBX versions, but without success. I also imported the FBX file into 3ds Max, where it imported correctly; exporting the model from 3ds Max produced a different FBX file, which I added to my XNA project. After that I got the full model, and it rendered well! So the internal data structure of an FBX file matters more for correct XNA import than its version. Unfortunately, Autodesk FBX is not an open file format. If you want to work with FBX directly, you need the Autodesk FBX SDK; that way you can read the content of an FBX file manually and use it any way you like. I then tried converting my source FBX file to DAE (Collada) and the resulting DAE file back to FBX using FBX Converter (FBX -> DAE -> FBX). The resulting FBX file imports normally. Conclusion: correct operation of the XNA FbxImporter does not depend on the version (2006, 2011, etc.) or form (binary, ASCII) of the FBX file; the internal FBX data structure is much more important. To make an FBX file "readable" for the XNA importer you can use a double conversion such as FBX -> Collada -> FBX, or use the FBX SDK to load the data from the FBX file manually. P.S. Autodesk FBX Converter 2012 is more than a simple converter. It provides tools such as FBX Explorer, which shows the structure of an FBX file; FBX Viewer, which renders the content of an FBX file and provides basic interaction such as moving and zooming the model; and FBX Take Manager, which allows working with embedded animations.

    Read the article

  • Rock Stars and now OPN All-Stars? Bring it.

    - by sandra.haan
    We are talking everything OPN All-Star - from home-court advantage to taking too many shots across a wide variety of industries, skill sets, focus areas, broad solution sets, applications and technologies. As a Platinum Partner, Intelenex has quality specializations ranging from ERP/EBS, CRM and AIA to Hyperion. Slam dunk! This is what gives Intelenex a well-deserved, star-studded "baller" celebrity status like the LA Lakers' very own Kobe Bryant. While Intelenex has been busy multi-specializing and taking names, Tyler Prince, Group VP, North America Sales, tells us a little bit about the value OPN's overall strategy brings to the table. This exclusive partnership allows OPN Specialized partners to provide customers with a solution that helps them adapt swiftly to new expansion conditions and changes. Namely, partners can pick an area to focus on and can leverage that focus and competency to differentiate themselves from the competition. You will be so HOT on the OPN court the Miami Heat will have nothing on you. Watch out, LeBron. Additionally, this specialization in a product or set of products is recognized by the entire Oracle sales force, which is vital to all partners, but most importantly to your end customers. You will be so stylishly famous your cheerleader squad will not be able to steal the spotlight from you. Are you really All-Star worthy this season? Jump in and join Tyler's halftime report on OPN's All-Star program in this VAR Guy FastChat video to find out. Now that's what we call some March Madness. Good selling, The OPN Communications Team

    Read the article

  • Cloud Computing = Elasticity * Availability

    - by Herve Roggero
    What is cloud computing? Is hosting the same thing as cloud computing? Are you running a cloud if you already use virtual machines? What is the difference between Infrastructure as a Service (IaaS) and a cloud provider? And the list goes on… these questions keep coming up and all try to fundamentally explain what “cloud” means relative to other concepts. At the risk of over simplification, answering these questions becomes simpler once you understand the primary foundations of cloud computing: Elasticity and Availability.   Elasticity The basic value proposition of cloud computing is to pay as you go, and to pay for what you use. This implies that an application can expand and contract on demand, across all its tiers (presentation layer, services, database, security…).  This also implies that application components can grow independently from each other. So if you need more storage for your database, you should be able to grow that tier without affecting, reconfiguring or changing the other tiers. Basically, cloud applications behave like a sponge; when you add water to a sponge, it grows in size; in the application world, the more customers you add, the more it grows. Pure IaaS providers will provide certain benefits, specifically in terms of operating costs, but an IaaS provider will not help you in making your applications elastic; neither will Virtual Machines. The smallest elasticity unit of an IaaS provider and a Virtual Machine environment is a server (physical or virtual). While adding servers in a datacenter helps in achieving scale, it is hardly enough. The application has yet to use this hardware.  If the process of adding computing resources is not transparent to the application, the application is not elastic.   As you can see from the above description, designing for the cloud is not about more servers; it is about designing an application for elasticity regardless of the underlying server farm.   Availability The fact of the matter is that making applications highly available is hard. It requires highly specialized tools and trained staff. On top of it, it's expensive. Many companies are required to run multiple data centers due to high availability requirements. In some organizations, some data centers are simply on standby, waiting to be used in a case of a failover. Other organizations are able to achieve a certain level of success with active/active data centers, in which all available data centers serve incoming user requests. While achieving high availability for services is relatively simple, establishing a highly available database farm is far more complex. In fact it is so complex that many companies establish yearly tests to validate failover procedures.   To a certain degree certain IaaS provides can assist with complex disaster recovery planning and setting up data centers that can achieve successful failover. However the burden is still on the corporation to manage and maintain such an environment, including regular hardware and software upgrades. Cloud computing on the other hand removes most of the disaster recovery requirements by hiding many of the underlying complexities.   Cloud Providers A cloud provider is an infrastructure provider offering additional tools to achieve application elasticity and availability that are not usually available on-premise. 
For example, Microsoft Azure provides a simple configuration screen that makes it possible to run 1 or 100 web sites by clicking a button or two on a screen (simplifying provisioning), and soon SQL Azure will offer Data Federation to allow database sharding (which allows you to scale the database tier seamlessly and automatically). Other cloud providers offer certain features that are not available on-premise as well, such as Amazon S3 (Simple Storage Service), which gives you virtually unlimited storage capabilities for simple data stores and is somewhat equivalent to the Microsoft Azure Table offering (a server-independent data storage model). Unlike IaaS providers, cloud providers give you the necessary tools to adopt elasticity as part of your application architecture. Some cloud providers offer built-in high availability that gets you out of the business of configuring clustered solutions or running multiple data centers. Some cloud providers will give you more control (which puts some of that burden back on the customers' shoulders) and others will tend to make high availability totally transparent. For example, SQL Azure provides high availability automatically, which would be very difficult (and very costly) to achieve on premise. Keep in mind that each cloud provider has its strengths and weaknesses; some are better at achieving transparent scalability and server independence than others. Not for Everyone Note however that it is up to you to leverage the elasticity capabilities of a cloud provider, as discussed previously; if you build a website that does not need to scale, for which elasticity is not important, then you can use a traditional host provider unless you also need high availability. Leveraging the technologies of cloud providers can be difficult and can become a journey for companies that build their solutions in a scale-up fashion. Cloud computing promises to address cost containment and scalability of applications with built-in high availability. If your application does not need to scale or you do not need high availability, then cloud computing may not be for you. In fact, you may pay a premium to run your applications with cloud providers due to the underlying technologies built specifically for scalability and availability requirements. And as such, the cloud is not for everyone. Consistent Customer Experience, Predictable Cost With all its complexities, buzz and foggy definition, cloud computing boils down to a simple objective: consistent customer experience at a predictable cost. The objective of a cloud solution is to provide the same user experience to your last customer as to the first, while keeping your operating costs directly proportional to the number of customers you have. Making your applications elastic and highly available across all their tiers, with as much automation as possible, achieves the first objective of a consistent customer experience. And the ability to expand and contract the infrastructure footprint of your application dynamically achieves the cost containment objective. Herve Roggero is a SQL Azure MVP and co-author of Pro SQL Azure (Apress). He is the co-founder of Blue Syntax Consulting (www.bluesyntax.net), a company focused on helping customers understand and adopt cloud computing technologies. For more information, contact Herve at hroggero @ bluesyntax.net.

    Read the article

  • Access denied error when adding new server to existing SharePoint 2007 Farm

    - by Kelly Jones
    I recently got an error when adding a new SharePoint 2007 (SP2) server to our existing MOSS farm.  I had run the installation fine, and was walking through the SharePoint Configuration Wizard, when I got an error on step 2: “Resource retrieved id PostSetupConfigurationFailedEventLog is Configuration of SharePoint Products and Technologies failed.” I searched the net, but didn’t really find anything. I then remembered that I had forgotten to run the latest SharePoint updates (for us, the latest applied was December 2009).  I installed the WSS update, and then the MOSS update and both worked fine.  I then ran the Configuration wizard again and it worked without any errors.

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the correpsonding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package. set.seed(1) d <- data.frame(year = rep(2000:2014, each = 3),         count = round(runif(45, 0, 20))) dim(d) library(plyr) This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame. # Example 1 res <- ddply(d, "year", function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(cv.count = cv)   }) To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame. D <- ore.push(d) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(year=x$year[1], cv.count = cv)   }, FUN.VALUE=data.frame(year=1, cv.count=1)) You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same, however the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings whe data is large. R> class(res) [1] "ore.frame" attr(,"package") [1] "OREbase" R> head(res)    year cv.count 1 2000 0.3984848 2 2001 0.6062178 3 2002 0.2309401 4 2003 0.5773503 5 2004 0.3069680 6 2005 0.3431743 To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or to a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year. 
# Example 2 ddply(d, "year", summarise, mean.count = mean(count)) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   data.frame(year=x$year[1], mean.count = mean.count)   }, FUN.VALUE=data.frame(year=1, mean.count=1)) R> head(res)    year mean.count 1 2000 7.666667 2 2001 13.333333 3 2002 15.000000 4 2003 3.000000 5 2004 12.333333 6 2005 14.666667 Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicilty, which is returned as an ore.frame. # Example 3 ddply(d, "year", transform, total.count = sum(count)) res <- ore.groupApply (D, D$year, function(x) {   total.count <- sum(x$count)   data.frame(year=x$year[1], count=x$count, total.count = total.count)   }, FUN.VALUE=data.frame(year=1, count=1, total.count=1)) > head(res)    year count total.count 1 2000 5 23 2 2000 7 23 3 2000 11 23 4 2001 18 40 5 2001 4 40 6 2001 18 40 In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns. # Example 4 ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),       cv = sigma/mu) res <- ore.groupApply (D, D$year, function(x) {   mu <- mean(x$count)   sigma <- sd(x$count)   cv <- sigma/mu   data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)   }, FUN.VALUE=data.frame(year=1, count=1, mu=1,sigma=1,cv=1)) R> head(res)    year count mu sigma cv 1 2000 5 7.666667 3.055050 0.3984848 2 2000 7 7.666667 3.055050 0.3984848 3 2000 11 7.666667 3.055050 0.3984848 4 2001 18 13.333333 8.082904 0.6062178 5 2001 4 13.333333 8.082904 0.6062178 6 2001 18 13.333333 8.082904 0.6062178 In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data. # Example 5 baseball.dat <- subset(baseball, year > 2000) # data from the plyr package x <- ddply(baseball.dat, c("year", "team"), summarize,            homeruns = sum(hr)) We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows. BB.DAT <- ore.push(baseball) BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+")) BB.DAT2 <- subset(BB.DAT, year > 2000) X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {   data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))   }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE) res <- ore.sort(X, by=c("year","team")) R> head(res)    year team homeruns 1 2001 ANA 4 2 2001 ARI 155 3 2001 ATL 63 4 2001 BAL 58 5 2001 BOS 77 6 2001 CHA 63 Our next example is derived from the ggplot function documentation. This illustrates the use of ddply within using the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database and invoke this on the database server. The graph will be returned to the client window, as depicted below. 
# Example 6 with ggplot2 library(ggplot2) df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),                  y = rnorm(30)) # Compute sample mean and standard deviation in each group library(plyr) ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y)) # Set up a skeleton ggplot object and add layers: ggplot() +   geom_point(data = df, aes(x = gp, y = y)) +   geom_point(data = ds, aes(x = gp, y = mean),              colour = 'red', size = 3) +   geom_errorbar(data = ds, aes(x = gp, y = mean,                                ymin = mean - sd, ymax = mean + sd),              colour = 'red', width = 0.4) DF <- ore.push(df) ore.tableApply(DF, function(df) {   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4) }) But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element. df2 <- rbind(df,df,df) df2$y <- df2$y + rnorm(nrow(df2)) df2$index <- c(rep(1,300), rep(2,300), rep(3,300)) DF2 <- ore.push(df2) res <- ore.groupApply(DF2, DF2$index, function(df) {   df <- df[,1:2]   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4)   }, parallel=TRUE) res[[1]] res[[2]] res[[3]] To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • ASPNET WebAPI REST Guidance

    - by JoshReuben
    ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework. While I may be more partial to NodeJS these days, there is no denying that WebAPI is a well engineered framework. What follows is my investigation of how to leverage WebAPI to construct a RESTful frontend API.   The Advantages of REST Methodology over SOAP Simpler API for CRUD ops Standardize Development methodology - consistent and intuitive Standards based à client interop Wide industry adoption, Ease of use à easy to add new devs Avoid service method signature blowout Smaller payloads than SOAP Stateless à no session data means multi-tenant scalability Cache-ability Testability   General RESTful API Design Overview · utilize HTTP Protocol - Usage of HTTP methods for CRUD, standard HTTP response codes, common HTTP headers and Mime Types · Resources are mapped to URLs, actions are mapped to verbs and the rest goes in the headers. · keep the API semantic, resource-centric – A RESTful, resource-oriented service exposes a URI for every piece of data the client might want to operate on. A REST-RPC Hybrid exposes a URI for every operation the client might perform: one URI to fetch a piece of data, a different URI to delete that same data. utilize Uri to specify CRUD op, version, language, output format: http://api.MyApp.com/{ver}/{lang}/{resource_type}/{resource_id}.{output_format}?{key&filters} · entity CRUD operations are matched to HTTP methods: · Create - POST / PUT · Read – GET - cacheable · Update – PUT · Delete - DELETE · Use Uris to represent a hierarchies - Resources in RESTful URLs are often chained · Statelessness allows for idempotency – apply an op multiple times without changing the result. POST is non-idempotent, the rest are idempotent (if DELETE flags records instead of deleting them). · Cache indication - Leverage HTTP headers to label cacheable content and indicate the permitted duration of cache · PUT vs POST - The client uses PUT when it determines which URI (Id key) the new resource should have. The client uses POST when the server determines they key. PUT takes a second param – the id. POST creates a new resource. The server assigns the URI for the new object and returns this URI as part of the response message. Note: The PUT method replaces the entire entity. That is, the client is expected to send a complete representation of the updated product. If you want to support partial updates, the PATCH method is preferred DELETE deletes a resource at a specified URI – typically takes an id param · Leverage Common HTTP Response Codes in response headers 200 OK: Success 201 Created - Used on POST request when creating a new resource. 304 Not Modified: no new data to return. 400 Bad Request: Invalid Request. 401 Unauthorized: Authentication. 403 Forbidden: Authorization 404 Not Found – entity does not exist. 406 Not Acceptable – bad params. 409 Conflict - For POST / PUT requests if the resource already exists. 500 Internal Server Error 503 Service Unavailable · Leverage uncommon HTTP Verbs to reduce payload sizes HEAD - retrieves just the resource meta-information. OPTIONS returns the actions supported for the specified resource. PATCH - partial modification of a resource. · When using PUT, POST or PATCH, send the data as a document in the body of the request. Don't use query parameters to alter state. · Utilize Headers for content negotiation, caching, authorization, throttling o Content Negotiation – choose representation (e.g. JSON or XML and version), language & compression. 
Signal via RequestHeader.Accept & ResponseHeader.Content-Type Accept: application/json;version=1.0 Accept-Language: en-US Accept-Charset: UTF-8 Accept-Encoding: gzip o Caching - ResponseHeader: Expires (absolute expiry time) or Cache-Control (relative expiry time) o Authorization - basic HTTP authentication uses the RequestHeader.Authorization to specify a base64 encoded string "username:password". can be used in combination with SSL/TLS (HTTPS) and leverage OAuth2 3rd party token-claims authorization. Authorization: Basic sQJlaTp5ZWFslylnaNZ= o Rate Limiting - Not currently part of HTTP so specify non-standard headers prefixed with X- in the ResponseHeader. X-RateLimit-Limit: 10000 X-RateLimit-Remaining: 9990 · HATEOAS Methodology - Hypermedia As The Engine Of Application State – leverage API as a state machine where resources are states and the transitions between states are links between resources and are included in their representation (hypermedia) – get API metadata signatures from the response Link header - in a truly REST based architecture any URL, except the initial URL, can be changed, even to other servers, without worrying about the client. · error responses - Do not just send back a 200 OK with every response. Response should consist of HTTP error status code (JQuery has automated support for this), A human readable message , A Link to a meaningful state transition , & the original data payload that was problematic. · the URIs will typically map to a server-side controller and a method name specified by the type of request method. Stuff all your calls into just four methods is not as crazy as it sounds. · Scoping - Path variables look like you’re traversing a hierarchy, and query variables look like you’re passing arguments into an algorithm · Mapping URIs to Controllers - have one controller for each resource is not a rule – can consolidate - route requests to the appropriate controller and action method · Keep URls Consistent - Sometimes it’s tempting to just shorten our URIs. not recommend this as this can cause confusion · Join Naming – for m-m entity relations there may be multiple hierarchy traversal paths · Routing – useful level of indirection for versioning, server backend mocking in development ASPNET WebAPI Considerations ASPNET WebAPI implements a lot (but not all) RESTful API design considerations as part of its infrastructure and via its coding convention. Overview When developing an API there are basically three main steps: 1. Plan out your URIs 2. Setup return values and response codes for your URIs 3. Implement a framework for your API.   Design · Leverage Models MVC folder · Repositories – support IoC for tests, abstraction · Create DTO classes – a level of indirection decouples & allows swap out · Self links can be generated using the UrlHelper · Use IQueryable to support projections across the wire · Models can support restful navigation properties – ICollection<T> · async mechanism for long running ops - return a response with a ticket – the client can then poll or be pushed the final result later. · Design for testability - Test using HttpClient , JQuery ( $.getJSON , $.each) , fiddler, browser debug. Leverage IDependencyResolver – IoC wrapper for mocking · Easy debugging - IE F12 developer tools: Network tab, Request Headers tab     Routing · HTTP request method is matched to the method name. (This rule applies only to GET, POST, PUT, and DELETE requests.) · {id}, if present, is matched to a method parameter named id. 
· Query parameters are matched to parameter names when possible · Done in config via Routes.MapHttpRoute – similar to MVC routing · Can alternatively: o decorate controller action methods with HttpDelete, HttpGet, HttpHead,HttpOptions, HttpPatch, HttpPost, or HttpPut., + the ActionAttribute o use AcceptVerbsAttribute to support other HTTP verbs: e.g. PATCH, HEAD o use NonActionAttribute to prevent a method from getting invoked as an action · route table Uris can support placeholders (via curly braces{}) – these can support default values and constraints, and optional values · The framework selects the first route in the route table that matches the URI. Response customization · Response code: By default, the Web API framework sets the response status code to 200 (OK). But according to the HTTP/1.1 protocol, when a POST request results in the creation of a resource, the server should reply with status 201 (Created). Non Get methods should return HttpResponseMessage · Location: When the server creates a resource, it should include the URI of the new resource in the Location header of the response. public HttpResponseMessage PostProduct(Product item) {     item = repository.Add(item);     var response = Request.CreateResponse<Product>(HttpStatusCode.Created, item);     string uri = Url.Link("DefaultApi", new { id = item.Id });     response.Headers.Location = new Uri(uri);     return response; } Validation · Decorate Models / DTOs with System.ComponentModel.DataAnnotations properties RequiredAttribute, RangeAttribute. · Check payloads using ModelState.IsValid · Under posting – leave out values in JSON payload à JSON formatter assigns a default value. Use with RequiredAttribute · Over-posting - if model has RO properties à use DTO instead of model · Can hook into pipeline by deriving from ActionFilterAttribute & overriding OnActionExecuting Config · Done in App_Start folder > WebApiConfig.cs – static Register method: HttpConfiguration param: The HttpConfiguration object contains the following members. Member Description DependencyResolver Enables dependency injection for controllers. Filters Action filters – e.g. exception filters. Formatters Media-type formatters. by default contains JsonFormatter, XmlFormatter IncludeErrorDetailPolicy Specifies whether the server should include error details, such as exception messages and stack traces, in HTTP response messages. Initializer A function that performs final initialization of the HttpConfiguration. MessageHandlers HTTP message handlers - plug into pipeline ParameterBindingRules A collection of rules for binding parameters on controller actions. Properties A generic property bag. Routes The collection of routes. Services The collection of services. · Configure JsonFormatter for circular references to support links: PreserveReferencesHandling.Objects Documentation generation · create a help page for a web API, by using the ApiExplorer class. · The ApiExplorer class provides descriptive information about the APIs exposed by a web API as an ApiDescription collection · create the help page as an MVC view public ILookup<string, ApiDescription> GetApis()         {             return _explorer.ApiDescriptions.ToLookup(                 api => api.ActionDescriptor.ControllerDescriptor.ControllerName); · provide documentation for your APIs by implementing the IDocumentationProvider interface. Documentation strings can come from any source that you like – e.g. 
extract XML comments or define custom attributes to apply to the controller [ApiDoc("Gets a product by ID.")] [ApiParameterDoc("id", "The ID of the product.")] public HttpResponseMessage Get(int id) · GlobalConfiguration.Configuration.Services – add the documentation Provider · To hide an API from the ApiExplorer, add the ApiExplorerSettingsAttribute Plugging into the Message Handler pipeline · Plug into request / response pipeline – derive from DelegatingHandler and override theSendAsync method – e.g. for logging error codes, adding a custom response header · Can be applied globally or to a specific route Exception Handling · Throw HttpResponseException on method failures – specify HttpStatusCode enum value – examine this enum, as its values map well to typical op problems · Exception filters – derive from ExceptionFilterAttribute & override OnException. Apply on Controller or action methods, or add to global HttpConfiguration.Filters collection · HttpError object provides a consistent way to return error information in the HttpResponseException response body. · For model validation, you can pass the model state to CreateErrorResponse, to include the validation errors in the response public HttpResponseMessage PostProduct(Product item) {     if (!ModelState.IsValid)     {         return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState); Cookie Management · Cookie header in request and Set-Cookie headers in a response - Collection of CookieState objects · Specify Expiry, max-age resp.Headers.AddCookies(new CookieHeaderValue[] { cookie }); Internet Media Types, formatters and serialization · Defaults to application/json · Request Accept header and response Content-Type header · determines how Web API serializes and deserializes the HTTP message body. There is built-in support for XML, JSON, and form-urlencoded data · customizable formatters can be inserted into the pipeline · POCO serialization is opt out via JsonIgnoreAttribute, or use DataMemberAttribute for optin · JSON serializer leverages NewtonSoft Json.NET · loosely structured JSON objects are serialzed as JObject which derives from Dynamic · to handle circular references in json: json.SerializerSettings.PreserveReferencesHandling =    PreserveReferencesHandling.All à {"$ref":"1"}. · To preserve object references in XML [DataContract(IsReference=true)] · Content negotiation Accept: Which media types are acceptable for the response, such as “application/json,” “application/xml,” or a custom media type such as "application/vnd.example+xml" Accept-Charset: Which character sets are acceptable, such as UTF-8 or ISO 8859-1. Accept-Encoding: Which content encodings are acceptable, such as gzip. Accept-Language: The preferred natural language, such as “en-us”. o Web API uses the Accept and Accept-Charset headers. (At this time, there is no built-in support for Accept-Encoding or Accept-Language.) 
· Controller methods can take JSON representations of DTOs as params – auto-deserialization · Typical JQuery GET request: function find() {     var id = $('#prodId').val();     $.getJSON("api/products/" + id,         function (data) {             var str = data.Name + ': $' + data.Price;             $('#product').text(str);         })     .fail(         function (jqXHR, textStatus, err) {             $('#product').text('Error: ' + err);         }); }            · Typical GET response: HTTP/1.1 200 OK Server: ASP.NET Development Server/10.0.0.0 Date: Mon, 18 Jun 2012 04:30:33 GMT X-AspNet-Version: 4.0.30319 Cache-Control: no-cache Pragma: no-cache Expires: -1 Content-Type: application/json; charset=utf-8 Content-Length: 175 Connection: Close [{"Id":1,"Name":"TomatoSoup","Price":1.39,"ActualCost":0.99},{"Id":2,"Name":"Hammer", "Price":16.99,"ActualCost":10.00},{"Id":3,"Name":"Yo yo","Price":6.99,"ActualCost": 2.05}] True OData support · Leverage Query Options $filter, $orderby, $top and $skip to shape the results of controller actions annotated with the [Queryable]attribute. [Queryable]  public IQueryable<Supplier> GetSuppliers()  · Query: ~/Suppliers?$filter=Name eq ‘Microsoft’ · Applies the following selection filter on the server: GetSuppliers().Where(s => s.Name == “Microsoft”)  · Will pass the result to the formatter. · true support for the OData format is still limited - no support for creates, updates, deletes, $metadata and code generation etc · vnext: ability to configure how EditLinks, SelfLinks and Ids are generated Self Hosting no dependency on ASPNET or IIS: using (var server = new HttpSelfHostServer(config)) {     server.OpenAsync().Wait(); Tracing · tracability tools, metrics – e.g. send to nagios · use your choice of tracing/logging library, whether that is ETW,NLog, log4net, or simply System.Diagnostics.Trace. · To collect traces, implement the ITraceWriter interface public class SimpleTracer : ITraceWriter {     public void Trace(HttpRequestMessage request, string category, TraceLevel level,         Action<TraceRecord> traceAction)     {         TraceRecord rec = new TraceRecord(request, category, level);         traceAction(rec);         WriteTrace(rec); · register the service with config · programmatically trace – has helper extension methods: Configuration.Services.GetTraceWriter().Info( · Performance tracing - pipeline writes traces at the beginning and end of an operation - TraceRecord class includes aTimeStamp property, Kind property set to TraceKind.Begin / End Security · Roles class methods: RoleExists, AddUserToRole · WebSecurity class methods: UserExists, .CreateUserAndAccount · Request.IsAuthenticated · Leverage HTTP 401 (Unauthorized) response · [AuthorizeAttribute(Roles="Administrator")] – can be applied to Controller or its action methods · See section in WebApi document on "Claim-based-security for ASP.NET Web APIs using DotNetOpenAuth" – adapt this to STS.--> Web API Host exposes secured Web APIs which can only be accessed by presenting a valid token issued by the trusted issuer. 
http://zamd.net/2012/05/04/claim-based-security-for-asp-net-web-apis-using-dotnetopenauth/ · Use MVC membership provider infrastructure and add a DelegatingHandler child class to the WebAPI pipeline - http://stackoverflow.com/questions/11535075/asp-net-mvc-4-web-api-authentication-with-membership-provider - this will perform the login actions · Then use AuthorizeAttribute on controllers and methods for role mapping- http://sixgun.wordpress.com/2012/02/29/asp-net-web-api-basic-authentication/ · Alternate option here is to rely on MVC App : http://forums.asp.net/t/1831767.aspx/1
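    To tie several of the points above together (convention-based routing, response status codes, and the Location header on POST), here is a small illustrative sketch of a Web API controller. The Product type and the in-memory store are assumptions made up for the example, not part of the original guidance.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public class ProductsController : ApiController
        {
            // made-up in-memory store standing in for a real repository
            private static readonly ConcurrentDictionary<int, Product> Store =
                new ConcurrentDictionary<int, Product>();

            // GET api/products - matched to HTTP GET because the method name starts with "Get"
            public IEnumerable<Product> GetAllProducts()
            {
                return Store.Values;
            }

            // GET api/products/5 - {id} from the route is bound to the 'id' parameter
            public Product GetProduct(int id)
            {
                Product item;
                if (!Store.TryGetValue(id, out item))
                {
                    // 404 when the resource does not exist, per the response-code guidance above
                    throw new HttpResponseException(HttpStatusCode.NotFound);
                }
                return item;
            }

            // POST api/products - the server assigns the key, so POST (not PUT) is used;
            // returns 201 Created and the URI of the new resource in the Location header
            public HttpResponseMessage PostProduct(Product item)
            {
                item.Id = Store.Count + 1;
                Store[item.Id] = item;
                var response = Request.CreateResponse(HttpStatusCode.Created, item);
                response.Headers.Location = new Uri(Url.Link("DefaultApi", new { id = item.Id }));
                return response;
            }

            // DELETE api/products/5 - idempotent; repeating the call leaves the same end state
            public void DeleteProduct(int id)
            {
                Product removed;
                Store.TryRemove(id, out removed);
            }
        }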

    Read the article

  • BEA WebLogic Server 9.2 MP4

    - by Hiroyuki Yoshino
    As of December 14, 2010, BEA WebLogic Server 9.2 MP4 (MP = Maintenance Pack) is available for download as part of the BEA WebLogic Platform 9.2 JP Media Pack. It is the follow-on maintenance release to BEA WebLogic Server 9.2 MP3. Note that Premier Support for BEA WebLogic Platform 9.x ends in November 2011, so please plan your move to a current release accordingly; upgrade guidance and related resources are available from the Upgrade Center on the Oracle Technology Network (OTN).

    Read the article

  • ETPM/OUAF 2.3.1 Framework Overview - Session 3

    - by MHundal
    The OUAF Framework Session 3 is now available. This session covered the following topics:
    1. UI Maps - how UI Maps are generated and displayed based on the setup of the Business Object, plus tips and tricks for generating UI Maps.
    2. BPA Scripts - how scripts have changed with the different step types; an overview of BPA Scripts.
    3. Case Study - a short presentation on using the different options available when implementing requirements.
    4. Revision Control - the options for revision control of configuration objects in ETPM.
    You can stream the recording using the following link: https://oracletalk.webex.com/oracletalk/ldr.php?AT=pb&SP=MC&rID=70894897&rKey=243f49614fd5d9c6
    You can download the recording using the following link: https://oracletalk.webex.com/oracletalk/lsr.php?AT=dw&SP=MC&rID=70894897&rKey=863c9dacce78aad2

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect Cloud Computing, along with John Duimovich IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder.Their focus was on the challenges to make Java more efficient and productive given the hardware and software environments of 2012. “One idea that is very interesting is the idea of multi-tenancy,” said McGee, “and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good but there is a lot more we can do. We can share middleware and the application itself.” McGee challenged developers to better enable the Java language to function in these higher density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it’s a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is to then automate that deployment so that the complexity of infrastructure can be by-passed and developers can live in a simpler world where the cloud can automatically configure the needed environment. McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. “The point,” explained McGee, “is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications.”John Duimovich on Improvements in JavaMcGee then brought onstage IBM’s Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently.Duimovich explained that, “When you run lots of copies of Java in the cloud or any hypervisor virtualized system, there are a lot of duplications of code and jar files. IBM has a facility called ‘shared classes’ where we put shared code, read only artefacts in a cache that is sharable across hypervisors.” By putting JIT code in ahead of time, he explained that the application server will use 20% less memory and operate 30% faster.  He described another example of how the JVM allows for the maximum amount of sharing that manages the tenants and file sockets and memory use through throttling and control. Duimovich touched on the “thin is in” model and IBM’s Liberty Profile and lightweight runtime for the cloud, which allows for greater efficiency in interacting with the cloud.Duimovich discussed the confusion Java developers experience when, for example, the hypervisor tells them that that they have 8 and then 4 and then 16 cores. “Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. 
"You may have 10 instances of an operating system and you may need to reallocate memory," explained Duimovich. He showed how to resize LPARs, reallocate CPUs and migrate applications as needed. He explained how application servers can resize thread pools and better use resources based on information from the hypervisors. Java Challenges in Hardware and Software McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage – developments such as the speed of SSDs, the exploitation of high-speed, low-latency networking, and recent advances such as storage-class memory and non-volatile main memory. "So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware," said McGee. McGee discussed transactional messaging applications, in which developers transactionally persist messages to storage, something traditionally done by backing messages with spinning disks, an approach that is now largely outdated. "Now," he pointed out, "we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI express-based flash memory device, it is still Flash but put on a PCI express bus on a card closer to the CPU. This way I get 300,000 messages a second and 25% improvement in latency." McGee's central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. "We need to be able to balance these things in Java – we need to maintain the abstraction but also be able to exploit the evolution of hardware technology," said McGee. According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems – systems that are pre-optimized and pre-integrated. McGee closed IBM's engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers now must understand the strengths and weaknesses of such newcomers as applications increasingly involve a complex interconnection of languages.

    Read the article

  • Changing CSS with jQuery syntax in Silverlight using jLight

    - by Timmy Kokke
    Lately I’ve run into situations where I had to change elements or request a value in the DOM from Silverlight. jLight, which was introduced in an earlier article, can help with that. jQuery offers great ways to change CSS at runtime. Silverlight can access the DOM, but it isn’t as easy as jQuery. All examples shown in this article can be looked at in this online demo. The code can be downloaded here.

Part 1: The easy stuff

Selecting and changing properties is pretty straightforward. Setting the text color in all <B></B> elements can be done using the following code:   jQuery.Select("b").Css("color", "red");   The Css() method is an extension method on jQueryObject, which is returned by the jQuery.Select() method. The Css() method takes two parameters. The first is the CSS style property; any property used in CSS can be entered in this string. The second parameter is the value you want to give the property. In this case the property is “color” and it is changed to “red”. To narrow down which elements are selected you can add a pseudo-class selector to the string passed to the Select() method, as shown in the next example.   jQuery.Select("b:first").Css("font-family", "sans-serif");   The “:first” pseudo-class selector selects only the first element. This example changes the “font-family” property of the first <B></B> element to “sans-serif”. To make use of IntelliSense in Visual Studio I’ve added extension methods to help with the pseudo-classes. In the example below the “font-weight” of every even <LI></LI> element is set to “bold”.   jQuery.Select("li".Even()).Css("font-weight", "bold");   Because the Css() extension method returns a jQueryObject it is possible to chain calls to Css(). The following example shows setting the “color”, “background-color” and the “font-size” of all headers in one go.   jQuery.Select(":header").Css("color", "#12FF70") .Css("background-color", "yellow") .Css("font-size", "25px");

Part 2: More complex stuff

In only a few cases do you need to change just one style property. More often you want to change an entire set of style properties in one go. You could chain a lot of Css() calls together, but a better way is to add a class to a stylesheet and define all the properties there. With the AddClass() method you can apply a style class to a set of elements. This example shows how to add the “demostyle” class to all <B></B> elements in the document.   jQuery.Select("b").AddClass("demostyle");   Removing the class works in the same way:   jQuery.Select("b").RemoveClass("demostyle");   jLight is built for interacting with the DOM from Silverlight using jQuery. A jQueryObjectCss object can be used to define different sets of style properties in Silverlight. The 60+ most common CSS style properties are defined in the jQueryObjectCss class. A string indexer can be used to access all style properties (CssObject1[“background-color”] equals CssObject1.BackgroundColor). In the code below, two jQueryObjectCss objects are defined and instantiated.   
private jQueryObjectCss CssObject1; private jQueryObjectCss CssObject2;   public Demo2() { CssObject1 = new jQueryObjectCss { BackgroundColor = "Lime", Color="Black", FontSize = "12pt", FontFamily = "sans-serif", FontWeight = "bold", MarginLeft = 150, LineHeight = "28px", Border = "Solid 1px #880000" }; CssObject2 = new jQueryObjectCss { FontStyle = "Italic", FontSize = "48", Color = "#225522" }; InitializeComponent(); }   Now, instead of chaining to set all the different properties, you can just pass one of the jQueryObjectCss objects to the Css() method. In this case all <LI></LI> elements are set to match this object.   jQuery.Select("li").Css(CssObject1);   When using the jQueryObjectCss objects chaining is still possible. In the following example all headers are given a blue background color and the last one is set to match CssObject2.   jQuery.Select(":header").Css(new jQueryObjectCss{BackgroundColor = "Blue"}) .Eq(-1).Css(CssObject2);

Part 3: The fun stuff

Having Silverlight call JavaScript and then having JavaScript call Silverlight back requires a lot of plumbing code. Everything has to be registered and strings are passed back and forth to execute the JavaScript. jLight makes this kind of stuff so easy, it becomes fun to use. In a lot of situations jQuery can call a function to decide what to do, for example setting a style class based on a complex expression. jLight can do the same, but the callback methods are defined in Silverlight. This example calls the function() method for each <LI></LI> element. The callback method has to take a jQueryObject, an integer and a string as parameters. In this case jLight differs a bit from the actual jQuery implementation: jQuery uses only the index and the className parameters. A jQueryObject is added to make it simpler to access the attributes and properties of the element. If the text of the list item starts with a ‘D’ or an ‘M’ the class is set; otherwise null is returned and nothing happens.   private void button1_Click(object sender, RoutedEventArgs e) { jQuery.Select("li").AddClass(function); }   private string function(jQueryObject obj, int index, string className) { if (obj.Text[0] == 'D' || obj.Text[0] == 'M') return "demostyle"; return null; }   The last thing I would like to demonstrate uses even more Silverlight and less jLight, but it shows the power of the combination: animating a style property using a Storyboard with easing functions. First a dependency property is defined, in this case a double named Intensity. By handling the changed event the color is set using jQuery.   public double Intensity { get { return (double)GetValue(IntensityProperty); } set { SetValue(IntensityProperty, value); } }   public static readonly DependencyProperty IntensityProperty = DependencyProperty.Register("Intensity", typeof(double), typeof(Demo3), new PropertyMetadata(0.0, IntensityChanged));   private static void IntensityChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { var i = (byte)(double)e.NewValue; jQuery.Select("span").Css("color", string.Format("#{0:X2}{0:X2}{0:X2}", i)); }   An animation has to be created. This code defines a Storyboard with one keyframe that uses a bounce ease as an easing function. The animation is set to target the Intensity dependency property defined earlier.   
private Storyboard CreateAnimation(double value) { Storyboard storyboard = new Storyboard(); var da = new DoubleAnimationUsingKeyFrames(); var d = new EasingDoubleKeyFrame { EasingFunction = new BounceEase(), KeyTime = KeyTime.FromTimeSpan(TimeSpan.FromSeconds(1.0)), Value = value }; da.KeyFrames.Add(d); Storyboard.SetTarget(da, this); Storyboard.SetTargetProperty(da, new PropertyPath(Demo3.IntensityProperty)); storyboard.Children.Add(da); return storyboard; }   Initially the Intensity is set to 128, which results in a gray color. When one of the buttons is pressed, a new animation is created and played: one animates to white, the other to black.   public Demo3() { InitializeComponent(); Intensity = 128; }   private void button2_Click(object sender, RoutedEventArgs e) { CreateAnimation(255).Begin(); }   private void button3_Click(object sender, RoutedEventArgs e) { CreateAnimation(0).Begin(); }

Conclusion

As you can see, jLight can make the life of a Silverlight developer a lot easier when accessing the DOM. Almost all jQuery functions that are defined in jLight use the same constructions as described above. I’ve tried to stay as close as possible to the real jQuery. Having JavaScript perform callbacks to Silverlight using jLight will be described in more detail in a future tutorial about AJAX or eventing.

    Read the article

  • Customer Centric Wealth Management

    - by michael.seback
    While the world continues to search its way out of the recent financial turmoil and recession, the crisis has no doubt laid bare the inherent faults in the wealth management industry and the larger financial system. To counter these concerns, wealth management firms are now actively seeking and evaluating avenues to rebuild the lost trust. They are looking at engaging their customers in managing their investments in a more collaborative and transparent manner. At the same time, wealth managers are also seeking to empower themselves with complete and comprehensive customer information in order to provide the best advice and the best solution at the right time. Read your copy of this new global White Paper on Wealth Management.

    Read the article

  • Part 3: Customization Strategy or how long does it take

    - by volker.eckardt(at)oracle.com
    The previous part in this blog should have made us aware that many procedures are required to manage all these steps. To review your status, let me ask you a question: What is your customization strategy? Your answer might be something like, 'Customization strategy? Well, we have standards and we get requirement documents approved.' Let me ask you another question: How long does it take to redeploy all your customizations into a fresh installation? In 90% of all installations the answer would be: we can't! Although no one would (hopefully) ever have to do it, just thinking about it makes us recognize that today we have too many manual steps involved, different procedures and sometimes undocumented manual steps to complete a customization installation. And ... in general, too many customizations.

Why is working with customizations often so complicated and time consuming? Here are the key reasons as I have identified them in my projects:
- Customization standards defined, but not maintained
- Different knowledge on the developer side (results get an individual developer touch)
- No need to automate deployment (not forced by the client)
- Different documentation styles, not easy to hand over to someone else
- Different development concepts, difficult for maintenance
- Just the minimum present for testing, often positive testing only
- Deviations from naming conventions accepted, although defined
- Complicated procedures, therefore sometimes partially ignored
- And last but not least, hand-made version control (still)

If you had to 'redeploy all your customizations', you would have to:
- Follow all your own standards and best practices
- Track deviations and define corrective tasks
- Automate as much as possible and minimize manual tasks
- Not allow any change to come in without version control
- Utilize products to support you in deployment
- Minimize hand-made scripts and extensive documentation
- Regularly review the techniques used to guarantee that all are in line with the current release and easily maintainable
- Create solution libraries and force the team to contribute and reuse
- Define quality activities and execute them
- Define a procedure to release customizations

I know, it is easy to write down, but much harder to manage. I will provide some guidelines in my next blog. Volker

    Read the article

  • Siebel-Oracle Integrations

    Aseem Chandra, Senior Director, Industry Strategy, talks to Cliff about how customers can benefit now from packaged integration of Siebel CRM with Oracle back-office applications, and about the roadmap for upgrading to Fusion Applications.

    Read the article

  • Writing a SQL Azure Book - Notes

    - by Herve Roggero
    Over the last few months I have had the opportunity to ramp up significantly on SQL Azure.  In fact I will be the co-author of Pro SQL Azure, published by Apress. This is going to be a book on how to best leverage SQL Azure, both from a technology and a design standpoint. Talking about design, one of the things I realized is that understanding the key limitations and boundary parameters of Azure in general, and more specifically SQL Azure, will play an important role in making sound design decisions that both meet reasonable performance requirements and minimize the costs associated with running a cloud computing solution.   The book touches on many design considerations including link encryption, the pricing model, design patterns, and also some important performance techniques that need to be leveraged when developing in Azure, including Caching, Lazy Properties and more.   Finally I started working with Shards and how to implement them in Azure to ensure database scalability beyond the current size limitations. Implementing shards is not simple, and the book will address how to create a shard technology within your code to provide a scale-out mechanism for your SQL Azure databases.   As you can see, there are many factors to consider when designing a SQL Azure database. While we can think of SQL Azure as a cloud version of SQL Server, it is best to look at it as a new platform to make sure you don’t make any assumptions on how to best leverage it.
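To make the sharding idea concrete, here is a minimal sketch of the kind of hash-based shard routing described above. It is purely illustrative and not the book's implementation; the class name, the modulo scheme and the assumption of a non-negative integer customer id are my own choices for the example.

    // Illustrative only: route a customer id to one of several SQL Azure databases (shards).
    using System;
    using System.Collections.Generic;

    public class ShardDirectory
    {
        private readonly List<string> _shardConnectionStrings;

        public ShardDirectory(IEnumerable<string> shardConnectionStrings)
        {
            _shardConnectionStrings = new List<string>(shardConnectionStrings);
        }

        // Pick a shard by taking the (non-negative) customer id modulo the number of shards.
        public string GetConnectionString(int customerId)
        {
            int shardIndex = customerId % _shardConnectionStrings.Count;
            return _shardConnectionStrings[shardIndex];
        }
    }

A real implementation would also have to handle re-balancing when shards are added and fan-out queries that span several shards; those concerns are beyond this sketch.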

    Read the article

  • Amanda Todd – What Parents Can Learn From Her Story

    - by D'Arcy Lussier
    Amanda Todd was a bullied teenager who committed suicide this week. Her story has become headline news due in part to the YouTube video she posted telling her story. The story is heartbreaking for so many reasons, but I wanted to talk about what we as parents can learn from this. Being the dad of two girls, one of whom is 10, I’m very aware of the dangers that the internet holds. When I saw her story, one thing jumped out at me – unmonitored internet access at an early age. My daughter (then 9) came home from a friend’s place once and asked if she could be in a YouTube video with her friend. Apparently this friend was allowed to do whatever she wanted on the internet, including posting goofy videos. This set off warning bells and we ensured our daughter realized the dangers and that she was never to post videos of herself. In Amanda’s story, unmonitored internet access, along with just being a young girl flattered by an online predator, were the key events that ultimately led to her suicide. Yes, the reaction of her classmates and “friends” was horrible as well; I’m not diminishing that. But our youth don’t yet fully understand that what they do on the internet today will potentially follow them forever. And the people they meet online aren’t necessarily who they claim to be. So what can we as parents learn from Amanda’s story?

Parents Shouldn’t Feel Bad About Being Internet Police

Our job as parents is in part to protect our kids and keep them safe, even if they don’t like our measures. This includes monitoring, supervising, and restricting their internet activities. In our house we have a family computer in the living room on which the kids can watch videos and surf the web. It’s in plain view of everyone, so you can’t hide what you’re looking at. If our daughter goes to a friend’s place, we ask about what they did and what they played. If the computer comes up, we ask about what they did on it. Luckily our daughter is very up front and honest in telling us things, so we have very open discussions.

Parents Need to Be Honest About the Dangers of the Internet

I’m sure every generation says that “kids grow up so fast these days”, but in our case the internet really does push our kids to be exposed to things they otherwise wouldn’t experience. One wrong word in a Google search, a click of a link in a spam email, or just general curiosity can expose a child to things they aren’t ready for or should never be exposed to (and I’m not just talking about adult material – have you seen some of the graphic pictures from war zones posted on news sites recently?). Our stance as parents has been to be open about discussing the dangers with our kids before they encounter any content – be proactive instead of reactive. Part of this is alerting them to the monsters that lurk on the internet as well. As kids explore the world wide web, they’re eventually going to encounter some chat room or some Facebook friend invite or other personal connection with someone. More than ever kids need to be educated on the dangers of engaging with people online and sharing personal information. 
You can think of it as an evolved version of the discussion our parents had with us about using the phone: “Don’t say ‘I’m home alone’, don’t say when mom or dad get home, don’t tell them any information, etc.”

Parents Need to Talk Self-Worth at Home

Katie makes the point better than I ever could (one bad word towards the end): our children need to understand their value beyond what the latest issue of TigerBeat says, or the media that continues to flaunt physical attributes over intelligence and character, or a society that puts its focus on status and wealth. They also have to realize that just because someone pays them a compliment, that doesn’t mean they should ignore personal boundaries and limits. What does this have to do with the internet? Well, in days past if you wanted to be social you had to go out somewhere. Now you can video chat with any number of people from the comfort of wherever your laptop happens to be – and not just text but full HD video with sound! While innocent children head online in the hopes of meeting cool people, predators with bad intentions are heading online too. As much as we try to monitor their online activity and be honest about the dangers of the internet, the human side of our kids isn’t something we can control. But we can try to influence them to see themselves as not needing to search out the acceptance of complete strangers online. Way easier said than done, but ensuring self-worth is discussed, encouraged, and celebrated is a step in the right direction.

Parental Wake-Up Call

This post is not a critique of Amanda’s parents. The reality is that cyber bullying and abuse is happening every day, and there are millions of parents who have no clue it’s happening to their children. Amanda’s story is a wake-up call that our children’s online activities may be putting them in danger. My heart goes out to the parents of this girl. As a father of daughters, I can’t imagine what I would do if I found my daughter having to hide in a ditch to avoid a mob, or had to call 911 to report my daughter had attempted suicide by drinking bleach, or had to deal with a child turning to drugs, alcohol or cutting to cope. It would be horrendous if we as parents didn’t re-evaluate our family internet policies in light of this event. And in the end, Amanda’s video was meant to bring attention to her plight and encourage others going through the same thing. We may not be kids, but we can still honour her memory by helping safeguard our children.

    Read the article

  • Know more about shared pool subpool

    - by Liu Maclean
    A question on T.askmaclean.com asked about the relationship between the Shared Pool and its subpools: how the hidden parameter _kghdsidx_count controls the number of subpools, and how each subpool is further split into sub-partitions (durations). The test below illustrates this: SQL> select * from v$version; BANNER ---------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE    10.2.0.5.0      Production TNS for Linux: Version 10.2.0.5.0 - Production NLSRTL Version 10.2.0.5.0 - Production SQL> set linesize 200 pagesize 1400 SQL> show parameter kgh NAME                                 TYPE                             VALUE ------------------------------------ -------------------------------- ------------------------------ _kghdsidx_count                      integer                          7 SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 536870914; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_11783.trc [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_11783.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036110 FIVE LARGEST SUB HEAPS for heap name="sga heap(1,0)"   desc=0x60036110 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f938 FIVE LARGEST SUB HEAPS for heap name="sga heap(2,0)"   desc=0x6003f938 HEAP DUMP heap name="sga heap(3,0)"  desc=0x60049160 FIVE LARGEST SUB HEAPS for heap name="sga heap(3,0)"   desc=0x60049160 HEAP DUMP heap name="sga heap(4,0)"  desc=0x60052988 FIVE LARGEST SUB HEAPS for heap name="sga heap(4,0)"   desc=0x60052988 HEAP DUMP heap name="sga heap(5,0)"  desc=0x6005c1b0 FIVE LARGEST SUB HEAPS for heap name="sga heap(5,0)"   desc=0x6005c1b0 HEAP DUMP heap name="sga heap(6,0)"  desc=0x600659d8 FIVE LARGEST SUB HEAPS for heap name="sga heap(6,0)"   desc=0x600659d8 HEAP DUMP heap name="sga heap(7,0)"  desc=0x6006f200 FIVE LARGEST SUB HEAPS for heap name="sga heap(7,0)"   desc=0x6006f200 SQL> alter system set "_kghdsidx_count"=6 scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area  859832320 bytes Fixed Size                  2100104 bytes Variable Size             746587256 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 536870914; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_11908.trc [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_11908.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360f0 FIVE LARGEST SUB HEAPS for heap name="sga heap(1,0)"   desc=0x600360f0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f918 FIVE LARGEST SUB HEAPS for heap name="sga heap(2,0)"   desc=0x6003f918 HEAP DUMP heap name="sga heap(3,0)"  desc=0x60049140 FIVE LARGEST SUB HEAPS for heap name="sga heap(3,0)"   desc=0x60049140 HEAP DUMP heap name="sga heap(4,0)"  desc=0x60052968 FIVE LARGEST SUB HEAPS for heap name="sga heap(4,0)"   desc=0x60052968 HEAP DUMP heap name="sga heap(5,0)"  desc=0x6005c190 FIVE LARGEST SUB HEAPS for heap name="sga heap(5,0)"   desc=0x6005c190 HEAP DUMP heap name="sga heap(6,0)"  desc=0x600659b8 FIVE LARGEST SUB HEAPS for heap name="sga heap(6,0)"   desc=0x600659b8 SQL> SQL> alter system set "_kghdsidx_count"=2 scope=spfile; System altered. SQL> SQL> startup force; ORACLE instance started. 
Total System Global Area  851443712 bytes Fixed Size                  2100040 bytes Variable Size             738198712 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12003.trc [oracle@vrh8 ~]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12003.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360b0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f8d SQL> alter system set cpu_count=16 scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area  851443712 bytes Fixed Size                  2100040 bytes Variable Size             738198712 bytes Database Buffers          104857600 bytes Redo Buffers                6287360 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL>  oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12065.trc [oracle@vrh8 ~]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12065.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360b0 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f8d8 SQL> show parameter sga_target NAME                                 TYPE                             VALUE ------------------------------------ -------------------------------- ------------------------------ sga_target                           big integer                      0 SQL> alter system set sga_target=1000M scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size                  2101544 bytes Variable Size             738201304 bytes Database Buffers          301989888 bytes Redo Buffers                6283264 bytes Database mounted. Database opened. SQL> alter system set sga_target=1000M scope=spfile; System altered. SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size                  2101544 bytes Variable Size             738201304 bytes Database Buffers          301989888 bytes Redo Buffers                6283264 bytes Database mounted. Database opened. SQL> SQL> SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL>  oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12148.trc SQL> SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options [oracle@vrh8 dbs]$ grep "sga heap"  /s01/admin/G10R25/udump/g10r25_ora_12148.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036690 HEAP DUMP heap name="sga heap(1,1)"  desc=0x60037ee8 HEAP DUMP heap name="sga heap(1,2)"  desc=0x60039740 HEAP DUMP heap name="sga heap(1,3)"  desc=0x6003af98 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003feb8 HEAP DUMP heap name="sga heap(2,1)"  desc=0x60041710 HEAP DUMP heap name="sga heap(2,2)"  desc=0x60042f68 _enable_shared_pool_durations: this hidden parameter controls whether the 10g shared pool duration feature is used; it is set to false when sga_target is 0. It is also set to false when cursor_space_for_time is set to true, although cursor_space_for_time is deprecated in 10.2.0.5. SQL> alter system set "_enable_shared_pool_durations"=false scope=spfile; System altered. 
SQL> SQL> startup force; ORACLE instance started. Total System Global Area 1048576000 bytes Fixed Size                  2101544 bytes Variable Size             738201304 bytes Database Buffers          301989888 bytes Redo Buffers                6283264 bytes Database mounted. Database opened. SQL> oradebug setmypid; Statement processed. SQL> oradebug dump heapdump 2; Statement processed. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_12233.trc SQL> SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options [oracle@vrh8 dbs]$ grep "sga heap"   /s01/admin/G10R25/udump/g10r25_ora_12233.trc HEAP DUMP heap name="sga heap"  desc=0x60000058 HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036690 HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003feb8 To summarize: 1. _kghdsidx_count determines the number of shared pool subpools; its maximum value is 7, so there can be at most 7 shared pool subpools. 2. Each subpool can in turn be divided into up to 4 sub-partitions (durations), e.g. sga heap(1,0), sga heap(1,1), sga heap(1,2) and sga heap(1,3). The number of subpools is driven by cpu_count and the shared pool size through _kghdsidx_count. With automatic SGA management in 10g the shared pool duration feature is enabled, and there are 4 durations: Session duration, Instance duration (never freed), Execution duration (freed fastest) and Free memory. Why do shared pool durations exist at all? They are tied to the shared pool shrink capability introduced in 10gR1 and to the way granules are exchanged with the Buffer Cache: a Buffer Cache granule carries a granule header and metadata (buffer headers and, in RAC, Lock Elements), so the shared pool keeps chunks of the same duration (lifetime) together in the same granule, which allows complete granules to be given up. 10gR2 changed how the granule header and buffer metadata (buffer headers/LEs) are placed relative to the Buffer Cache granule, and shared pool chunks of the same duration are still stored in the same granule, which makes moving granules between the buffer cache and the shared pool much easier. In 10gR2 the streams pool behaves in a similar way (hence the streams pool duration). reference : http://www.oracledatabase12g.com/archives/understanding-automatic-sga-memory-management.html

    Read the article

  • Conky to Monitor WLS Managed Servers

    - by John Graves
    I've been using a little utility on my linux-based machines for years called conky.  It can be used to monitor system resources, but I wanted to modify it to monitor my WebLogic managed servers too. Once you've installed conky, you'll need to update the .conkyrc file.  Here is a simple example. Basically, the important lines are these:
- Admin (7001) ${if_empty ${exec /usr/sbin/lsof -i :7001 | grep LISTEN}}${color red}DOWN${color} ${else}${color green} UP ${color}(${tcp_portmon 7001 7001 count}) ${endif}
- OSB (8011) ${if_empty ${exec /usr/sbin/lsof -i :8011 | grep LISTEN}}${color red}DOWN${color} ${else}${color green} UP ${color}(${tcp_portmon 8011 8011 count}) ${endif}
- BAM (9001) ${if_empty ${exec /usr/sbin/lsof -i :9001 | grep LISTEN}}${color red}DOWN${color} ${else}${color green} UP ${color}(${tcp_portmon 9001 9001 count}) ${endif}
- DB (1521) ${if_empty ${exec /usr/sbin/lsof -i :1521 | grep LISTEN}}${color red}DOWN${color} ${else}${color green} UP ${color}(${tcp_portmon 1521 1521 count}) ${endif}
It uses lsof to find out if ports are in use. Here is a video showing it in action.
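If you want to check the probe by hand before wiring it into conky, the same lsof call can be run directly from a shell, and conky can be pointed at a configuration file in a non-default location (the path below is illustrative):

  /usr/sbin/lsof -i :7001 | grep LISTEN    # prints a line only while the Admin server is listening on 7001
  conky -c /path/to/conkyrc                # start conky with an explicit configuration file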

    Read the article

  • Q&A: What is the UK pricing for the Windows Azure CDN?

    - by Eric Nelson
    The pricing for the Windows Azure Content Delivery Network (CDN) was announced last week. The prices are:
- £0.091 per GB transferred from North America & Europe locations
- £0.1213 per GB transferred from other locations
- £0.0061 per 10,000 transactions
CDN rates are effective for all billing periods that begin subsequent to June 30, 2010. All usage for billing periods beginning prior to July 1, 2010 will not be charged. To help you determine which pricing plan best suits your needs, please review the comparison table, which includes the CDN information. Steven Nagy has also done an interesting follow-up post on the CDN. Related Links: Q&A- How can I calculate the TCO and ROI when considering the Windows Azure Platform? Q&A- When do I get charged for compute hours on Windows Azure? Q&A- What are the UK prices for the Windows Azure Platform
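As a rough worked example of how these rates combine (the traffic figures here are made up for illustration, not Microsoft guidance): serving 500 GB from European edge locations with 2,000,000 requests in a billing period would cost about 500 × £0.091 + (2,000,000 / 10,000) × £0.0061 = £45.50 + £1.22 ≈ £46.72. The same calculation as a small sketch in code:

    // Hypothetical estimator for the UK CDN rates quoted above; not an official calculator.
    using System;

    public static class CdnCostEstimator
    {
        public static decimal Estimate(decimal gbFromEurope, decimal gbFromOther, long transactions)
        {
            const decimal europeRatePerGb = 0.091m;   // per GB, North America & Europe locations
            const decimal otherRatePerGb  = 0.1213m;  // per GB, other locations
            const decimal ratePer10kTx    = 0.0061m;  // per 10,000 transactions

            return gbFromEurope * europeRatePerGb
                 + gbFromOther * otherRatePerGb
                 + (transactions / 10000m) * ratePer10kTx;
        }

        public static void Main()
        {
            // 500 GB from Europe, nothing from other regions, 2 million transactions
            Console.WriteLine(Estimate(500m, 0m, 2000000)); // roughly 46.72
        }
    }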

    Read the article

< Previous Page | 223 224 225 226 227 228 229 230 231 232 233 234  | Next Page >