Search Results

Search found 25794 results on 1032 pages for 'enterprise service automation'.

Page 49/1032 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • is there a specialized educational institution in enterprise software design ?

    - by dfafa
    Is a software engineering degree sufficient for learning to design efficient code in an enterprise architecture? That's what I want to do. Some people go to specialized schools (such as Vancouver Film School) to make games or work in that industry; are there similar programs for enterprise software design and development? Are there dedicated courses in the Java EE and .NET space, and is it better to focus on Java alone or on both? My ultimate goal is to consult and develop enterprise software independently, but right now I am starting school and just keep learning on the side. Any guidance to resources on this industry, or your own insights, would be appreciated. Thank you.

    Read the article

  • Do I need to buy Mysql cluster enterprise edition?

    - by Arman
    Hello, we have an MS SQL Server 2008 Standard Edition instance. The database has become huge, about 8×10^9 records, and the database files are about 4.5 TB each. We cannot afford Enterprise Edition just to partition the database, but we need partitioning, so the idea is to use MySQL Cluster with many data nodes. We have already started to move the data. Do we need to buy a license for MySQL Cluster, and is there a performance difference between the Community Edition and the commercial one? Thanks, Arman.
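
    A minimal sketch of what partitioning looks like on MySQL Cluster, assuming a hypothetical orders table (NDB tables are distributed across the data nodes automatically, so the explicit PARTITION BY clause is shown only to make the behaviour visible):

      -- Hypothetical schema for illustration only; adjust the columns and key to your data.
      CREATE TABLE orders (
          order_id   BIGINT NOT NULL,
          created_at DATETIME NOT NULL,
          amount     DECIMAL(12,2),
          PRIMARY KEY (order_id)
      )
      ENGINE=NDBCLUSTER               -- stores the table on the cluster data nodes
      PARTITION BY KEY (order_id);    -- rows are hashed across the data nodes by this key

    The licensing question itself is best answered by Oracle's MySQL sales or the MySQL Cluster licensing FAQ; the sketch above only shows the partitioning mechanics the poster is after.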

    Read the article

  • What happens when Oracle's Enterprise Single-Sign-On database goes down? [migrated]

    - by Unai
    We're working on setting up Oracle's Enterprise Single Sign-On with high availability. At the moment every component provides HA except our database backend (i.e. we have just one instance). While conducting some kick-the-plug tests we learned that the ESSO system keeps working even with the database turned OFF. This was a nice surprise, but now we need to understand the implications of a database failure: sure, the sessions might be handled on the application servers and the policies might have been cached, but for how long? How big is this cache? What exactly is the role of the database? I would appreciate it if anyone could share their experience and/or point to documentation that covers this. Thank you so much.

    Read the article

  • Must I have Exchange to use Blackberry Enterprise Server Express?

    - by John Spaz
    In the past I've set up BES (not Express) for a company that just wanted their users on the corporate network; they didn't care about email or any other enterprise feature, they just wanted to push a policy so that the phones' internet traffic would be routed through the corporate network. I now want to set up BES Express for a customer who also just wants the phones on his network, but everywhere I look it says that BES Express requires Exchange. Is there a way to install BES Express without Exchange and without an AD domain? Basically, what the customer wants to accomplish is to be able to filter and log the internet access on the phones.

    Read the article

  • Using Oracle Enterprise Manager Ops Center to Update Solaris via Live Upgrade

    - by LeonShaner
    Introduction: This Oracle Enterprise Manager Ops Center blog entry provides tips for using Ops Center to update Solaris using Live Upgrade on Solaris 10 and Boot Environments on Solaris 11. Why use Live Upgrade? Live Upgrade (LU) can significantly reduce downtime associated with patching Live Upgrade avoids dropping to single-user mode for long periods of time during patching Live Upgrade relies on an Alternate Boot Environment (ABE)/(BE), which is patched while in multi-user mode; thereby allowing normal system operations to continue with the active BE, while the alternate BE is being patched Activating an newly patched (A)BE is essentially a reboot; therefore the downtime is ~= reboot Admins can easily revert to the prior Boot Environment (BE) as a safeguard / fallback. Why use Ops Center to patch via Live Upgrade, Alternate Boot Environments, and Solaris 11 equivalents? All the benefits of Ops Center's extensive patch and package knowledge base can be leveraged on top of Live Upgrade Ops Center can orchestrate patching based on Live Upgrade and Solaris 11 features, which all works together to minimize downtime Ops Centers advanced inventory and reporting features assurance that each OS is updated to a verifiable, consistent standard, rather than relying on ad-hoc (error prone) procedures and scripts Ops Center gives admins control over the boot environment specifications or they can let Ops Center decide when a BE is necessary, thereby reducing complexity and lowering the opportunity for user error Preparing to use Live Upgrade-like features in Solaris 11 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Solaris 11 has features which are similar in concept to Live Upgrade on Solaris 10, but differ greatly in implementationImportant distinctions: Solaris 11 assumes ZFS root Solaris 11 adds Boot Environments (BE's) as an integrated feature (see beadm) Solaris 11 BE's avoid single-user patching (vs. Solaris 10 w/ ZFS snapshot=ABE). Solaris 11 Image Packaging System (IPS) has hooks for BE creation, as needed Solaris 11 allows pkgs to be installed + upgraded in alternate BE (e.g. instead of the live system) but it is controlled on a per-pkg basis Boot Environments are activated across a reboot; instead of spending long periods installing + upgrading packages in single user mode. Fallback to a prior BE is a function of the BE infrastructure (a la beadm). (Generally) Reboot + BE activation can be much much faster on Solaris 11 Preparing to use Live Upgrade on Solaris 10 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Live Upgrade Pre-requisite patches must be applied before the first Live Upgrade Alternate Boot Environments are created (see "Pre-requisite Patches" section, below...) Solaris 10 Update 6 or newer on ZFS root is the practical starting point for Live Upgrade Live Upgrade with ZFS root is far more straight-forward than any scheme based on Alternative Boot Environments in slices or temporarily breaking mirrors Use Solaris best practices to upgrade the OS to at least Solaris 10 Update 4 (outside of Ops Center) UFS root can (technically) be used, but it is significantly more involved (e.g. 
discouraged) -- there are many reasons to move to ZFS while going through the process to update to Solaris 10 Update 6 or newer (out side of Ops Center) Recommendation: Start with Solaris 10 Update 6 or newer on ZFS root Recommendation: Start with Ops Center 12c or newer Ops Center 12c can automatically create your ABE's for you, without the need for custom scripts Ops Center 12c Update 2 avoids kernel panic on unpatched Solaris 10 update 9 (and older) -- unrelated to Live Upgrade, but more on the issue, below. NOTE: There is no magic!  If you have systems running Solaris 10 Update 5 or older on UFS root, and you don't know how to get them updated to Solaris 10 on ZFS root, then there are services available from Oracle Advanced Customer Support (ACS), which specialize in this area. Live Upgrade Pre-requisite Patches (Solaris 10) Certain Live Upgrade related patches must be present before the first Live Upgrade ABE's are created on Solaris 10.Use the following MOS Search String to find the “living document” that outlines the required patch minimums, which are necessary before using any Live Upgrade features: Solaris Live Upgrade Software Patch Requirements(Click above – the link is valid as of this writing, but search in MOS for the same "Solaris Live Upgrade Software Patch Requirements" string if necessary) It is a very good idea to check the document periodically and adapt to its contents, accordingly.IMPORTANT:  In case it wasn't clear in the above document, some direct patching of the active OS, including a reboot, may be required before Live Upgrade can be successfully used the first time.HINT: You can use Ops Center to determine what to expect for a given system, and to schedule the “pre-patching” during a maintenance window if necessary. Preparing to use Ops Center Discover + Manage (Install + Configure the Ops Center agent in) each Global Zone Recommendation:  Begin by using OCDoctor --agent-prereq to determine whether OS meets OC prerequisites (resolve any issues) See prior requirements and recommendations w.r.t. starting with Solaris 10 Update 6 or newer on ZFS (or at least Solaris 10 Update 4 on UFS, with caveats) WARNING: Systems running unpatched Solaris 10 update 9 (or older) should run the Ops Center 12c Update 2 agent to avoid a potential kernel panic The 12c Update 2 agent will check patch minimums and disable certain process accounting features if the kernel is not sufficiently patched to avoid the panic SPARC: 142900-05 Obsoleted by: 142900-06 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142901-05 Obsoleted by: 142901-06 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) OR SPARC: 142909-17 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142910-17 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) Ops Center 12c (initial release) and 12c Update 1 agent can also be safely used with a workaround (to be performed BEFORE installing the agent): # mkdir -p /etc/opt/sun/oc # echo "zstat_exacct_allowed=false" > /etc/opt/sun/oc/zstat.conf # chmod 755 /etc/opt/sun /etc/opt/sun/oc # chmod 644 /etc/opt/sun/oc/zstat.conf # chown -Rh root:sys /etc/opt/sun/oc NOTE: Remove the above after patching the OS sufficiently, or after upgrading to the 12c Update 2 agent Using Ops Center to apply Live Upgrade-related Pre-Patches (Solaris 10)Overview: Create an OS Update Profile containing the minimum LU-related pre-patches, based on the Solaris Live Upgrade Software Patch Requirements, previously mentioned. 
SIMULATE the deployment of the LU-related pre-patches Observe whether any of the LU-related pre-patches will require a reboot The job details for each Global Zone will advise whether a reboot step will be required ACTUALLY deploy the LU-related pre-patches, according to your change control process (e.g. if no reboot, maybe okay to do now; vs. must do later because of the reboot). You can schedule the job to occur later, during a maintenance window Check the job status for each node, resolving any issues found Once the LU-related pre-patches are applied, you can Ops Center to patch using Live Upgrade on Solaris 10 Using Ops Center to patch Solaris 10 with LU/ABE's -- the GOODS!(this is the heart of the tip): Create an OS Update Profile containing the patches that make up your standard build Use Solaris Baselines when possible Add other individual patches as needed ACTUALLY deploy the OS Update Profile Specify the appropriate Live Upgrade options, e.g. Synchronize the active BE to the alternate BE before patching Do not activate the BE after patching Check the job status for each node, resolving any issues found Activate the newly patched BE according to your change control process Activate = Reboot to the ABE, making the ABE the new active BE Ops Center does not separate LU activate from reboot, so expect a reboot! Check the job status for each node, resolving any issues found Examples (w/Screenshots) Solaris 10 and Live Upgrade: Auto-Create the Alternate Boot Environment (ZFS root only) ABE to be created on ZFS with name S10_12_07REC (Example) Uses built in feature to call “lucreate -n S10_12_07REC” behind scenes if not already present NOTE: Leave “lucreate” params blank (if you do specify options, the will be appended after -n $ABEName) Solaris 10 and Live Upgrade: Alternate Boot Environment Creation via Operational Profile (script) The Alternate Boot Environment is to be created via custom, user-supplied script, which does whatever is needed for the system where Live Upgrade will be used. Operational Profile, which provides the script to create an ABE: Very similar to the automatic case, but with a Script (Operational Profile), which is used to create the ABE Relies on user-supplied script in the form of an Operational Profile Could be used to prepare an ABE based on a UFS root in a slice, or on a separate device (e.g. by breaking a mirror first) – it is up to the script author to do the right thing! EXAMPLE: Same result as the ZFS case, but illustrating the Operational Profile (e.g. script) approach to call: # lucreate -n S10_1207REC NOTE: OC special variable is $ABEName Boot Environment Profile, which references the Operational Profile Script = Operational Profile on this screen Refers to Operational Profile shown in the previous section The user-supplied S10_Create_BE Operational Profile will be run The Operational Profile must send a non-zero exit code if there is a problem (so that the OS Update job will not proceed) Solaris 10 OS Update Profile (to provide the actual patch specifications) Solaris 10 Baseline “Recommended” chosen for “Install” Solaris 10 OS Update Plan (two-steps in this case) “Create a Boot Environment” + “Update OS” are chosen. 
Using Ops Center to patch Solaris 11 with Boot Environments (as needed) Create a Solaris 11 OS Update Profile containing the packages that make up your standard build ACTUALLY deploy the Solaris 11 OS Update Profile BE will be created if needed (or you can stipulate no BE) BE name will be auto-generated (if needed), or you may specify a BE name Check the job status for each node, resolving any issues found Check if a BE was created; if so, activate the new BE Activate = Reboot to the BE, making the new BE the active BE Ops Center does not separate BE activate from reboot NOTE: Not every Solaris 11 OS Update will require a new BE, so a reboot may not be necessary. Solaris 11: Auto BE Create (as Needed -- let Ops Center decide) BE to be created as needed BE to be named automatically Reboot (if necessary) deferred to separate step Solaris 11: OS Profile Solaris 11 “entire” chosen for a particular SRU Solaris 11: OS Update Plan (w/BE)  “Create a Boot Environment” + “Update OS” are chosen. Summary: Solaris 10 Live Upgrade, Alternate Boot Environments, and their equivalents on Solaris 11 can be very powerful tools to help minimize the downtime associated with updating your servers.  For very old Solaris, there are some important prerequisites to adhere to, but once the initial preparation is complete, Live Upgrade can be used going forward.  For Solaris 11, the built-in Boot Environment handling is leveraged directly by the Image Packaging System, and the result is a much more straight forward way to patch, and far fewer prerequisites to satisfy in getting there.  Ops Center simplifies using either approach, and helps you improve consistency from system to system, which ultimately helps you improve the overall up-time across all the Solaris systems in your environment. Please let us know what you think?  Until next time...\Leon-- Leon Shaner | Senior IT/Product ArchitectSystems Management | Ops Center Engineering @ Oracle The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to Oracle Enterprise Manager  web page or  follow us at :  Twitter | Facebook | YouTube | Linkedin | Newsletter
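
    For readers who want to see what Ops Center orchestrates behind the scenes, here is a rough sketch of the equivalent manual Live Upgrade flow on Solaris 10 with a ZFS root. The BE name, patch directory, and patch IDs are placeholders; always follow the patch READMEs and the prerequisite document mentioned above.

      # Create the alternate boot environment (ABE) as a clone of the active ZFS root BE
      lucreate -n S10_patched

      # Apply patches to the ABE while the system stays in multi-user mode
      luupgrade -t -n S10_patched -s /var/tmp/patches 142909-17 145404-02

      # Mark the ABE as the one to boot next, then reboot with init (not "reboot")
      luactivate S10_patched
      init 6

      # Afterwards, lustatus shows which BE is active; the previous BE remains as a fallback
      lustatus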

    Read the article

  • First Step Towards Rapid Enterprise Application Deployment

    - by Antoinette O'Sullivan
    Take Oracle VM Server for x86 training as a first step towards deploying enterprise applications rapidly. You have a choice between the following instructor-led courses:
    - Oracle VM with Oracle VM Server for x86, a 1-day seminar. Take this course from your own desk at one of the 300 events on the schedule. This seminar shows you how to build a virtualization platform using Oracle VM Manager and Oracle VM Server for x86, and how to sustain the deployment of highly configurable, interconnected virtual machines.
    - Oracle VM Administration: Oracle VM Server for x86, a 3-day hands-on course. This course teaches you how to build a virtualization platform using Oracle VM Manager and Oracle VM Server for x86, and how to deploy and manage highly configurable, interconnected virtual machines. It also covers installing and configuring Oracle VM Server for x86, along with the details of network and storage configuration, pool and repository creation, and virtual machine management. Take this course from your own desk at one of the 450 events on the schedule, or in an Oracle classroom at one of the following events:

      Location                    Date              Delivery Language
      Istanbul, Turkey            12 November 2012  Turkish
      Wellington, New Zealand     10 December 2012  English
      Roseville, United States    19 November 2012  English
      Warsaw, Poland              17 October 2012   Polish
      Paris, France               17 October 2012   French
      Paris, France               21 November 2012  French
      Dusseldorf, Germany         5 November 2012   German

    For more information on Oracle's virtualization courses, see http://oracle.com/education/vm

    Read the article

  • Oracle Congratulates Winners of the 2012 Oracle Excellence Award: Eco-Enterprise Innovation

    - by Evelyn Neumayr
    Oracle recently held its fifth annual Eco-Enterprise Innovation awards ceremony during Oracle OpenWorld in San Francisco. Oracle Chairman of the Board Jeff Henley presented awards to select customers for their use of Oracle products to help with their sustainability initiatives. During this session, several award recipients discussed how they embedded various sustainability strategies throughout their organizations to help reduce their costs as well as their environmental footprint. It was an interesting session based on green best business practices and how Oracle products enabled many of these customers' sustainability efforts. The winning customers for 2012 are: Dena Bank, Earth Rangers Centre, Grupo Pão de Açúcar, Health Authority – Abu Dhabi, Korean Air, North County Transit District, Orlando Utilities Commission, Ricoh – Europe, Schneider Electric, Severn Trent Water, and Terracap. Several of these winning customers also selected a partner to co-accept the award with them; these partners played a major role in helping the customers achieve their sustainability-related goals. Oracle also awarded Ian Winham, Executive Vice President and Chief Financial Officer of Ricoh Europe, with Oracle's Chief Sustainability Officer of the Year award. Ricoh Europe is a multinational imaging and electronics company with a strong commitment to sustainability. Ian was honored for his leadership in reducing Ricoh's environmental impact by leveraging Oracle's applications and underlying technology. See here for more details.

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the correpsonding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package. set.seed(1) d <- data.frame(year = rep(2000:2014, each = 3),         count = round(runif(45, 0, 20))) dim(d) library(plyr) This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame. # Example 1 res <- ddply(d, "year", function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(cv.count = cv)   }) To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame. D <- ore.push(d) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(year=x$year[1], cv.count = cv)   }, FUN.VALUE=data.frame(year=1, cv.count=1)) You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same, however the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings whe data is large. R> class(res) [1] "ore.frame" attr(,"package") [1] "OREbase" R> head(res)    year cv.count 1 2000 0.3984848 2 2001 0.6062178 3 2002 0.2309401 4 2003 0.5773503 5 2004 0.3069680 6 2005 0.3431743 To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or to a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year. 
# Example 2 ddply(d, "year", summarise, mean.count = mean(count)) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   data.frame(year=x$year[1], mean.count = mean.count)   }, FUN.VALUE=data.frame(year=1, mean.count=1)) R> head(res)    year mean.count 1 2000 7.666667 2 2001 13.333333 3 2002 15.000000 4 2003 3.000000 5 2004 12.333333 6 2005 14.666667 Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicilty, which is returned as an ore.frame. # Example 3 ddply(d, "year", transform, total.count = sum(count)) res <- ore.groupApply (D, D$year, function(x) {   total.count <- sum(x$count)   data.frame(year=x$year[1], count=x$count, total.count = total.count)   }, FUN.VALUE=data.frame(year=1, count=1, total.count=1)) > head(res)    year count total.count 1 2000 5 23 2 2000 7 23 3 2000 11 23 4 2001 18 40 5 2001 4 40 6 2001 18 40 In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns. # Example 4 ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),       cv = sigma/mu) res <- ore.groupApply (D, D$year, function(x) {   mu <- mean(x$count)   sigma <- sd(x$count)   cv <- sigma/mu   data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)   }, FUN.VALUE=data.frame(year=1, count=1, mu=1,sigma=1,cv=1)) R> head(res)    year count mu sigma cv 1 2000 5 7.666667 3.055050 0.3984848 2 2000 7 7.666667 3.055050 0.3984848 3 2000 11 7.666667 3.055050 0.3984848 4 2001 18 13.333333 8.082904 0.6062178 5 2001 4 13.333333 8.082904 0.6062178 6 2001 18 13.333333 8.082904 0.6062178 In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data. # Example 5 baseball.dat <- subset(baseball, year > 2000) # data from the plyr package x <- ddply(baseball.dat, c("year", "team"), summarize,            homeruns = sum(hr)) We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows. BB.DAT <- ore.push(baseball) BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+")) BB.DAT2 <- subset(BB.DAT, year > 2000) X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {   data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))   }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE) res <- ore.sort(X, by=c("year","team")) R> head(res)    year team homeruns 1 2001 ANA 4 2 2001 ARI 155 3 2001 ATL 63 4 2001 BAL 58 5 2001 BOS 77 6 2001 CHA 63 Our next example is derived from the ggplot function documentation. This illustrates the use of ddply within using the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database and invoke this on the database server. The graph will be returned to the client window, as depicted below. 
# Example 6 with ggplot2 library(ggplot2) df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),                  y = rnorm(30)) # Compute sample mean and standard deviation in each group library(plyr) ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y)) # Set up a skeleton ggplot object and add layers: ggplot() +   geom_point(data = df, aes(x = gp, y = y)) +   geom_point(data = ds, aes(x = gp, y = mean),              colour = 'red', size = 3) +   geom_errorbar(data = ds, aes(x = gp, y = mean,                                ymin = mean - sd, ymax = mean + sd),              colour = 'red', width = 0.4) DF <- ore.push(df) ore.tableApply(DF, function(df) {   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4) }) But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element. df2 <- rbind(df,df,df) df2$y <- df2$y + rnorm(nrow(df2)) df2$index <- c(rep(1,300), rep(2,300), rep(3,300)) DF2 <- ore.push(df2) res <- ore.groupApply(DF2, DF2$index, function(df) {   df <- df[,1:2]   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4)   }, parallel=TRUE) res[[1]] res[[2]] res[[3]] To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • Use Enterprise Manager Cloud Control to monitor OBIEE 11.1.1.7.x Dashboards

    - by Torben Hein -Oracle
    (in via Senthil) If your OBIEE 11.1.1.7.x environment is set up in the following way:
    - The OBIEE repository is an Oracle Database and is set up as a data warehouse.
    - Usage tracking is enabled in OBIEE. (For information on how to enable usage tracking in OBIEE, refer to the following link: Setting Up Usage Tracking in Oracle BI 11g.)
    - The OBIEE instance is discovered in EM Cloud Control. (For information on how to discover an OBIEE instance in Cloud Control, refer to the following link: Discovering Oracle Business Intelligence Instance and Oracle Essbase Targets.)
    - The OBIEE repository is discovered in EM Cloud Control. (For information on how to discover an Oracle database, refer to the following link: Discovering, Promoting, and Adding Database Targets.)
    then we've got news for you: the KM article "OBIEE 11g: How To Diagnose Slowly Performing Dashboards using Enterprise Manager Cloud Control" (Doc ID 1668236.1) takes you step by step through monitoring the SQL query performance behind your OBIEE dashboards. This diagnostic approach:
    - will help you piece together information on BI dashboard performance, e.g. processing time from the different layers of the BI system, including the repository;
    - should enable you to get to the bottom of slow dashboards by using the wealth of information available in EM Cloud Control on OBIEE and the Oracle database;
    - will NOT fix any performance issues on its own, but will help identify bottlenecks while processing dashboard requests.
    (layout and post: Torben, authorized: Lia)
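
    As a rough illustration of the kind of data usage tracking exposes (independently of Cloud Control), a query along these lines against the usage tracking table can surface the slowest dashboards. S_NQ_ACCT is the default usage tracking table, but the exact table and column names depend on your version and setup, so treat the names below as assumptions to verify:

      -- Assumed default usage-tracking schema; verify table and column names in your environment.
      SELECT user_name,
             saw_dashboard,
             ROUND(AVG(total_time_sec), 1) AS avg_secs,
             COUNT(*)                      AS executions
        FROM s_nq_acct
       WHERE start_ts > SYSDATE - 7        -- last seven days
       GROUP BY user_name, saw_dashboard
       ORDER BY avg_secs DESC;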

    Read the article

  • Oracle Enterprise Content Management 11gR1 Patch Set 3 Released

    - by michelle.huff
    We're pleased to announce an updated patch set for Oracle Enterprise Content Management 11gR1: Patch Set 3 (11.1.1.4.0). Patch Set 3 (PS3) supports additional platforms and applications, and adds several new features to the products. Highlights include:
    - Content Server (repository for UCM, URM & I/PM): new security capabilities and file store provider updates.
    - Desktop Integration Suite: Windows 7 64-bit and Office 2010 (32 & 64-bit) support and a new "Recent Content Items" menu.
    - Universal Content Management (UCM): Site Studio Manager for Site Studio for External Applications, new template management options, and the ability to run Site Studio & Site Studio for External Applications 11g components on Content Server 10gR3.
    - Imaging and Process Management (I/PM): now certified with Oracle Business Process Management (BPM) 11g, Oracle Single Sign On (OSSO) 10g, and Oracle Access Manager (OAM) 10g; export search results to Microsoft Excel.
    - ECM Adapter for PeopleSoft: support for UCM 11g Managed Attachments (support for 10g released earlier in 2010) and certification with PeopleTools 8.50.
    - Information Rights Management (IRM): desktop support for Microsoft Office 2010, Adobe Reader X, and Microsoft SharePoint 2010.
    Customer Webcast: We'll be covering this new release in our Quarterly Customer Update Webcast scheduled for this week, January 19/20, 2011. Register today.
    More Information: Downloads are now available on Oracle Technology Network (OTN); the release will be available via eDelivery soon. Read the updated ECM documentation for 11.1.1.4.0, review the ECM 11.1.1.4.0 Upgrade & Patch Guides, and see the Release Notes.

    Read the article

  • Instructor Insight: Dealing with Columns in Oracle JD Edwards Enterprise One Tools Release 9.1

    - by Breanne Cooley
    Oracle JD Edwards EnterpriseOne Tools Release 9.1 has many new features that will help end users be more efficient in their daily jobs. For example, hiding grid columns is now as easy as a mouse click. In earlier releases, users could click on the ‘Customize Grid’ link but still had to make several more clicks to hide or show a column. The following example shows how easy this new feature is to use. First, right-click on the column you want to hide, for example the ‘Long Address’ column. The column is now hidden. Second, right-click on any of the columns to show the ‘Unhide’ option; after you select ‘Unhide’, you can pick the hidden column you want to show again. This new feature and others are covered in the JD Edwards EnterpriseOne System Administration Rel 9.x course, which has been updated to reflect the new release. Hope to see you in class! -Randy Richeson, Senior Principal Instructor, Oracle University

    Read the article

  • What is an Enterprise Resource Planning (ERP) System?

    In order to understand what an Enterprise Resource Planning (ERP) system is, let us look at a classic American kids' snack, the Rice Krispie Treat, and conceptually view the treat as a company's internal applications as a whole. We can view the company's departmentalized software applications as the individual Rice Krispies in the treat; each Krispie consists of a combination of ingredients that can be broken down into data, user interfaces, and business logic. Next, we have the margarine or butter that helps the marshmallows bind with the Rice Krispies; in our conceptual view this role is played by a data source, typically a relational database management system. Finally, we have the melted marshmallows, which act as the ERP software that connects all of the individual departmental applications into one unified system, allowing users to interact with all of the dispersed systems through a single interface. For example, when a customer places an order with a telephone operator, once the order is processed an employee in the shipping department can see the order ready for fulfillment on his order screen. The ERP acts as a go-between for the various independent departmental systems so that they can integrate with one another.

    Read the article

  • The Enterprise Side of JavaFX: Part Two

    - by Janice J. Heiss
    A new article, part of a three-part series, now up on the front page of otn/java, by Java Champion Adam Bien, titled “The Enterprise Side of JavaFX,” shows developers how to implement the LightView UI dashboard with JavaFX 2. Bien explains that “the RESTful back end of the LightView application comes with a rudimentary HTML page that is used to start/stop the monitoring service, set the snapshot interval, and activate/deactivate the GlassFish monitoring capabilities.”He explains that “the configuration view implemented in the org.lightview.view.Browser component is needed only to start or stop the monitoring process or set the monitoring interval.”Bien concludes his article with a general summary of the principles applied:“JavaFX encourages encapsulation without forcing you to build models for each visual component. With the availability of bindable properties, the boundary between the view and the model can be reduced to an expressive set of bindable properties. Wrapping JavaFX components with ordinary Java classes further reduces the complexity. Instead of dealing with low-level JavaFX mechanics all the time, you can build simple components and break down the complexity of the presentation logic into understandable pieces. CSS skinning further helps with the separation of the code that is needed for the implementation of the presentation logic and the visual appearance of the application on the screen. You can adjust significant portions of an application's look and feel directly in CSS files without touching the actual source code.”Check out the article here.
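
    As a small, self-contained illustration of the "expressive set of bindable properties" idea (this is not code from the LightView application itself), a view can bind a control directly to a model property, so updating the model updates the screen without any listener plumbing:

      import javafx.application.Application;
      import javafx.beans.property.SimpleStringProperty;
      import javafx.beans.property.StringProperty;
      import javafx.scene.Scene;
      import javafx.scene.control.Label;
      import javafx.scene.layout.StackPane;
      import javafx.stage.Stage;

      public class BindingDemo extends Application {
          // The "model" reduced to a single bindable property.
          private final StringProperty status = new SimpleStringProperty("starting...");

          @Override
          public void start(Stage stage) {
              Label label = new Label();
              label.textProperty().bind(status);      // the view follows the model automatically

              StackPane root = new StackPane();
              root.getChildren().add(label);
              stage.setScene(new Scene(root, 240, 80));
              stage.show();

              status.set("monitoring 3 servers");     // no listener code needed to refresh the UI
          }

          public static void main(String[] args) {
              launch(args);
          }
      }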

    Read the article

  • RequireJS: JavaScript for the Enterprise

    - by Geertjan
    I made a small introduction to RequireJS via some of the many cool new RequireJS features in NetBeans IDE. I believe RequireJS, and the modularity, encapsulation, and loading solutions that it brings, provides the tools needed for creating large JavaScript applications, i.e., enterprise JavaScript applications. (Sorry for the wobbly sound in the above.) An interesting comment by my colleague John Brock on the above: One other advantage that RequireJS brings is lazy loading of resources. In your first example, every one of those .js files is loaded when the first file is loaded in the browser. By using the require() call in your modules, your application will only load the JavaScript modules when they are actually needed. That makes for faster startup in large applications. You could show this by showing the libraries that are loaded in the Network Monitor window. So I did as suggested: click the screenshot to enlarge it and notice how the Network Monitor is helpful in the context of RequireJS troubleshooting.
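
    A minimal sketch of the modularity and lazy-loading point John makes (the module names and paths are invented for illustration): a module declares its dependencies with define(), and a later require() call pulls extra modules in only when they are actually needed:

      // widgets/chart.js -- a module that declares its own dependencies
      define(['lib/d3'], function (d3) {
          function Chart(el) { this.el = el; }
          Chart.prototype.render = function (data) { /* draw with d3 */ };
          return Chart;
      });

      // main.js -- nothing chart-related is downloaded until the user asks for it
      require(['app/dashboard'], function (dashboard) {
          dashboard.onShowChart(function () {
              require(['widgets/chart'], function (Chart) {   // lazy-loaded on demand
                  new Chart(document.getElementById('chart')).render(dashboard.data());
              });
          });
      });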

    Read the article

  • C# WCF - Failed to invoke the service.

    - by Keith Barrows
    I am getting the following error when trying to use the WCF Test Client to hit my new web service. What is weird is every once in awhile it will execute once then start popping this error. Failed to invoke the service. Possible causes: The service is offline or inaccessible; the client-side configuration does not match the proxy; the existing proxy is invalid. Refer to the stack trace for more detail. You can try to recover by starting a new proxy, restoring to default configuration, or refreshing the service. My code (interface): [ServiceContract(Namespace = "http://rivworks.com/Services/2010/04/19")] public interface ISync { [OperationContract] bool Execute(long ClientID); } My code (class): public class Sync : ISync { #region ISync Members bool ISync.Execute(long ClientID) { return model.Product(ClientID); } #endregion } My config (EDIT - posted entire serviceModel section): <system.serviceModel> <diagnostics performanceCounters="Default"> <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" /> </diagnostics> <serviceHostingEnvironment aspNetCompatibilityEnabled="false" /> <behaviors> <endpointBehaviors> <behavior name="JsonpServiceBehavior"> <webHttp /> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="SimpleServiceBehavior"> <serviceMetadata httpGetEnabled="True" policyVersion="Policy15"/> </behavior> <behavior name="RivWorks.Web.Service.ServiceBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true"/> </behavior> </serviceBehaviors> </behaviors> <services> <service name="RivWorks.Web.Service.NegotiateService" behaviorConfiguration="SimpleServiceBehavior"> <endpoint address="" binding="customBinding" bindingConfiguration="jsonpBinding" behaviorConfiguration="JsonpServiceBehavior" contract="RivWorks.Web.Service.NegotiateService" /> <!--<host> <baseAddresses> <add baseAddress="http://kab.rivworks.com/services"/> </baseAddresses> </host> <endpoint address="" binding="wsHttpBinding" contract="RivWorks.Web.Service.NegotiateService" />--> <endpoint address="mex" binding="mexHttpBinding" contract="RivWorks.Web.Service.NegotiateService" /> </service> <service name="RivWorks.Web.Service.Sync" behaviorConfiguration="RivWorks.Web.Service.ServiceBehavior"> <endpoint address="" binding="wsHttpBinding" contract="RivWorks.Web.Service.ISync" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <extensions> <bindingElementExtensions> <add name="jsonpMessageEncoding" type="RivWorks.Web.Service.JSONPBindingExtension, RivWorks.Web.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> </bindingElementExtensions> </extensions> <bindings> <customBinding> <binding name="jsonpBinding" > <jsonpMessageEncoding /> <httpTransport manualAddressing="true"/> </binding> </customBinding> </bindings> </system.serviceModel> 2 questions: What am I missing that causes this error? How can I increase the time out for the service? TIA!
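
    On the poster's second question (increasing the service timeout), one possible approach, sketched against the configuration above, is to define a named wsHttpBinding with longer timeouts and point the ISync endpoint at it via bindingConfiguration; the ten-minute values below are placeholders to tune:

      <bindings>
        <wsHttpBinding>
          <!-- Placeholder values; size them to the longest expected Execute() call -->
          <binding name="longRunningBinding"
                   openTimeout="00:01:00"
                   closeTimeout="00:01:00"
                   sendTimeout="00:10:00"
                   receiveTimeout="00:10:00" />
        </wsHttpBinding>
      </bindings>

      <service name="RivWorks.Web.Service.Sync" behaviorConfiguration="RivWorks.Web.Service.ServiceBehavior">
        <endpoint address="" binding="wsHttpBinding" bindingConfiguration="longRunningBinding"
                  contract="RivWorks.Web.Service.ISync" />
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
      </service>

    Remember that the WCF Test Client keeps its own proxy; after changing the configuration, refresh or restart it so the client-side settings match the service again.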

    Read the article

  • VB.Net plugin using Matlab COM Automation Server...Error: 'Could not load Interop.MLApp'

    - by Ben
    My Problem: I am using Matlab COM Automation Server to call and execute matlab .m files from a VB.Net plugin for a CAD program called Rhino 3D. The code works flawlessly when set up as a simple Windows Application in Visual Studio, but when I insert it (and make the requisite reference) into my .Net plugin and test it in the CAD program I get the following error: "Could not load file or assembly 'Interop.MLApp, Version 1.0.0.0, culture=neutral, PublicKeyToken=null' or one of its dependencies. the system cannot find the file specified." What I've Tried: I am baffled as to why this occurs, but I was able to contact the CAD program's technical support staff and they suggested that it has something to do with their DotNet SDK having trouble with references that are located far outside the CAD program directory. They didn't have any solutions so I tried playing around with copylocal and this made no difference. I tried using other COM libraries and the Open Office automation server works fine, although uses url's instead of requiring a reference. I also tested Excel, which does require a reference, and it returned the error: "retrieving the COM class factory for component with CLSID {...} failed due to the following error: 80040154." This may or may not be related to the issue with the Matlab COM reference, but I thought was worthwhile to share. Perhaps is there another way to reference Interop.MLApp? I would appreciate any suggestions or thoughts on how I might make the Matlab Interop.MLApp reference work. Best regards, Ben
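
    One workaround worth trying, assuming it is the plugin host that cannot resolve the interop assembly, is to drop the compile-time reference to Interop.MLApp entirely and drive the MATLAB Automation server through late binding. This is only a sketch; the "Matlab.Application" ProgID and the Execute call should be confirmed against your MATLAB version:

      ' Late-bound COM call: no Interop.MLApp reference has to ship with the plugin.
      ' Requires Option Strict Off so the member calls are resolved at run time.
      Dim matlab As Object = Nothing
      Try
          matlab = CreateObject("Matlab.Application")           ' assumed ProgID
          Dim output As String = CStr(matlab.Execute("x = magic(4); disp(x)"))
          Console.WriteLine(output)
      Finally
          If matlab IsNot Nothing Then
              matlab.Quit()                                     ' close the MATLAB instance
              System.Runtime.InteropServices.Marshal.ReleaseComObject(matlab)
          End If
      End Try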

    Read the article

  • Automation Error upon running VBA script in Excel

    - by brohjoe
    Hi guys, I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data to from Excel to SQL Server. The error I get is, "Run-time error '-2147217843(80040e4d)': Automation error". I checked out the MSDN site and it suggested that this may be due to a bug associated with the sqloledb provider and one way to mitigate this is to use ODBC. Well I changed the connection string to reflect ODBC provider and associated parameters and I'm still getting the same error. Here is the code with ODBC as the provider: Dim cnt As ADODB.Connection Dim rst As ADODB.Recordset Dim stSQL As String Dim wbBook As Workbook Dim wsSheet As Worksheet Dim rnStart As Range Public Sub loadData() 'This was set up using Microsoft ActiveX Data Components version 6.0. 'Create ADODB connection object, open connection and construct the connection string object. Set cnt = New ADODB.Connection cnt.ConnectionString = _ "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; Database=fooDB Uid=logonalready;Pwd='helpmeOB1';" cnt.Open On Error GoTo ErrorHandler 'Open Excel and run query to export data to SQL Server. strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0', & _ "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]" cnt.Execute (strSQL) 'Error handling. ErrorExit: 'Reclaim memory from the connection objects Set rst = Nothing Set cnt = Nothing Exit Sub ErrorHandler: MsgBox Err.Description, vbCritical Resume ErrorExit 'clean up and reclaim memory resources. cnt.Close If CBool(cnt.State And adStateOpen) Then Set rst = Nothing Set cnt = Nothing End If End Sub
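
    Two things in the posted code are worth ruling out before blaming the provider: the connection string is missing the semicolon after the Database value and wraps the password in single quotes, and the strSQL concatenation drops a closing quote before the line continuation. A cleaned-up sketch (server, database, and credentials are the placeholders from the original post):

      ' Connection string: note the ";" after Database= and no quotes around the password.
      cnt.ConnectionString = _
          "Driver={SQL Server};Server=onlineSQLServer2010.foo.com;" & _
          "Database=fooDB;Uid=logonalready;Pwd=helpmeOB1;"
      cnt.Open

      ' Keep the quotes balanced across the line continuation in the OPENDATASOURCE call.
      strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0'," & _
               "'Data Source=C:\Database.xlsx;Extended Properties=Excel 12.0')...[SalesOrders$]"
      cnt.Execute strSQL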

    Read the article

  • Automation Error when exporting Excel data to SQL Server

    - by brohjoe
    I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data to from Excel to SQL Server. The error I get is, "Run-time error '-2147217843(80040e4d)': Automation error". I checked out the MSDN site and it suggested that this may be due to a bug associated with the sqloledb provider and one way to mitigate this is to use ODBC. Well I changed the connection string to reflect ODBC provider and associated parameters and I'm still getting the same error. Here is the code with ODBC as the provider: Dim cnt As ADODB.Connection Dim rst As ADODB.Recordset Dim stSQL As String Dim wbBook As Workbook Dim wsSheet As Worksheet Dim rnStart As Range Public Sub loadData() 'This was set up using Microsoft ActiveX Data Components version 6.0. 'Create ADODB connection object, open connection and construct the connection string object. Set cnt = New ADODB.Connection cnt.ConnectionString = "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; Database=fooDB;Uid=logonalready;Pwd='helpmeOB1';" cnt.Open On Error GoTo ErrorHandler 'Open Excel and run query to export data to SQL Server. strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0'," & _ "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]" cnt.Execute (strSQL) 'Error handling. ErrorExit: 'Reclaim memory from the connection objects Set rst = Nothing Set cnt = Nothing Exit Sub ErrorHandler: MsgBox Err.Description, vbCritical Resume ErrorExit 'clean up and reclaim memory resources. If CBool(cnt.State And adStateOpen) Then Set rst = Nothing Set cnt = Nothing cnt.Close End If End Sub

    Read the article

  • Web Automation Tool

    - by Aaron
    I've realized I need a full-fledged browser automation tool for testing user interactions with our JavaScript widget library. I was using QUnit, starting with unit testing, and then I unwisely started incorporating more and more functional tests. That was a bad idea: trying to simulate a lot of user actions with JavaScript. The timing issues have gotten out of control and have made the suite too brittle; now I spend more time fixing the tests than I do developing. Is it possible to find a browser automation tool that works on Windows XP (IE6, 7, 8, FF3) and OS X (Safari, FF3)? I've looked into Selenium IDE and RC, but there seem to be some problems with IE8. I've also seen some things about Google's WebDriver, which confusingly seems to work with Selenium. Our organization has licenses for IBM's Rational Functional Tester, but I don't think that will work on the Mac. The idea is to try to run tests on all the browsers our organization supports. Doable? Are my requirements unrealistic? Any recommendations as far as software to try? Thanks!
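
    In case it helps to see the shape of such a test, here is a tiny sketch using Selenium WebDriver in Java; the page URL and element ids are invented, and the same test can be pointed at IE, Firefox, or Safari by swapping the driver class:

      import org.openqa.selenium.By;
      import org.openqa.selenium.WebDriver;
      import org.openqa.selenium.firefox.FirefoxDriver;

      public class WidgetSmokeTest {
          public static void main(String[] args) {
              WebDriver driver = new FirefoxDriver();   // or InternetExplorerDriver, SafariDriver, ...
              try {
                  driver.get("http://localhost:8080/widget-demo.html");    // hypothetical test page
                  driver.findElement(By.id("open-dialog")).click();        // hypothetical widget ids
                  boolean shown = driver.findElement(By.id("dialog")).isDisplayed();
                  System.out.println("dialog visible: " + shown);
              } finally {
                  driver.quit();
              }
          }
      }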

    Read the article

  • Excel Automation Addin UDFs not accesible

    - by Eric
    I created the following automation addin: namespace AutomationAddin { [Guid("6652EC43-B48C-428a-A32A-5F2E89B9F305")] [ClassInterface(ClassInterfaceType.AutoDual)] [ComVisible(true)] public class MyFunctions { public MyFunctions() { } #region UDFs public string ToUpperCase(string input) { return input.ToUpper(); } #endregion [ComRegisterFunctionAttribute] public static void RegisterFunction(Type type) { Registry.ClassesRoot.CreateSubKey( GetSubKeyName(type, "Programmable")); RegistryKey key = Registry.ClassesRoot.OpenSubKey( GetSubKeyName(type, "InprocServer32"), true); key.SetValue("", System.Environment.SystemDirectory + @"\mscoree.dll", RegistryValueKind.String); } [ComUnregisterFunctionAttribute] public static void UnregisterFunction(Type type) { Registry.ClassesRoot.DeleteSubKey( GetSubKeyName(type, "Programmable"), false); } private static string GetSubKeyName(Type type, string subKeyName) { System.Text.StringBuilder s = new System.Text.StringBuilder(); s.Append(@"CLSID\{"); s.Append(type.GUID.ToString().ToUpper()); s.Append(@"}\"); s.Append(subKeyName); return s.ToString(); } } } I build it and it registers just fine. I open excel 2003, go to tools-Add-ins, click on the automation button and the addin appears in the list. I add it and it shows up in the addins list. but, the functions themselves don't appear. If I type it in it doesn't work and if I look in the function wizard, my addin doesn't show up as a category and the functions are not in the list. I am using excel 2003 on windows 7 x86. I built the project with visual studio 2010. This addin worked fine on windows xp built with visual studio 2008.

    Read the article

  • About Interview structure for test automation lab developers

    - by Ikaso
    Hi, I am interviewing new applicants for a team that is doing test automation on our company's products. The team is composed of junior software developers and a team leader. The product runs on Windows and has both managed and unmanaged parts. The test automation is done on both the client side (user mode and kernel mode) and the server side (IIS, Windows services, backend). We are doing mainly integration tests and black-box tests. I am trying to figure out how to organize my interview. My overall idea is to ask about a project they have done, then ask some technical questions (multithreading, GC, design patterns) and one programming question. Please note that there is another interview done before mine with two programming questions. My programming question is rather simple (for example: reversing a singly-linked list). My coworkers think that my questions will not find good developers since they are rather simple and well known, but so far most of the applicants fail them. My questions are: Should I change the structure of my interview for this kind of job? What questions do you ask to figure out whether an applicant is test oriented? (Maybe I should provide a buggy implementation of a problem, let them find the bugs, and then ask them what tests they would have done.) Regards,
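
    For reference, the kind of answer the "simple" programming question is looking for fits in a dozen lines; this is one possible C# version of reversing a singly-linked list iteratively:

      public class Node
      {
          public int Value;
          public Node Next;
      }

      public static class ListOps
      {
          // Reverses the list in place and returns the new head.
          public static Node Reverse(Node head)
          {
              Node prev = null;
              while (head != null)
              {
                  Node next = head.Next;   // remember the rest of the list
                  head.Next = prev;        // point the current node backwards
                  prev = head;
                  head = next;
              }
              return prev;
          }
      }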

    Read the article

  • Infrastructure and Platform As A Service in Private Cloud at Lawrence Livermore National Laboratory

    - by Anand Akela
    Scientists at the National Ignition Facility (NIF), the world's largest laser, located at the Lawrence Livermore National Laboratory (LLNL), need a research environment that re-creates the physical environment and conditions that exist inside the sun. They have built a private cloud infrastructure using Oracle VM and Oracle Enterprise Manager 12c to provision such an environment for research. Tim Frazier of LLNL joined the "Managing Your Private Cloud With Oracle Enterprise Manager" session at Oracle OpenWorld 2012 and discussed how the latest features in Oracle VM and Oracle Enterprise Manager 12c enable them to accelerate application provisioning in their private cloud. He also talked about how to increase service delivery agility, improve standardized rollouts, and do proactive management to gain total control of the private cloud environment. He also presented at the "Scene and Be Heard Theater" at Oracle OpenWorld 2012 and shared a lot of good information about his project and what they are doing in their private cloud environment. Learn more by looking at Tim's presentation.

    Read the article

  • Securely expose WebService from Enterprise Network to Internet Client

    - by hotzen
    Are there any standards (or certified solutions) for exposing a (web) service to the internet from a very security-sensitive network (e.g. banking/finance)? I am not specifically talking about WS-* or any other transport-layer security à la SSL/TLS, but rather about important standards or certifications that must be obeyed. Are there any known products (coming from an SAP environment) that can provide a "high-security proxy" of some sort to expose specific web services to the internet? Any buzzwords that a CIO/CTO would be aware of on this subject?

    Read the article

  • When is onBind or onCreate called in an android service browser plugin?

    - by anselm
    I have adapted the example plugin of the android source and the browser recognises the plugin without any problem. Here is an extract of AndroidManifest.xml: <application android:icon="@drawable/icon" android:label="@string/app_name" android:debuggable="true"> <service android:name="com.domain.plugin.PluginService"> <intent-filter> <action android:name="android.webkit.PLUGIN" /> </intent-filter> </service> </application> <uses-sdk android:minSdkVersion="7" /> <uses-permission android:name="android.webkit.permission.PLUGIN"></uses-permission> The actual Service class looks like so: public class PluginService extends Service { @Override public IBinder onBind(Intent arg0) { Log.d("PluginService", "onBind"); return null; } @Override public void onCreate() { Log.d("PluginService", "onCreate"); // TODO Auto-generated method stub super.onCreate(); AssetInstaller.getInstance(this).installAssets("/data/data/com.domain.plugin"); } } The AssetInstaller code is supposed to extract some files required by the actual plugin into the /data/data/com.domain.plugin directory, however wether onBind nor onCreate are called. But I get lot's of debug trace of the actual libnpplugin.so file I'm using. So the puzzle is when and under what circumstance is the Service bound or created in case of a browser plugin. As things look the service seems to be a dummy service. Having said that, is there another intent that can be executed at installation time probably? The only solution I see right now is installing the needed files from the native plugin code instead. Any ideas? I know this is quite a tricky question ;)
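
    As a general note on the Service lifecycle (separate from the android.webkit.PLUGIN case, where it is the browser that decides if and when to bind): declaring the <service> in the manifest does not by itself create it. onCreate() and onBind() only run once some client starts or binds the service, roughly as in this hypothetical client sketch:

      import android.content.ComponentName;
      import android.content.Context;
      import android.content.Intent;
      import android.content.ServiceConnection;
      import android.os.IBinder;

      // Hypothetical client code, shown only to illustrate when the callbacks fire.
      public class PluginServiceClient {
          public static void bind(Context context) {
              ServiceConnection conn = new ServiceConnection() {
                  @Override public void onServiceConnected(ComponentName name, IBinder service) {
                      // Reached only after the service's onCreate() and onBind() have run,
                      // and only if onBind() returned a non-null IBinder.
                  }
                  @Override public void onServiceDisconnected(ComponentName name) { }
              };
              // BIND_AUTO_CREATE creates the service (onCreate) before binding to it (onBind).
              context.bindService(new Intent(context, PluginService.class), conn,
                                  Context.BIND_AUTO_CREATE);
          }
      }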

    Read the article
