Search Results

Search found 4247 results on 170 pages for 'mark unwin'.


  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply.

    To get started, we'll create a demo data set and load the plyr package.

        set.seed(1)
        d <- data.frame(year = rep(2000:2014, each = 3),
                        count = round(runif(45, 0, 20)))
        dim(d)
        library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

        # Example 1
        res <- ddply(d, "year", function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(cv.count = cv)
        })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

        D <- ore.push(d)
        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(year=x$year[1], cv.count = cv)
        }, FUN.VALUE=data.frame(year=1, cv.count=1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same, however the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large.

        R> class(res)
        [1] "ore.frame"
        attr(,"package")
        [1] "OREbase"
        R> head(res)
           year  cv.count
        1  2000 0.3984848
        2  2001 0.6062178
        3  2002 0.2309401
        4  2003 0.5773503
        5  2004 0.3069680
        6  2005 0.3431743

    To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.
        # Example 2
        ddply(d, "year", summarise, mean.count = mean(count))

        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          data.frame(year=x$year[1], mean.count = mean.count)
        }, FUN.VALUE=data.frame(year=1, mean.count=1))

        R> head(res)
           year mean.count
        1  2000   7.666667
        2  2001  13.333333
        3  2002  15.000000
        4  2003   3.000000
        5  2004  12.333333
        6  2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

        # Example 3
        ddply(d, "year", transform, total.count = sum(count))

        res <- ore.groupApply(D, D$year, function(x) {
          total.count <- sum(x$count)
          data.frame(year=x$year[1], count=x$count, total.count = total.count)
        }, FUN.VALUE=data.frame(year=1, count=1, total.count=1))

        > head(res)
           year count total.count
        1  2000     5          23
        2  2000     7          23
        3  2000    11          23
        4  2001    18          40
        5  2001     4          40
        6  2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

        # Example 4
        ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
              cv = sigma/mu)

        res <- ore.groupApply(D, D$year, function(x) {
          mu <- mean(x$count)
          sigma <- sd(x$count)
          cv <- sigma/mu
          data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)
        }, FUN.VALUE=data.frame(year=1, count=1, mu=1, sigma=1, cv=1))

        R> head(res)
           year count        mu    sigma        cv
        1  2000     5  7.666667 3.055050 0.3984848
        2  2000     7  7.666667 3.055050 0.3984848
        3  2000    11  7.666667 3.055050 0.3984848
        4  2001    18 13.333333 8.082904 0.6062178
        5  2001     4 13.333333 8.082904 0.6062178
        6  2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

        # Example 5
        baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
        x <- ddply(baseball.dat, c("year", "team"), summarize,
                   homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows.

        BB.DAT <- ore.push(baseball)
        BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+"))
        BB.DAT2 <- subset(BB.DAT, year > 2000)
        X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
          data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))
        }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE)
        res <- ore.sort(X, by=c("year","team"))

        R> head(res)
           year team homeruns
        1  2001  ANA        4
        2  2001  ARI      155
        3  2001  ATL       63
        4  2001  BAL       58
        5  2001  BOS       77
        6  2001  CHA       63

    Our next example is derived from the ggplot function documentation. This illustrates the use of ddply together with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database and invoke this on the database server. The graph will be returned to the client window, as depicted below.
        # Example 6 with ggplot2
        library(ggplot2)
        df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                         y = rnorm(30))
        # Compute sample mean and standard deviation in each group
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        # Set up a skeleton ggplot object and add layers:
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)

        DF <- ore.push(df)
        ore.tableApply(DF, function(df) {
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

        df2 <- rbind(df,df,df)
        df2$y <- df2$y + rnorm(nrow(df2))
        df2$index <- c(rep(1,30), rep(2,30), rep(3,30))
        DF2 <- ore.push(df2)
        res <- ore.groupApply(DF2, DF2$index, function(df) {
          df <- df[,1:2]
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        }, parallel=TRUE)
        res[[1]]
        res[[2]]
        res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized with ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • How to ensure images all loaded before I reference in my HTML canvas [closed]

    - by mark stephens
    I want to draw some images on an HTML canvas with context.drawImage(Im1, 205, 18, 184, 38). In order to make sure an image has loaded before I draw it, I need to put in code like this, but then I cannot draw things with it:

        var Im1 = new Image();
        Im1.src = "rechnung11014page1/img/1/Im1.png";
        Im1.onload = function() {
          context.drawImage(Im1, 205, 18, 184, 38);
        };

    Is there a way to load all the images and then execute a block of code using several images?
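    One common pattern (a sketch for illustration, not from the original question; the imageSources map, the second image path, and the drawAll function are made-up names) is to preload every image, count the onload events, and only run the drawing block once everything has arrived:

        var imageSources = {
          Im1: "rechnung11014page1/img/1/Im1.png",
          Im2: "rechnung11014page1/img/2/Im2.png"  // hypothetical second image
        };
        var images = {};
        var remaining = Object.keys(imageSources).length;

        function drawAll() {
          // every image is loaded by the time this runs
          context.drawImage(images.Im1, 205, 18, 184, 38);
          // ...draw the other images here...
        }

        Object.keys(imageSources).forEach(function (name) {
          var img = new Image();
          img.onload = function () {
            remaining -= 1;
            if (remaining === 0) { drawAll(); }  // fires exactly once, after the last load
          };
          img.src = imageSources[name];          // set src after wiring up onload
          images[name] = img;
        });

    The same idea works with promises (resolve each onload, then wait on all of them); the key point is that no drawing happens until every image has finished loading.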

    Read the article

  • Silverlight Cream for December 18, 2010 -- #1012

    - by Dave Campbell
    In this Issue: Mark Monster, Kevin Dockx, Jeremy Likness(-2-,-3-), Timmy Kokke, Den Delimarsky, Mike Snow, Samuel Jack(-2-), and Renuka Prasad(-2-). Above the Fold: Silverlight: "Trigger a Storyboard on ViewModel changes" Mark Monster WP7: "Microsoft Push Notification in Windows Phone 7" Renuka Prasad Shoutouts: SilverlightGal sent me the link to The Silverlight Dossier ... I think it's a pretty good start... additions I'd like to see are ways to submit to the various areas. Michael Crump put up a contest that runs from now to January 1st... Win a set of Infragistics Silverlight Controls with Data Visualization!... pretty cool, Michael! If you visit WynApse.com, you'll see I have a subscription to LearnVisualStudio.net... and now they have posted a batch of WP7 videos... 64 of them to be exact... wow!: New video series From SilverlightCream.com: Trigger a Storyboard on ViewModel changes Mark Monster has a great post up about triggering Storyboard on ViewModel changes using the DataTrigger from Blend... cool stuff, and you can also do GoToStateAction or other actions or build yourowndang Trigger Action... fun awaits! ... sorry it took a while to post, Mark... been a tad overloaded here! Working with the Silverlight Rich Text Box control Kevin Dockx has had a post up for a while at SilverlightShow where he takes a good look at the RichText control and it's various capabilities, including source so you can give it a dance yourself. Lessons Learned in Personal Web Page Part 3: Custom Panel and Listbox Jeremy Likness's part 3 of his Personal Web Page lessons learned is covering the tres-cool 3D Panel he did... and he's got it all explained out... building from scratch via a custom panel and a Listbox control... A Silverlight MVVM Feed Reader from Scratch in 30 Minutes Jeremy Likness has a video tutorial showing building an MVVM/Silverlight feedreader in 30 minutes ... plus a couple mods that he noticed after the fact... beat that HTML5 :) Jounce Part 8: Raising Property Changed In Jeremy Likness's latest post, he has number 8 in his series on his MVVM platform, Jounce. This time he's explaining the property changed notification, has a very cool way of doing it, and some interesting comments from readers. Dependency Injection, MVVM, Ninject and Silverlight Timmy Kokke has a great tutorial up with associated demo project on Dependency Injection in MVVM and Silverlight. Some hidden features in the Windows Phone 7 emulator Den Delimarsky shows how to get some of the hidden features on your WP7 emulator like the Call History, Call Settings, and Details about the numbers. Playing sound effects on Windows Phone 7 Mike Snow's latest tip is playing sound effects on your WP7 ... a little bit of XNA here and there, and badabing, badaboom, you got sound! Day 3 of my “Build a Windows Phone 7 game in 3 days” Challenge Samuel Jack has a couple more posts up about his 'Build a WP7 game in 3 Days' challenge... first up is Day 3 from 8:50 to 22:30 ... wow... long day! ... but he's got something good going now... some good external links also Day 3.5 of my “Build a Windows Phone 7 game in 3 days” Challenge Samuel Jack's 3rd day ended with another half-day added on to put on some finishing touches... again, some good external links... and he finished with this Say hello to Simon Squared, my 3.5 day old WP7 Game Microsoft Push Notification in Windows Phone 7 Renuka Prasad has a bunch of material up that I've not been aware of (how did that happen, people??) ... 
here's the first of a couple of his posts on Code Project ... a very nice tutorial on the Push Notification process... great diagrams and external links. Windows Phone 7 – Toast Notification Using Windows Azure Cloud Service Renuka Prasad has another WP7 post on CodeProject... this one on Toast Notification... and he's using Azure and WCF all rolled into it as well... great diagrams, descriptions and all the code. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Nautilus tags/labels/marks/columns for folders/files

    - by madox2
    Is there any way to mark folders or files with tags (or labels, new columns, or whatever) in Nautilus? It would be nice to sort marked folders or files by these tags. My first idea was to mark folders in my Movie directory with tags like seen, not seen, must see, and so on. Then I realized it would be useful in any other workspace with any custom tags... Is there any Nautilus extension for this? Or any other file manager which can do this? It might look like this:

    Read the article

  • Real-time Big Data Analytics is a reality for StubHub with Oracle Advanced Analytics

    - by Mark Hornick
    What can you use for a comprehensive platform for real-time analytics? How can you process big data volumes for near-real-time recommendations and dramatically reduce fraud? Learn in this video what Stubhub achieved with Oracle R Enterprise from the Oracle Advanced Analytics option to Oracle Database, and read more on their story here. Advanced analytics solutions that impact the bottom line of a business are challenging due to the range of skills and individuals involved in realizing such solutions. While we hear a lot about the role of the data scientist, that role is but one piece of the puzzle. Advanced analytics solutions also have an operationalization aspect that also requires close proximity to where the transactional activity occurs. The data scientist needs access to the right data with which to model the business problem. This involves IT for data collection, management, and administration, as well as ensuring zero downtime (a website needs to be up 24x7). This also involves working with the data scientist to keep predictive models refreshed with the latest scripts. Integrating advanced analytics solutions into enterprise apps involves not just generating predictions, but supporting the whole life-cycle from data collection, to model building, model assessment, and then outcome assessment and feedback to the model building process again. Application and web interface designers need to take into account how end users will see and use the advanced analytics results, e.g., supporting operations staff that need to handle the potentially fraudulent transactions. As just described, advanced analytics projects can be "complicated" from just a human perspective. The extent to which software can simplify the interactions among users and systems will increase the likelihood of project success. The ability to quickly operationalize advanced analytics projects and demonstrate measurable value, means the difference between a successful project and just a nice research report. By standardizing on Oracle Database and SQL invocation of R, along with in-database modeling as found in Oracle Advanced Analytics, expedient model deployment and zero downtime for refreshing models becomes a reality. Meanwhile, data scientists are also able to explore leading edge techniques available in open source. The Oracle solution propels the entire organization forward to realize the value of advanced analytics.

    Read the article

  • Summit Time!

    - by Ajarn Mark Caldwell
    Boy, how time flies!  I can hardly believe that the 2011 PASS Summit is just one week away.  Maybe it snuck up on me because it’s a few weeks earlier than last year.  Whatever the cause, I am really looking forward to next week.  The PASS Summit is the largest SQL Server conference in the world and a fantastic networking opportunity thrown in for no additional charge.  Here are a few thoughts to help you maximize the week. Networking As Karen Lopez (blog | @DataChick) mentioned in her presentation for the Professional Development Virtual Chapter just a couple of weeks ago, “Don’t wait until you need a new job to start networking.”  You should always be working on your professional network.  Some people, especially technical-minded people, get confused by the term networking.  The first image that used to pop into my head was the image of some guy standing, awkwardly, off to the side of a cocktail party, trying to shmooze those around him.  That’s not what I’m talking about.  If you’re good at that sort of thing, and you can strike up a conversation with some stranger and learn all about them in 5 minutes, and walk away with your next business deal all but approved by the lawyers, then congratulations.  But if you’re not, and most of us are not, I have two suggestions for you.  First, register for Don Gabor’s 2-hour session on Tuesday at the Summit called Networking to Build Business Contacts.  Don is a master at small talk, and at teaching others, and in just those two short hours will help you with important tips about breaking the ice, remembering names, and smooth transitions into and out of conversations.  Then go put that great training to work right away at the Tuesday night Welcome Reception and meet some new people; which is really my second suggestion…just meet a few new people.  You see, “networking” is about meeting new people and being friendly without trying to “work it” to get something out of the relationship at this point.  In fact, Don will tell you that a better way to build the connection with someone is to look for some way that you can help them, not how they can help you. There are a ton of opportunities as long as you follow this one key point: Don’t stay in your hotel!  At the least, get out and go to the free events such as the Tuesday night Welcome Reception, the Wednesday night Exhibitor Reception, and the Thursday night Community Appreciation Party.  All three of these are perfect opportunities to meet other professionals with a similar job or interest as you, and you never know how that may help you out in the future.  Maybe you just meet someone to say HI to at breakfast the next day instead of eating alone.  Or maybe you cross paths several times throughout the Summit and compare notes on different sessions you attended.  And you just might make new friends that you look forward to seeing year after year at the Summit.  Who knows, it might even turn out that you have some specific experience that will help out that other person a few months’ from now when they run into the same challenge that you just overcame, or vice-versa.  But the point is, if you don’t get out and meet people, you’ll never have the chance for anything else to happen in the future. 
One more tip for shy attendees of the Summit…if you can’t bring yourself to strike up conversation with strangers at these events, then at the least, after you sit through a good session that helps you out, go up to the speaker and introduce yourself and thank them for taking the time and effort to put together their presentation.  Ideally, when you do this, tell them WHY it was beneficial to you (e.g. “Now I have a new idea of how to tackle a problem back at the office.”)  I know you think the speakers are all full of confidence and are always receiving a ton of accolades and applause, but you’re wrong.  Most of them will be very happy to hear first-hand that all the work they put into getting ready for their presentation is paying off for somebody. Training With over 170 technical sessions at the Summit, training is what it’s all about, and the training is fantastic!  Of course there are the big-name trainers like Paul Randall, Kimberly Tripp, Kalen Delaney, Itzik Ben-Gan and several others, but I am always impressed by the quality of the training put on by so many other “regular” members of the SQL Server community.  It is amazing how you don’t have to be a published author or otherwise recognized as an “expert” in an area in order to make a big impact on others just by sharing your personal experience and lessons learned.  I would rather hear the story of, and lessons learned from, “some guy or gal” who has actually been through an issue and came out the other side, than I would a trained professor who is speaking just from theory or an intellectual understanding of a topic. In addition to the three full days of regular sessions, there are also two days of pre-conference intensive training available.  There is an extra cost to this, but it is a fantastic opportunity.  Think about it…you’re already coming to this area for training, so why not extend your stay a little bit and get some in-depth training on a particular topic or two?  I did this for the first time last year.  I attended one day of extra training and it was well worth the time and money.  One of the best reasons for it is that I am extremely busy at home with my regular job and family, that it was hard to carve out the time to learn about the topic on my own.  It worked out so well last year that I am doubling up and doing two days or “pre-cons” this year. And then there are the DVDs.  I think these are another great option.  I used the online schedule builder to get ready and have an idea of which sessions I want to attend and when they are (much better than trying to figure this out at the last minute every day).  But the problem that I have run into (seems this happens every year) is that nearly every session block has two different sessions that I would like to attend.  And some of them have three!  ACK!  That won’t work!  What is a guy supposed to do?  Well, one option is to purchase the DVDs which are recordings of the audio and projected images from each session so you can continue to attend sessions long after the Summit is officially over.  Yes, many (possibly all) of these also get posted online and attendees can access those for no extra charge, but those are not necessarily all available as quickly as the DVD recording are, and the DVDs are often more convenient than downloading, especially if you want to share the training with someone who was not able to attend in person. Remember, I don’t make any money or get any other benefit if you buy the DVDs or from anything else that I have recommended here.  
These are just my own thoughts, trying to help out based on my experiences from the 8 or so Summits I have attended.  There is nothing like the Summit.  It is an awesome experience, fantastic training, and a whole lot of fun which is just compounded if you’ll take advantage of the first part of this article and make some new friends along the way.

    Read the article

  • SOA online seminar by Griffiths Waite &ndash; adopt Fusion Applications patterns today

    - by Jürgen Kress
    Our SOA Specialized partner Griffiths Waite developed a series of Oracle Fusion Middleware online seminars. Mark Simpson, Oracle ACE Director, gives an insight into the Oracle strategy: how Oracle is using Fusion Middleware to build Fusion Applications and how you can profit from the Fusion Architecture in your own project. He gives examples of how customers can adopt use cases for Application Integration, Composite Application Portals, Application Modernization and Business Process Management. If you are interested, make sure you watch the online seminar and take the SOA Maturity Assessment. For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: Mark Simpson,Griffiths Waite,Fusion Middleware,Fusion Applications,SOA,Oracle,SOA Community,OPN,SOA Specialization,Specialization,Jürgen Kress

    Read the article

  • How can I make a permanently updated copy of a file in a different place to the original file?

    - by Mark
    I use two computers, a Linux one for coding and building and a Windows one which has the programming application to load the built program onto the hardware. Both computers have access to a network drive which I use to pass the files from Linux to Windows. My problem is that every time I build, I have to copy the files from where they are created to the network drive. How can I make some sort of file on the network drive on Ubuntu that always mirrors the file which is built in the different location, like a pointer? Thanks!
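    A symlink on the share would not resolve from the Windows side, since it would point at a path that only exists on the Linux machine, so the file really does need to be copied. One low-effort sketch (the paths are placeholders, and it assumes the inotify-tools package is installed) is a small watcher that copies the build output to the network drive every time it changes:

        #!/bin/bash
        # Copy freshly built files to the network share whenever they change.
        BUILD_DIR="$HOME/project/build"          # hypothetical build output directory
        SHARE_DIR="/mnt/networkdrive/builds"     # hypothetical mount point of the network drive

        # inotifywait is provided by inotify-tools (sudo apt-get install inotify-tools)
        inotifywait -m -e close_write --format '%f' "$BUILD_DIR" | while read -r file; do
            cp "$BUILD_DIR/$file" "$SHARE_DIR/"
            echo "Copied $file to $SHARE_DIR"
        done

    Adding the cp as the last step of the build script (or Makefile) achieves the same result without a background watcher.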

    Read the article

  • Fix for EF4 Profiler Issue Coming in next Cumulative Update

    - by Ajarn Mark Caldwell
    Hey!  What do you know?  Microsoft Connect really works! I was very happy this morning to open my email and find a notice from Umachandar on the SQL Programmability Team that they have created a fix for the Odd Profiler Results with EF4 issue that I wrote about last June.  Not only did I blog about it, but I logged an item to Connect with repro steps and sample code.  And now, they have announced that they have a fix for this problem and that it will be included in the next Cumulative Update for SQL Server 2008 R2. For those of you not running 2008 R2, or who prefer to wait for full Service Packs rather than install the latest Cumulative Updates, I also wrote about a workaround for the issue, as long as you do not require the Multiple Active Result Sets feature to be enabled. It is easy with Microsoft to get the feeling that you’re just shouting in the wind, and it is nice to get validation once in a while that they really are listening.

    Read the article

  • Get Smarter Just By Listening

    - by mark.wilcox
    Occasionally my friends ask me what I listen to and read to keep informed, so I thought I would post an update. First - there is an entirely new network being launched by Jason Calacanis called "ThisWeekIn". They have weekly shows on a variety of topics including Startups, Android, Twitter, Cloud Computing, Venture Capital and now the iPad. If you want to keep ahead (and really get motivated) - I totally recommend listening to at least This Week in Startups. I also find Cloud Computing helpful. I also like listening to the Android show so that I can see how it's progressing. Because while I love my iPhone/iPad - it's important to keep the competition in the game to improve everything. I'm also not opposed to switching to Android if something becomes as nice an experience - but so far - my take on Android devices is: 10 years ago, I would have jumped all over them because of their hackability. But now, I'm in a phase where I just want these devices to work, and most of my creation is in non-programming areas - I find the i* experience better. Second - In terms of general entertaining tech news - I'm a big fan of This Week in Tech. Finally - For a non-geek but very informative show - The Kevin Pollack Show on the ThisWeekIn network gets my highest rating. It's basically two hours of in-depth interviews with a wide variety of well-known comedians and movie stars. -- Posted via email from Virtual Identity Dialogue

    Read the article

  • Can't complete dropbox installation from behind proxy

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't setup Dropbox. I am convinced that this is a proxy issue as I can't setup Ubuntu one either (but I don't use Ubuntu One so that is not a problem). I have looked at the Ubuntu One fix but it seems to be to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source and from .deb package from website and from software centre) but once I click OK from the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor its waiting channel is inet_wait_for_connect. I have set the following proxy directives thus far: Added mj22:**@proxy.waikato.ac.nz:80 information to network proxy settings under network in settings. Added http_host and http_port variables under gconf-editor-system-proxy Added 'host', 'authentication_password' 'authentication_user' and ticked 'user authentication' and 'use_http_proxy' under gconf-editor-system-http_proxy Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting Software Center retrieve packages). (where ** is my password) I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and Software Centre can download packages but thats it. Related issues: The software centre can't fetch reviews (but can download packages). When trying to add an online account in Gnome 3 a dialog pop up appears with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)" Updates: After some time (10mins ish) Dropbox shows an error dialog box that reads: Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set you http_proxy environment variable. Is there a way I can see what environment variables are currently set?
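    To answer the closing question: from a terminal, the shell's own view of the proxy settings can be listed with a one-liner, and the environment of an already-running process can be inspected through /proc (generic commands, not specific to Dropbox):

        # proxy-related variables in the current shell
        env | grep -i proxy

        # environment of a running process, e.g. the dropbox daemon (substitute the real PID)
        cat /proc/$(pgrep -f dropbox | head -n1)/environ | tr '\0' '\n' | grep -i proxy

    Note that variables exported from /etc/bash.bashrc only reach programs started from a shell, so an installer launched from the desktop may never see http_proxy at all.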

    Read the article

  • Configuration "diff" across Oracle WebCenter Sites instances

    - by Mark Fincham-Oracle
    Problem Statement With many Oracle WebCenter Sites environments - how do you know if the various configuration assets and settings are in sync across all of those environments? Background At Oracle we typically have a "W" shaped set of environments.  For the "Production" environments we typically have a disaster recovery clone as well and sometimes additional QA environments alongside the production management environment. In the case of www.java.com we have 10 different environments. All configuration assets/settings (CSElements, Templates, Start Menus etc..) start life on the Development Management environment and are then published downstream to other environments as part of the software development lifecycle. Ensuring that each of these 10 environments has the same set of Templates, CSElements, StartMenus, TreeTabs etc.. is impossible to do efficiently without automation. Solution Summary  The solution comprises of two components. A JSON data feed from each environment. A simple HTML page that consumes these JSON data feeds.  Data Feed: Create a JSON WebService on each environment. The WebService is no more than a SiteEntry + CSElement. The CSElement queries various DB tables to obtain details of the assets/settings returning this data in a JSON feed. Report: Create a simple HTML page that uses JQuery to fetch the JSON feed from each environment and display the results in a table. Since all assets (CSElements, Templates etc..) are published between environments they will have the same last modified date. If the last modified date of an asset is different in the JSON feed or is mising from an environment entirely then highlight that in the report table. Example Solution Details Step 1: Create a Site Entry + CSElement that outputs JSON Site Entry & CSElement Setup  The SiteEntry should be uncached so that the most recent configuration information is returned at all times. In the CSElement set the contenttype accordingly: Step 2: Write the CSElement Logic The basic logic, that we repeat for each asset or setting that we are interested in, is to query the DB using <ics:sql> and then loop over the resultset with <ics:listloop>. For example: <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" /> "templates": [ <ics:listloop listname="TemplateList"> {"name":"<ics:listget listname="TemplateList"  fieldname="name"/>", "modified":"<ics:listget listname="TemplateList"  fieldname="updateddate"/>"}, </ics:listloop> ], A comprehensive list of SQL queries to fetch each configuration asset/settings can be seen in the appendix at the end of this article. For the generation of the JSON data structure you could use Jettison (the library ships with the 11.1.1.8 version of the product), native Java 7 capabilities or (as the above example demonstrates) you could roll-your-own JSON output but that is not advised. Step 3: Create an HTML Report The JavaScript logic looks something like this.. 
1) Create a list of JSON feeds to fetch: ENVS['dev-mgmngt'] = 'http://dev-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json'; ENVS['dev-dlvry'] = 'http://dev-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';  ENVS['test-mngmnt'] = 'http://test-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json';  ENVS['test-dlvry'] = 'http://test-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';   2) Create a function to get the JSON feeds: function getDataForEnvironment(url){ return $.ajax({ type: 'GET', url: url, dataType: 'jsonp', beforeSend: function (jqXHR, settings){ jqXHR.originalEnv = env; jqXHR.originalUrl = url; }, success: function(json, status, jqXHR) { console.log('....success fetching: ' + jqXHR.originalUrl); // store the returned data in ALLDATA ALLDATA[jqXHR.originalEnv] = json; }, error: function(jqXHR, status, e) { console.log('....ERROR: Failed to get data from [' + url + '] ' + status + ' ' + e); } }); } 3) Fetch each JSON feed: for (var env in ENVS) { console.log('Fetching data for env [' + env +'].'); var promisedData = getDataForEnvironment(ENVS[env]); promisedData.success(function (data) {}); }  4) For each configuration asset or setting create a table in the report For example, CSElements: 1) Get a list of unique CSElement names from all of the returned JSON data. 2) For each unique CSElement name, create a row in the table  3) Select 1 environment to represent the master or ideal state (e.g. "Everything should be like Production Delivery") 4) For each environment, compare the last modified date of this envs CSElement to the master. Highlight any differences in last modified date or missing CSElements. 5) Repeat...    Appendix This section contains various SQL statements that can be used to retrieve configuration settings from the DB.  
    Templates
        <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" />
    CSElements
        <ics:sql sql="SELECT name,updateddate FROM CSElement WHERE status != 'VO'" listname="CSEList" table="CSElement" />
    Start Menus
        <ics:sql sql="select sm.id, sm.cs_name, sm.cs_description, sm.cs_assettype, sm.cs_assetsubtype, sm.cs_itemtype, smr.cs_rolename, p.name from StartMenu sm, StartMenu_Sites sms, StartMenu_Roles smr, Publication p where sm.id=sms.ownerid and sm.id=smr.cs_ownerid and sms.pubid=p.id order by sm.id" listname="startList" table="Publication,StartMenu,StartMenu_Roles,StartMenu_Sites"/>
    Publishing Configurations
        <ics:sql sql="select id, name, description, type, dest, factors from PubTarget" listname="pubTargetList" table="PubTarget" />
    Tree Tabs
        <ics:sql sql="select tt.id, tt.title, tt.tooltip, p.name as pubname, ttr.cs_rolename, ttsect.name as sectname from TreeTabs tt, TreeTabs_Roles ttr, TreeTabs_Sect ttsect,TreeTabs_Sites ttsites LEFT JOIN Publication p on p.id=ttsites.pubid where p.id is not null and tt.id=ttsites.ownerid and ttsites.pubid=p.id and tt.id=ttr.cs_ownerid and tt.id=ttsect.ownerid order by tt.id" listname="treeTabList" table="TreeTabs,TreeTabs_Roles,TreeTabs_Sect,TreeTabs_Sites,Publication" />
    Filters
        <ics:sql sql="select name,description,classname from Filters" listname="filtersList" table="Filters" />
    Attribute Types
        <ics:sql sql="select id,valuetype,name,updateddate from AttrTypes where status != 'VO'" listname="AttrList" table="AttrTypes" />
    WebReference Patterns
        <ics:sql sql="select id,webroot,pattern,assettype,name,params,publication from WebReferencesPatterns" listname="WebRefList" table="WebReferencesPatterns" />
    Device Groups
        <ics:sql sql="select id,devicegroupsuffix,updateddate,name from DeviceGroup" listname="DeviceList" table="DeviceGroup" />
    Site Entries
        <ics:sql sql="select se.id,se.name,se.pagename,se.cselement_id,se.updateddate,cse.rootelement from SiteEntry se LEFT JOIN CSElement cse on cse.id = se.cselement_id where se.status != 'VO'" listname="SiteList" table="SiteEntry,CSElement" />
    Webroots
        <ics:sql sql="select id,name,rooturl,updatedby,updateddate from WebRoot" listname="webrootList" table="WebRoot" />
    Page Definitions
        <ics:sql sql="select pd.id, pd.name, pd.updatedby, pd.updateddate, pd.description, pdt.attributeid, pa.name as nameattr, pdt.requiredflag, pdt.ordinal from PageDefinition pd, PageDefinition_TAttr pdt, PageAttribute pa where pd.status != 'VO' and pa.id=pdt.attributeid and pdt.ownerid=pd.id order by pd.id,pdt.ordinal" listname="pageDefList" table="PageDefinition,PageAttribute,PageDefinition_TAttr" />
    FW_Application
        <ics:sql sql="select id,name,updateddate from FW_Application where status != 'VO'" listname="FWList" table="FW_Application" />
    Custom Elements
        <ics:sql sql="select elementname from ElementCatalog where elementname like 'CustomElements%'" listname="elementList" table="ElementCatalog" />

    Read the article

  • Oracle's Vision for the Social-Enabled Enterprise

    - by Peggy Chen
    Register Now Join us for the Webcast. Mon., Sept. 10, 2012 10 a.m. PT / 1 p.m. ET Join the conversation: #oracle and #socbiz Mark Hurd President, Oracle Thomas Kurian Executive Vice President, Product Development, Oracle Reggie Bradford Senior Vice President, Product Development, Oracle Dear Colleague, Smart companies are developing social media strategies to engage customers, gain brand insights, and transform employee collaboration and recruitment. Oracle is powering this transformation with the most comprehensive enterprise social platform that lets you: Monitor and engage in social conversations Collect and analyze social data Build and grow brands through social media Integrate enterprisewide social functionality into a single system Create rich social applications Join Oracle President Mark Hurd and senior Oracle executives to learn more about Oracle’s vision for the social-enabled enterprise. Register now for this Webcast. Copyright © 2012, Oracle and/or its affiliates. All rights reserved. Contact Us | Legal Notices and Terms of Use | Privacy Statement

    Read the article

  • How was Git designed?

    - by Mark Canlas
    My workplace recently switched to Git and I've been loving (and hating!) it. I really do love it, and it is extremely powerful. The only part I hate is that sometimes it's too powerful (and maybe a bit terse/confusing). My question is... How was Git designed? After using it for just a short amount of time, you get the feel that it can handle many obscure workflows that other version control systems could not. But it also feels elegant underneath. And fast! This is no doubt due in part to Linus's talent. But I'm wondering, was the overall design of Git based on something? I've read about BitKeeper but the accounts are scant on technical details. The compression, the graphs, getting rid of revision numbers, emphasizing branching, stashing, remotes... Where did it all come from? Linus really knocked this one out of the park, and on pretty much the first try! It's quite good to use once you're past the learning curve.

    Read the article

  • Email sent via Google via relayhost being marked as spam

    - by Mark H
    Company email is hosted by Google Apps. The company PBX in-house is Elastix. All voicemails received on the Elastix extensions are supposed to be emailed by the CentOS server (Postfix) to the email address of the employee. Using relayhost on Postfix, I am sending those emails through Google Apps (smtp.gmail.com), but some of these voicemail emails end up in the spam folder. Sending it through Google, and sending it to an address hosted by Google - yet there's spam. Email sent from the Google Apps interface draws no complaints of going to spam - just email from the Elastix server. I've just asked our DNS domain guys to add SPF records, but is that all that's needed? Some help please!
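    For reference, a domain that sends all of its mail through Google's servers typically publishes an SPF TXT record along these lines (example.com is a placeholder; any other hosts that legitimately send mail for the domain must be added to the record):

        example.com.   3600   IN   TXT   "v=spf1 include:_spf.google.com ~all"

    SPF alone may not be enough to stay out of the spam folder; DKIM signing and the content of the automated messages also influence how Gmail scores them.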

    Read the article

  • June 17, 2010 Webcast - 5 Security Tips To Reduce Cost Using Oracle Directory Services

    - by mark.wilcox
    We're delivering another webcast on June 17 (next week!): 5 Security Tips To Reduce Cost Using Oracle Directory Services  Organizations with business units spread around the world face costly and time consuming security concerns. However, many of these companies are forced to deal with increased scrutiny and security demands while resources are reduced. This live webcast focuses on concrete ways IT organizations can use directory services to do more with less.  Posted via email from Virtual Identity Dialogue

    Read the article

  • SharePoint Saturday DC

    - by Mark Rackley
    Wow… did you see this thing? 927 attendees? An exhibition hall full of vendors? 94 speakers? 100 sessions?? Insane is a word that comes to mind… SharePoint Saturday DC was definitely epic as far as SharePoint Saturdays go. I got to catch up with a lot of friends and make some new ones.  Met a couple of fans of the blog (hello ladies…;))  Did you know that people actually read this thing? I guess that means I need to stop putting so much garbage on here and more content. I’ll get right on that as soon as I find out how to add 6 hours to each day. Anyway, once again I did my “Wrapping Your Head Around the SharePoint Beast” session.  I tweaked it even more from Huntsville and presented to a packed room with some people sitting on the floor and standing in the aisles. It was a great crowd, very interactive and they seemed interested at least. Thank you guys so much for attending and please feel free to tell me of any suggestions you have to make the presentation better.  This is one of the presentations that will probably never die. Everyone beginning SharePoint development needs a good introduction and starting point. My goal is to make this THE session to see on the subject. So, a little interesting data about my class. Half of the room was brand new to SharePoint and only one person was using 2010. That tells me that this session still has legs and that 2007 isn’t going anywhere anytime soon.  I know my organization will be using 2007 for at least a couple more years. Oh yeah… the slide deck?  Here it is: SharePoint Saturday DC Slide Deck So, SharePoint Saturday was truly tremendous and if you weren’t there you missed out. @meetdux, @usher, and the rest of their crew did a spectacular job. You guys rock and are a huge asset to the community. Thanks for allowing me to speak. What’s up next for me?  I’m so glad you asked…. SHAREPOINT SATURDAY OZARKS IS JUNE 12TH! Although SharePoint Saturday Ozarks on June 12 in Harrison Arkansas will be a much more intimate event than DC, it promises to be a most memorable event. We’ve got over 30 speakers and sessions, some cool stuff to give away, and we’re going floating down the Buffalo River on the 13th. Let’s see you do THAT in DC.  :) Anyway, I hope to see you there and I would truly appreciate any help you can do to help publicize the event. We just got internet here in the hills and most people here are still looking for the “any” key….

    Read the article

  • How should I incorporate a hotfix back into a feature branch using gitflow?

    - by Mark Trapp
    I've started using gitflow for a project, and I have an outstanding feature branch as well as a newly created hotfix. Per the gitflow workflow, the hotfix gets applied to both the master and develop branches, but nothing is said or done about extant feature branches. Nevertheless, I'd like to incorporate the hotfix changes back into my feature branch, which as near as I can tell leaves three options: Don't incorporate the changes. If the changes were needed for the feature branch, it should've been part of the feature branch. Merge develop back into the feature branch. This seems to follow the gitflow workflow the best, but would cause out-of-order commits. Rebase the feature branch onto develop. This would preserve commit order but rebasing seems to be completely absent from the general gitflow workflow. What's the best practice here?
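    For what it's worth, the mechanics of option 2 are simple (a sketch; the feature branch name is hypothetical and the branch names otherwise follow the usual gitflow defaults):

        # after `git flow hotfix finish` has merged the hotfix into master and develop
        git checkout feature/my-feature
        git merge develop        # option 2: pull the hotfix in via develop

        # or, option 3, trading merge commits for a rewritten, linear history
        git checkout feature/my-feature
        git rebase develop

    Rebasing is generally only safe while the feature branch has not been pushed and shared with anyone else.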

    Read the article

  • Oracle Magazine - OWB 11gR2 and Heterogeneous Databases

    - by David Allan
    There's a nice article titled 'Oracle Warehouse Builder 11g Release 2 and Heterogeneous Databases' from Oracle ACE director and cofounder of Rittman Mead Consulting, Mark Rittman in the May/June 2010 Oracle Magazine that covers the heterogeneous database support in OWB 11gR2: http://www.oracle.com/technology/oramag/oracle/10-may/o30bi.html Big thanks to Mark for this write up. There is an Oracle white paper on the support here and for examples of this extensibility you can go to the OWB blog archive where there are quite a few posts. I would recommend the following interesting posts out of the archive architecture overview, bulk file loading, MySQL open connectivity and MySQL bulk extract as interesting posts amongst others.

    Read the article

  • How to change the default editor of a specific file type in JDeveloper

    - by [email protected]
    When you open a file in JDeveloper, the mode that is used as the default might not be what you as a developer want. If, for example, every time you open a .jsp(x) file you click on the source tab at the bottom of the window so that you can edit the jsp(x) file in source code mode, you may want to consider changing the default editor for that file type. This is easy to do in the JDeveloper tool preferences and can be a time saver in the long run, since some editors can take a while to start up and if you don't need them often, this would just be lost time. Here are the steps:

    1. From the JDeveloper menu, select Tools->Preferences...
    2. Select "File Types" in the tree component on the left side of the preferences dialog.
    3. Click on the "Default Editors" tab.
    4. Scroll to the file type you want to change.
    5. In the details section at the bottom of the dialog, use the "Default Editor" select list to change the default to your liking.

    Read the article

  • Source Control and SQL Development &ndash; Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software.  But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio.  “So, how does that work?” you may wonder.  Well, let me share some of the details of how we do it where I work… The key to this approach is that everything is done via Transact-SQL script files; either natively written T-SQL, or generated.  My preference is to write all my code by hand, which forces you to become better at your SQL syntax.  But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then you use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system.  You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer, and Choosing Tasks, Generate Scripts (see figure 1 to the left).  You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production, and you need to make just a simple change, such as adding a new column or index.  In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button (). Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change.  I believe that it is important to actually execute the script rather than just click the Save button because this is your first test that your change script is working and you didn’t somehow lose a portion of the change. As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program. So, now that you have all of these script files, what do you do with them?  Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system.  ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution.  Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database. Once you have your initial set of scripts loaded into source control, then making changes, such as altering a stored procedure becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control.  Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because this means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.  
And when you have 800+ procedures like we do, that can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, do the change, save it, and the go back to source control to check in.  Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide.  It is worth the effort, and this is how I have been doing development for the last several years. Remember that everything that the SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing.  And now I have shown you how you can do it all without spending any extra money.  You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn’t it) and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier, or not, you now have no excuse for not using source control with your SQL development. * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that followed by a section that assigns permissions.  This allows me to run the same script regardless of whether the procedure previously existed in the database.  If the script was only an ALTER PROCEDURE, then it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it if it did not exist.  There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD.  If you do, then you have broken the integrity of your deployment process because what you deployed to PROD was not exactly the same as what was tested in TEST, so you effectively have now released untested code into PROD.
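    A minimal sketch of that rerunnable script layout (the procedure, table, and role names here are made up for illustration) looks like this:

        IF EXISTS (SELECT * FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'dbo.GetCustomerOrders')
                     AND type = N'P')
            DROP PROCEDURE dbo.GetCustomerOrders;
        GO

        CREATE PROCEDURE dbo.GetCustomerOrders
            @CustomerID INT
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT OrderID, OrderDate
            FROM dbo.Orders
            WHERE CustomerID = @CustomerID;
        END
        GO

        -- DROP removes existing grants, so permissions are reassigned at the end
        GRANT EXECUTE ON dbo.GetCustomerOrders TO AppUserRole;
        GO

    Because the script drops and recreates the procedure no matter what already exists, the very same file can be run unchanged in DEV, TEST, and PROD, which is exactly the deployment-integrity point made above.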

    Read the article

  • How to use MythBuntu to send TV signal to a 2nd frontend

    - by Mark Preston
    I guess that a MythTV or MythBuntu backend acts as a "server" for the frontends. I have MythBuntu installed. It runs fine; I can tune live TV, hear the sound, etc. To get this to work, I had to configure the Wired Network IPv4 settings to Method: Link-Local Only. The Local Backend IP address is 127.0.0.1, and the info (bottom of screen) says that if there is another frontend, this IP address must be changed.

    1 - Does this mean changed to the IP address of the 2nd frontend?
    2 - What "Method" do I use to support 2 or more frontends?
    3 - I have an ethernet switch which currently "sees" the TV signal and sends it to the computer's ethernet port, where MythBuntu makes use of it.
    4 - How do I set up Myth to send its output (the TV shows) to both televisions?

    If you know of a How-To or website, please give the URL or identifying keywords.

    Read the article

  • BPM 11g - Dynamic Task Assignment with Multi-level Organization Units

    - by Mark Foster
    I've seen several requirements to have a more granular level of task assignment in BPM 11g based on some value in the data passed to the process. Parametric Roles is normally the first port of call to try to satisfy this requirement, but in this blog we will show how a lot of use-cases can be satisfied by the easier to implement and flexible Organization Unit. The Use-Case Task assignment is to an approval group containing several users. At runtime, a location value in the input data determines which of the particular users the task is ultimately assigned to. In this case we use the Demo Community referenced in the SOA Admin Guide, and specifically the "LoanAnalyticGroup" which contains three users; "szweig", "mmitch" & "fkafka". In our scenario we would like to assign a task to "szweig" if the input data specifies that the location is "JapanCentral", to "fkafka" if the location is "JapanNorth" and to "mmitch" if "JapanSouth", and to all of them if the location is "Japan" i.e....   The Process Simple one human task process.... In the output data association of the "Start" activity we need to set the value of the "Organization Unit" predefined variable based on the input data (note that the  predefined variables can only be set on output data associations)....  ...and in the output data association of the human activity we will reset the "Organization Unit" to empty, always good practice to ensure that the Organization Unit will not be used for any subsequent human activities for which we do not require it.... Set Up the Organization Unit  Log in to the BPM Workspace with an administrator user (weblogic/welcome1 in our case) and choose the "Administration" option. Within "Roles" assign the "ProcessOwner" swim-lane for our process to "LoanAnalyticGroup".... Within "Organization Units" we can model our organization.... "Root Organization Unit" as "Japan" and "Child Organization Unit" as "Central", "South" & "North" as shown. As described previously, add user "szweig" to "Central", "mmitch" to "South" and "fkafka" to "North"....   Test the Process Invalid Data  First let us test with invalid data in the input to see what the consequences are, here we use "X" as input.... ...and looking at the instance we can see it has errored.... Organization Unit Root Level Assignment  Now let us see what happens if we have "Japan" in the input data.... ...looking in the "flow trace" we can see that the task has been assigned....  ... but who has the task been assigned to ? Let us look in the BPM Workspace for user "szweig"....  ...and for "mmitch"....  ... and for "fkafka"....  ...so we can see that with an Organization Unit at "Root" level we have successfully assigned the task to all users. Organization Unit Child Level Assignment  Now let us test with "Japan/North" in the input data.... ...and looking in "fkafka" workspace we see the task has been assigned, remember, he was associated with "JapanNorth"....   ... but what about the workspace of "szweig"....  ...no tasks assigned, neither has "mmitch", just as we expected. Summary  We have seen in this blog how to easily implement multi-level dynamic task routing using Organization Units, a common use-case and a simpler solution than Parametric Roles. 

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-07

    - by Bob Rhubart
    Exalogic Webcast Series: Rethink Your Business Application Deployment Strategy. Learn best practices for simplifying IT operations while delivering the application performance the business needs. On-demand sessions include: "Faster and Easier: Deploying ERP Applications on Oracle Exalogic"; "Redefining the CRM and E-Commerce Experience with Oracle Exalogic"; and "The Road to a Cloud-Enabled, Infinitely Elastic Application Infrastructure".
    Virtualization at Oracle - Six Part Series. Links to all six articles in the series by Matthias Pfuetzner and Detlef Drewanz, spanning SPARC and x86.
    WebCenter Content shared folders for clustering | Kyle Hatlestad. A-Team blogger Kyle Hatlestad shares the details on "how the file systems should be split and what options are required."
    Eclipse DemoCamp - June 2012 - Redwood Shores, CA. When: Wednesday, June 13, 2012, 6:00pm - 9:00pm. Where: Oracle HQ - 10 Twin Dolphin Drive, Redwood Shores, CA. Presentations: The evolution of Java persistence (Doug Clarke, EclipseLink Project Lead, Oracle); Integrating BIRT into Applications (Ashwini Verma, Actuate Corporation); Developing Rich ADF Applications with Java EE (Greg Stachnick, Oracle); Leveraging OSGi In The Enterprise (Kamal Muralidharan, Lead Engineer, eBay); NVIDIA® Nsight™ Eclipse Edition (Goodwin, Tech lead - Visual tools; Eugene Ostroukhov, Senior engineer - Visual tools).
    BI Architecture Master Class for Partners - Oracle Architecture Unplugged. When: June 21, 2012. Where: City Office, London, UK. This highly interactive workshop is aimed at Oracle OPN member partners who are IT Architects and BI+W specialists; it does not involve slide presentations or product feature details, and addresses IT-architectural issues and considerations for the IT-Architect community.
    Oracle Fusion Middleware Innovation Awards | Oracle Excellence Awards. Share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation. You just might win a free pass to Oracle OpenWorld 2012 in San Francisco. Deadline for submissions is July 17, 2012.
    Oracle Service Bus 11g: listing projects and services with WLST - part 1 | Michel Schildmeijer. "For automating and repetitive purposes, as well for uniformity it's always good to have some scripting," says Michel Schildmeijer.
    Creating an Oracle Endeca Information Discovery 2.3 Application Part 3: Creating the User Interface | Mark Rittman. Oracle ACE Director Mark Rittman continues his article series.
    WebLogic Advisor WebCasts On-Demand. A series of videos by WebLogic experts, available to those with access to support.oracle.com.
    Integrating OBIEE 11g into WebLogic’s SAML SSO | Andre Correa. A-Team blogger Andre Correa illustrates a transient federation scenario.
    InfoQ: Cloud 2017: Cloud Architectures in 5 Years. Andrew Phillips, Mark Holdsworth, Martijn Verburg, Patrick Debois, and Richard Davies review the evolution of cloud computing so far and look five years into the future.
    Thought for the Day: "One cannot make an omelet without breaking eggs – but it is amazing how many eggs one can break without making a decent omelet." (Charles P. Issawi) Source: softwarequotes.com

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-06

    - by Bob Rhubart
    Creating an Oracle Endeca Information Discovery 2.3 Application Part 3: Creating the User Interface | Mark Rittman. Oracle ACE Director Mark Rittman continues his article series.
    WebLogic Advisor WebCasts on-demand. A series of videos by WebLogic experts, available to those with access to support.oracle.com.
    Integrating OBIEE 11g into WebLogic’s SAML SSO | Andre Correa. A-Team blogger Andre Correa illustrates a transient federation scenario.
    InfoQ: Cloud 2017: Cloud Architectures in 5 Years. Andrew Phillips, Mark Holdsworth, Martijn Verburg, Patrick Debois, and Richard Davies review the evolution of cloud computing so far and look five years into the future.
    Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12. These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Submission deadline: July 17. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco.
    SOA Analysis within the Department of Defense Architecture Framework (DoDAF) 2.0 - Part II | Dawit Lessanu. The conclusion of Lessanu's two-part series for Service Technology Magazine.
    Driving from Business Architecture to Business Process Services | Hariharan V. Ganesarethinam. "The perfect mixture of EA, SOA and BPM make enterprise IT highly agile so it can quickly accommodate dynamic business strategies, alignments and directions," says Ganesarethinam. "However, there should be a structured approach to drive enterprise architecture to service-oriented architecture and business processes."
    Book Review: Oracle Application Integration Architecture (AIA) Foundation Pack 11gR1: Essentials | Rajesh Raheja. Rajesh Raheja reviews the new AIA book from Packt Publishing.
    ODTUG Kscope12 - June 24-28 - San Antonio, TX. Kscope12, sponsored by ODTUG, is your home for Application Express, BI and Oracle EPM, Database Development, Fusion Middleware, and MySQL training by the best of the best!
    Oracle Enterprise Manager Ops Center 12c: Enterprise Controller High Availability (EC HA) | Mahesh Sharma. Mahesh Sharma describes EC HA, looks at the prerequisites, and shares screen shots.
    The right way to transform your business via the cloud | David Linthicum. A couple of quick tests will show you where you need to focus your transition efforts.
    Thought for the Day: "Most software isn't designed. Rather, it emerges from the development team like a zombie emerging from a bubbling vat of Research and Development juice. When a discipline is hugging the ragged edge of technology, this might be expected, but most of today's software is comprised of mostly 'D' and very little 'R'." (Alan Cooper) Source: softwarequotes.com

    Read the article
