Search Results

Search found 4263 results on 171 pages for 'mark elder'.


  • Methods of learning / teaching programming

    - by Mark Avenius
    When I was in school, I had a difficult time getting into programming because of a catch-22 in the learning process: I didn't know how to write anything because I didn't know what the keywords and commands meant. For example (as a student, I would think), "what does this using namespace std; thing do, anyway?" And I didn't know what the keywords and commands meant because I hadn't written anything. This basically led to my spending countless long nights cursing the compiler as I made minor tweaks to my assignments until they would compile (and hopefully perform whatever operation they were supposed to). Is there a teaching/learning method that anyone uses that gets around this catch-22? I am trying to keep this non-argumentative, which is why I don't want to know the 'best' method, but rather which methods exist.

    Read the article

  • Finding the Right Solution to Source and Manage Your Contractors

    - by mark.rosenberg(at)oracle.com
    Many of our PeopleSoft Enterprise applications customers operate in service-based industries, and all of our customers have at least some internal service units, such as IT, marketing, and facilities. Employing the services of contractors, often referred to as "contingent labor," to deliver either or both internal and external services is common practice. As we've transitioned from an industrial age to a knowledge age, talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services. Talent comes from many sources and the rise in the contingent worker (contractor, consultant, temporary, part time) has increased significantly in the past decade and is expected to reach 40 percent in the next decade. Managing the total pool of talent in a seamless integrated fashion not only saves organizations money and increases efficiency, but creates a better place for workers of all kinds to work. Although the term "contingent labor" is frequently used to describe both contractors and employees who have flexible schedules and relationships with an organization, the remainder of this discussion focuses on contractors. The term "contingent labor" is used interchangeably with "contractor." Recognizing the importance of contingent labor, our PeopleSoft customers often ask our team, "What Oracle vendor management system (VMS) applications should I evaluate for managing contractors?" In response, I thought it would be useful to describe and compare the three most common Oracle-based options available to our customers. They are:   The enterprise licensed software model in which you implement and utilize the PeopleSoft Services Procurement (sPro) application and potentially other PeopleSoft applications;  The software-as-a-service model in which you gain access to a derivative of PeopleSoft sPro from an Oracle Business Process Outsourcing Partner; and  The managed service provider (MSP) model in which staffing industry professionals utilize either your enterprise licensed software or the software-as-a-service application to administer your contingent labor program. At this point, you may be asking yourself, "Why three options?" The answer is that since there is no "one size fits all" in terms of talent, there is also no "one size fits all" for effectively sourcing and managing contingent workers. Various factors influence how an organization thinks about and relates to its contractors, and each of the three Oracle-based options addresses an organization's needs and preferences differently. For the purposes of this discussion, I will describe the options with respect to (A) pricing and software provisioning models; (B) control and flexibility; (C) level of engagement with contractors; and (D) approach to sourcing, employment law, and financial settlement. Option 1:  Enterprise Licensed Software In this model, you purchase from Oracle the license and support for the applications you need. Typically, you license PeopleSoft sPro as your VMS tool for sourcing, monitoring, and paying your contract labor. In conjunction with sPro, you can also utilize PeopleSoft Human Capital Management (HCM) applications (if you do not already) to configure more advanced business processes for recruiting, training, and tracking your contractors. 
Many customers choose this enterprise license software model because of the functionality and natural integration of the PeopleSoft applications and because the cost for the PeopleSoft software is explicit. There is no fee per transaction to source each contractor under this model. Our customers that employ contractors to augment their permanent staff on billable client engagements often find this model appealing because there are no fees to affect their profit margins. With this model, you decide whether to have your own IT organization run the software or have the software hosted and managed by either Oracle or another application services provider. Your organization, perhaps with the assistance of consultants, configures, deploys, and operates the software for managing your contingent workforce. This model offers you the highest level of control and flexibility since your organization can configure the contractor process flow exactly to your business and security requirements and can extend the functionality with PeopleTools. This option has proven very valuable and applicable to our customers engaged in government contracting because their contingent labor management practices are subject to complex standards and regulations. Customers find a great deal of value in the application functionality and configurability the enterprise licensed software offers for managing contingent labor. Some examples of that functionality are... The ability to create a tiered network of preferred suppliers including competencies, pricing agreements, and elaborate candidate management capabilities. Configurable alerts and online collaboration for bid, resource requisition, timesheet, and deliverable entry, routing, and approval for both resource and deliverable-based services. The ability to manage contractors with the same PeopleSoft HCM and Projects applications that are used to manage the permanent workforce. Because it allows you to utilize much of the same PeopleSoft HCM and Projects application functionality for contractors that you use for permanent employees, the enterprise licensed software model supports the deepest level of engagement with the contingent workforce. For example, you can: fill job openings with contingent labor; guide contingent workers through essential safety and compliance training with PeopleSoft Enterprise Learning Management; and source contingent workers directly to project-based assignments in PeopleSoft Resource Management and PeopleSoft Program Management. This option enables contingent workers to collaborate closely with your permanent staff on complex, knowledge-based efforts - R&D projects, billable client contracts, architecture and engineering projects spanning multiple years, and so on. With the enterprise licensed software model, your organization maintains responsibility for the sourcing, onboarding (including adherence to employment laws), and financial settlement processes. This means your organization maintains on staff or hires the expertise in these domains to utilize the software and interact with suppliers and contractors. Option 2:  Software as a Service (SaaS) The effort involved in setting up and operating VMS software to handle a contingent workforce leads many organizations to seek a system that can be activated and configured within a few days and for which they can pay based on usage. Oracle's Business Process Outsourcing partner, Provade, Inc., provides exactly this option to our customers. 
Provade offers its vendor management software as a service over the Internet and usually charges your organization a fee that is a percentage of your total contingent labor spending processed through the Provade software. (Percentage of spend is the predominant fee model, although not the only one.) In addition to lower implementation costs, the effort of configuring and maintaining the software is largely upon Provade, not your organization. This can be very appealing to IT organizations that are thinly stretched supporting other important information technology initiatives. Built upon PeopleSoft sPro, the Provade solution is tailored for simple and quick deployment and administration. Provade has added capabilities to clone users rapidly and has simplified business documents, like work orders and change orders, to facilitate enterprise-wide, self-service adoption with little to no training. Provade also leverages Oracle Business Intelligence Enterprise Edition (OBIEE) to provide integrated spend analytics and dashboards. Although pure customization is more limited than with the enterprise licensed software model, Provade offers a very effective option for organizations that are regularly on-boarding and off-boarding high volumes of contingent staff hired to perform discrete support tasks (for example, order fulfillment during the holiday season, hourly clerical work, desktop technology repairs, and so on) or project tasks. The software is very configurable and at the same time very intuitive to even the most computer-phobic users. The level of contingent worker engagement your organization can achieve with the Provade option is generally the same as with the enterprise licensed software model since Provade can automatically establish contingent labor resources in your PeopleSoft applications. Provade has pre-built integrations to Oracle's PeopleSoft and the Oracle E-Business Suite procurement, projects, payables, and HCM applications, so that you can evaluate, train, assign, and track contingent workers like your permanent employees. Similar to the enterprise licensed software model, your organization is responsible for the contingent worker sourcing, administration, and financial settlement processes. This means your organization needs to maintain the staff expertise in these domains. Option 3:  Managed Services Provider (MSP) Whether you are using the enterprise licensed model or the SaaS model, you may want to engage the services of sourcing, employment, payroll, and financial settlement professionals to administer your contingent workforce program. Firms that offer this expertise are often referred to as "MSPs," and they are typically staffing companies that also offer permanent and temporary hiring services. (In fact, many of the major MSPs are Oracle applications customers themselves, and they utilize the PeopleSoft Solution for the Staffing Industry to run their own business operations.) Usually, MSPs place their staff on-site at your facilities, and they can utilize either your enterprise licensed PeopleSoft sPro application or the Provade VMS SaaS software to administer the network of suppliers providing contingent workers. When you utilize an MSP, there is a separate fee for the MSP's service that is typically funded by the participating suppliers of the contingent labor. Also in this model, the suppliers of the contingent labor (not the MSP) usually pay the contingent labor force. 
With an MSP, you are intentionally turning over business process control for the advantages associated with having someone else manage the processes. The software option you choose will to a certain extent affect your process flexibility; however, the MSPs are often able to adapt their processes to the unique demands of your business. When you engage an MSP, you will want to give some thought to the level of engagement and "partnering" you need with your contingent workforce. Because the MSP acts as an intermediary, it can be very valuable in handling high volume, routine contracting for which there is a relatively low need for "partnering" with the contingent workforce. However, if your organization (or part of your organization) engages contingent workers for high-profile client projects that require diplomacy, intensive amounts of interaction, and personal trust, introducing an MSP into the process may prove less effective than handling the process with your own staff. In fact, in many organizations, it is common to enlist an MSP to handle contractors working on internal projects and to have permanent employees handle the contractor relationships that affect the portion of the services portfolio focused on customer-facing, billable projects. One of the key advantages of enlisting an MSP is that you do not have to maintain the expertise required for orchestrating the sourcing, hiring, and paying of contingent workers.  These are the domain of the MSPs. If your own staff members are not prepared to manage the essential "overhead" processes associated with contingent labor, working with an MSP can make solid business sense. Proper administration of a contingent workforce can make the difference between project success and failure, operating profit and loss, and legal compliance and fines. Concluding Thoughts There is little doubt that thoughtfully and purposefully constructing a service delivery strategy that leverages the strengths of contingent workers can lead to better projects, deliverables, and business results. What requires a bit more thinking is determining the platform (or platforms) that will enable each part of your organization to best deliver on its mission.

    Read the article

  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
    In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to enable moving a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability and a tip for easily moving models from one machine to another using only Oracle Database, not an external file transport mechanism, such as FTP. In the first scenario, consider a company with geographically distributed business units, each collecting and managing their data locally for the products they sell. Each business unit has in-house data analysts that build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1. Figure 1: Target instance importing multiple remote models for local scoring. In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health / tumor data, along with the clinical diagnosis. Instead of each hospital building its own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctors' offices, which can score patient data locally. Figure 2: Multiple target instances importing the same model from a source instance for local scoring. Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post. To create a database link to the source database from the target, we can use, for example: create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1'; Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file. From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension. This is because export_model appends "01" to this name.  Next, connect to the local database, invoke DBMS_FILE_TRANSFER.GET_FILE, and import the model. Note that "01" is eliminated in the target system file name.  connect <source_schema>/<password>@SOURCE1_LINK; BEGIN  DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',                                 'MY_SOURCE_DIR_OBJECT',                                 'name =''MY_MINING_MODEL'''); END; connect <target_schema>/<password>; BEGIN  DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT',                               'EXPORT_FILE_NAME' || '01.dmp',                               'SOURCE1_LINK',                               'MY_TARGET_DIR_OBJECT',                               'EXPORT_FILE_NAME' || '.dmp' );  DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',                                 'MY_TARGET_DIR_OBJECT'); END; To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target. 
For example, utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package. set.seed(1) d <- data.frame(year = rep(2000:2014, each = 3),         count = round(runif(45, 0, 20))) dim(d) library(plyr) This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame. # Example 1 res <- ddply(d, "year", function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(cv.count = cv)   }) To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame. D <- ore.push(d) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   sd.count <- sd(x$count)   cv <- sd.count/mean.count   data.frame(year=x$year[1], cv.count = cv)   }, FUN.VALUE=data.frame(year=1, cv.count=1)) You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when the data is large. R> class(res) [1] "ore.frame" attr(,"package") [1] "OREbase" R> head(res)    year cv.count 1 2000 0.3984848 2 2001 0.6062178 3 2002 0.2309401 4 2003 0.5773503 5 2004 0.3069680 6 2005 0.3431743 To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year. 
    # Example 2 ddply(d, "year", summarise, mean.count = mean(count)) res <- ore.groupApply (D, D$year, function(x) {   mean.count <- mean(x$count)   data.frame(year=x$year[1], mean.count = mean.count)   }, FUN.VALUE=data.frame(year=1, mean.count=1)) R> head(res)    year mean.count 1 2000 7.666667 2 2001 13.333333 3 2002 15.000000 4 2003 3.000000 5 2004 12.333333 6 2005 14.666667 Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame. # Example 3 ddply(d, "year", transform, total.count = sum(count)) res <- ore.groupApply (D, D$year, function(x) {   total.count <- sum(x$count)   data.frame(year=x$year[1], count=x$count, total.count = total.count)   }, FUN.VALUE=data.frame(year=1, count=1, total.count=1)) > head(res)    year count total.count 1 2000 5 23 2 2000 7 23 3 2000 11 23 4 2001 18 40 5 2001 4 40 6 2001 18 40 In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns. # Example 4 ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),       cv = sigma/mu) res <- ore.groupApply (D, D$year, function(x) {   mu <- mean(x$count)   sigma <- sd(x$count)   cv <- sigma/mu   data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)   }, FUN.VALUE=data.frame(year=1, count=1, mu=1,sigma=1,cv=1)) R> head(res)    year count mu sigma cv 1 2000 5 7.666667 3.055050 0.3984848 2 2000 7 7.666667 3.055050 0.3984848 3 2000 11 7.666667 3.055050 0.3984848 4 2001 18 13.333333 8.082904 0.6062178 5 2001 4 13.333333 8.082904 0.6062178 6 2001 18 13.333333 8.082904 0.6062178 In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data. # Example 5 baseball.dat <- subset(baseball, year > 2000) # data from the plyr package x <- ddply(baseball.dat, c("year", "team"), summarize,            homeruns = sum(hr)) We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows. BB.DAT <- ore.push(baseball) BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+")) BB.DAT2 <- subset(BB.DAT, year > 2000) X <- ore.groupApply (BB.DAT2, BB.DAT2$index, function(x) {   data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))   }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE) res <- ore.sort(X, by=c("year","team")) R> head(res)    year team homeruns 1 2001 ANA 4 2 2001 ARI 155 3 2001 ATL 63 4 2001 BAL 58 5 2001 BOS 77 6 2001 CHA 63 Our next example is derived from the ggplot function documentation. This illustrates the use of ddply when using the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke this on the database server. The graph will be returned to the client window, as depicted below. 
# Example 6 with ggplot2 library(ggplot2) df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),                  y = rnorm(30)) # Compute sample mean and standard deviation in each group library(plyr) ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y)) # Set up a skeleton ggplot object and add layers: ggplot() +   geom_point(data = df, aes(x = gp, y = y)) +   geom_point(data = ds, aes(x = gp, y = mean),              colour = 'red', size = 3) +   geom_errorbar(data = ds, aes(x = gp, y = mean,                                ymin = mean - sd, ymax = mean + sd),              colour = 'red', width = 0.4) DF <- ore.push(df) ore.tableApply(DF, function(df) {   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4) }) But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element. df2 <- rbind(df,df,df) df2$y <- df2$y + rnorm(nrow(df2)) df2$index <- c(rep(1,300), rep(2,300), rep(3,300)) DF2 <- ore.push(df2) res <- ore.groupApply(DF2, DF2$index, function(df) {   df <- df[,1:2]   library(ggplot2)   library(plyr)   ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))   ggplot() +     geom_point(data = df, aes(x = gp, y = y)) +     geom_point(data = ds, aes(x = gp, y = mean),                colour = 'red', size = 3) +     geom_errorbar(data = ds, aes(x = gp, y = mean,                                  ymin = mean - sd, ymax = mean + sd),                   colour = 'red', width = 0.4)   }, parallel=TRUE) res[[1]] res[[2]] res[[3]] To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • How to ensure images all loaded before I reference in my HTML canvas [closed]

    - by mark stephens
    I want to draw some images on an HTML canvas with context.drawImage(Im1 ,205,18,184,38); In order to make sure an image has loaded, I need to put in code like this, but then I can only draw with it inside the onload callback: var Im1 = new Image(); Im1.src="rechnung11014page1/img/1/Im1.png"; Im1.onload = function() { context.drawImage(Im1 ,205,18,184,38); } Is there a way to load all the images and then execute a block of code using several images?
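    One common way to handle this (a minimal sketch in plain JavaScript, not from the original question: the image paths and the done callback name are placeholder assumptions, and context is assumed to be the 2D canvas context already obtained from the page) is to count pending loads and only draw once every image has fired onload:

    function loadImages(sources, done) {
      // sources: array of image URLs; done: callback that receives the loaded Image objects
      var images = [];
      var remaining = sources.length;
      for (var i = 0; i < sources.length; i++) {
        (function (index) {
          var img = new Image();
          img.onload = function () {
            remaining = remaining - 1;
            if (remaining === 0) {
              done(images); // every image has loaded; it is now safe to draw
            }
          };
          images[index] = img;
          img.src = sources[index]; // set src after wiring onload so cached images still trigger it
        })(i);
      }
    }

    // Usage sketch: draw both images once they are all ready
    loadImages(["img/1/Im1.png", "img/1/Im2.png"], function (imgs) {
      context.drawImage(imgs[0], 205, 18, 184, 38);
      context.drawImage(imgs[1], 10, 60, 184, 38);
    });

    The same idea scales to any number of images; on current browsers it could equally be written with Promises, but the counter approach works everywhere.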

    Read the article

  • Silverlight Cream for December 18, 2010 -- #1012

    - by Dave Campbell
    In this Issue: Mark Monster, Kevin Dockx, Jeremy Likness(-2-,-3-), Timmy Kokke, Den Delimarsky, Mike Snow, Samuel Jack(-2-), and Renuka Prasad(-2-). Above the Fold: Silverlight: "Trigger a Storyboard on ViewModel changes" Mark Monster WP7: "Microsoft Push Notification in Windows Phone 7" Renuka Prasad Shoutouts: SilverlightGal sent me the link to The Silverlight Dossier ... I think it's a pretty good start... additions I'd like to see are ways to submit to the various areas. Michael Crump put up a contest that runs from now to January 1st... Win a set of Infragistics Silverlight Controls with Data Visualization!... pretty cool, Michael! If you visit WynApse.com, you'll see I have a subscription to LearnVisualStudio.net... and now they have posted a batch of WP7 videos... 64 of them to be exact... wow!: New video series From SilverlightCream.com: Trigger a Storyboard on ViewModel changes Mark Monster has a great post up about triggering Storyboard on ViewModel changes using the DataTrigger from Blend... cool stuff, and you can also do GoToStateAction or other actions or build yourowndang Trigger Action... fun awaits! ... sorry it took a while to post, Mark... been a tad overloaded here! Working with the Silverlight Rich Text Box control Kevin Dockx has had a post up for a while at SilverlightShow where he takes a good look at the RichText control and it's various capabilities, including source so you can give it a dance yourself. Lessons Learned in Personal Web Page Part 3: Custom Panel and Listbox Jeremy Likness's part 3 of his Personal Web Page lessons learned is covering the tres-cool 3D Panel he did... and he's got it all explained out... building from scratch via a custom panel and a Listbox control... A Silverlight MVVM Feed Reader from Scratch in 30 Minutes Jeremy Likness has a video tutorial showing building an MVVM/Silverlight feedreader in 30 minutes ... plus a couple mods that he noticed after the fact... beat that HTML5 :) Jounce Part 8: Raising Property Changed In Jeremy Likness's latest post, he has number 8 in his series on his MVVM platform, Jounce. This time he's explaining the property changed notification, has a very cool way of doing it, and some interesting comments from readers. Dependency Injection, MVVM, Ninject and Silverlight Timmy Kokke has a great tutorial up with associated demo project on Dependency Injection in MVVM and Silverlight. Some hidden features in the Windows Phone 7 emulator Den Delimarsky shows how to get some of the hidden features on your WP7 emulator like the Call History, Call Settings, and Details about the numbers. Playing sound effects on Windows Phone 7 Mike Snow's latest tip is playing sound effects on your WP7 ... a little bit of XNA here and there, and badabing, badaboom, you got sound! Day 3 of my “Build a Windows Phone 7 game in 3 days” Challenge Samuel Jack has a couple more posts up about his 'Build a WP7 game in 3 Days' challenge... first up is Day 3 from 8:50 to 22:30 ... wow... long day! ... but he's got something good going now... some good external links also Day 3.5 of my “Build a Windows Phone 7 game in 3 days” Challenge Samuel Jack's 3rd day ended with another half-day added on to put on some finishing touches... again, some good external links... and he finished with this Say hello to Simon Squared, my 3.5 day old WP7 Game Microsoft Push Notification in Windows Phone 7 Renuka Prasad has a bunch of material up that I've not been aware of (how did that happen, people??) ... 
here's the first of a couple of his posts on Code Project ... a very nice tutorial on the Push Notification process... great diagrams and external links. Windows Phone 7 – Toast Notification Using Windows Azure Cloud Service Renuka Prasad has another WP7 post on CodeProject... this one on Toast Notification... and he's using Azure and WCF all rolled into it as well... great diagrams, descriptions and all the code. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Nautilus tags/labels/marks/columns for folders/files

    - by madox2
    Is there any way to mark folders or files with tags (or labels, new columns, or whatever) in Nautilus? It would be nice to sort marked folders or files by these tags. My first idea was to mark folders in my Movie directory with tags such as "seen", "not seen", and "must see". Then I realized it would be useful in any other workspace with any custom tags... Is there any Nautilus extension for this? Or any other file manager which can do this? It might look like this:

    Read the article

  • Real-time Big Data Analytics is a reality for StubHub with Oracle Advanced Analytics

    - by Mark Hornick
    What can you use for a comprehensive platform for real-time analytics? How can you process big data volumes for near-real-time recommendations and dramatically reduce fraud? Learn in this video what Stubhub achieved with Oracle R Enterprise from the Oracle Advanced Analytics option to Oracle Database, and read more on their story here. Advanced analytics solutions that impact the bottom line of a business are challenging due to the range of skills and individuals involved in realizing such solutions. While we hear a lot about the role of the data scientist, that role is but one piece of the puzzle. Advanced analytics solutions also have an operationalization aspect that also requires close proximity to where the transactional activity occurs. The data scientist needs access to the right data with which to model the business problem. This involves IT for data collection, management, and administration, as well as ensuring zero downtime (a website needs to be up 24x7). This also involves working with the data scientist to keep predictive models refreshed with the latest scripts. Integrating advanced analytics solutions into enterprise apps involves not just generating predictions, but supporting the whole life-cycle from data collection, to model building, model assessment, and then outcome assessment and feedback to the model building process again. Application and web interface designers need to take into account how end users will see and use the advanced analytics results, e.g., supporting operations staff that need to handle the potentially fraudulent transactions. As just described, advanced analytics projects can be "complicated" from just a human perspective. The extent to which software can simplify the interactions among users and systems will increase the likelihood of project success. The ability to quickly operationalize advanced analytics projects and demonstrate measurable value, means the difference between a successful project and just a nice research report. By standardizing on Oracle Database and SQL invocation of R, along with in-database modeling as found in Oracle Advanced Analytics, expedient model deployment and zero downtime for refreshing models becomes a reality. Meanwhile, data scientists are also able to explore leading edge techniques available in open source. The Oracle solution propels the entire organization forward to realize the value of advanced analytics.

    Read the article

  • Summit Time!

    - by Ajarn Mark Caldwell
    Boy, how time flies!  I can hardly believe that the 2011 PASS Summit is just one week away.  Maybe it snuck up on me because it’s a few weeks earlier than last year.  Whatever the cause, I am really looking forward to next week.  The PASS Summit is the largest SQL Server conference in the world and a fantastic networking opportunity thrown in for no additional charge.  Here are a few thoughts to help you maximize the week. Networking As Karen Lopez (blog | @DataChick) mentioned in her presentation for the Professional Development Virtual Chapter just a couple of weeks ago, “Don’t wait until you need a new job to start networking.”  You should always be working on your professional network.  Some people, especially technical-minded people, get confused by the term networking.  The first image that used to pop into my head was the image of some guy standing, awkwardly, off to the side of a cocktail party, trying to shmooze those around him.  That’s not what I’m talking about.  If you’re good at that sort of thing, and you can strike up a conversation with some stranger and learn all about them in 5 minutes, and walk away with your next business deal all but approved by the lawyers, then congratulations.  But if you’re not, and most of us are not, I have two suggestions for you.  First, register for Don Gabor’s 2-hour session on Tuesday at the Summit called Networking to Build Business Contacts.  Don is a master at small talk, and at teaching others, and in just those two short hours will help you with important tips about breaking the ice, remembering names, and smooth transitions into and out of conversations.  Then go put that great training to work right away at the Tuesday night Welcome Reception and meet some new people; which is really my second suggestion…just meet a few new people.  You see, “networking” is about meeting new people and being friendly without trying to “work it” to get something out of the relationship at this point.  In fact, Don will tell you that a better way to build the connection with someone is to look for some way that you can help them, not how they can help you. There are a ton of opportunities as long as you follow this one key point: Don’t stay in your hotel!  At the least, get out and go to the free events such as the Tuesday night Welcome Reception, the Wednesday night Exhibitor Reception, and the Thursday night Community Appreciation Party.  All three of these are perfect opportunities to meet other professionals with a similar job or interest as you, and you never know how that may help you out in the future.  Maybe you just meet someone to say HI to at breakfast the next day instead of eating alone.  Or maybe you cross paths several times throughout the Summit and compare notes on different sessions you attended.  And you just might make new friends that you look forward to seeing year after year at the Summit.  Who knows, it might even turn out that you have some specific experience that will help out that other person a few months’ from now when they run into the same challenge that you just overcame, or vice-versa.  But the point is, if you don’t get out and meet people, you’ll never have the chance for anything else to happen in the future. 
One more tip for shy attendees of the Summit…if you can’t bring yourself to strike up conversation with strangers at these events, then at the least, after you sit through a good session that helps you out, go up to the speaker and introduce yourself and thank them for taking the time and effort to put together their presentation.  Ideally, when you do this, tell them WHY it was beneficial to you (e.g. “Now I have a new idea of how to tackle a problem back at the office.”)  I know you think the speakers are all full of confidence and are always receiving a ton of accolades and applause, but you’re wrong.  Most of them will be very happy to hear first-hand that all the work they put into getting ready for their presentation is paying off for somebody. Training With over 170 technical sessions at the Summit, training is what it’s all about, and the training is fantastic!  Of course there are the big-name trainers like Paul Randall, Kimberly Tripp, Kalen Delaney, Itzik Ben-Gan and several others, but I am always impressed by the quality of the training put on by so many other “regular” members of the SQL Server community.  It is amazing how you don’t have to be a published author or otherwise recognized as an “expert” in an area in order to make a big impact on others just by sharing your personal experience and lessons learned.  I would rather hear the story of, and lessons learned from, “some guy or gal” who has actually been through an issue and came out the other side, than I would a trained professor who is speaking just from theory or an intellectual understanding of a topic. In addition to the three full days of regular sessions, there are also two days of pre-conference intensive training available.  There is an extra cost to this, but it is a fantastic opportunity.  Think about it…you’re already coming to this area for training, so why not extend your stay a little bit and get some in-depth training on a particular topic or two?  I did this for the first time last year.  I attended one day of extra training and it was well worth the time and money.  One of the best reasons for it is that I am extremely busy at home with my regular job and family, that it was hard to carve out the time to learn about the topic on my own.  It worked out so well last year that I am doubling up and doing two days or “pre-cons” this year. And then there are the DVDs.  I think these are another great option.  I used the online schedule builder to get ready and have an idea of which sessions I want to attend and when they are (much better than trying to figure this out at the last minute every day).  But the problem that I have run into (seems this happens every year) is that nearly every session block has two different sessions that I would like to attend.  And some of them have three!  ACK!  That won’t work!  What is a guy supposed to do?  Well, one option is to purchase the DVDs which are recordings of the audio and projected images from each session so you can continue to attend sessions long after the Summit is officially over.  Yes, many (possibly all) of these also get posted online and attendees can access those for no extra charge, but those are not necessarily all available as quickly as the DVD recording are, and the DVDs are often more convenient than downloading, especially if you want to share the training with someone who was not able to attend in person. Remember, I don’t make any money or get any other benefit if you buy the DVDs or from anything else that I have recommended here.  
These are just my own thoughts, trying to help out based on my experiences from the 8 or so Summits I have attended.  There is nothing like the Summit.  It is an awesome experience, fantastic training, and a whole lot of fun which is just compounded if you’ll take advantage of the first part of this article and make some new friends along the way.

    Read the article

  • SOA online seminar by Griffiths Waite &ndash; adopt Fusion Applications patterns today

    - by Jürgen Kress
    Our SOA Specialized partner Griffiths Waite has developed a series of Oracle Fusion Middleware online seminars. Mark Simpson, Oracle ACE Director, gives an insight into the Oracle strategy: how Oracle is using Fusion Middleware to build Fusion Applications, and how your own projects can profit from the Fusion Architecture. He gives examples of how customers can adopt use cases for Application Integration, Composite Application Portals, Application Modernization, and Business Process Management. If you are interested, make sure you watch the online seminar and take the SOA Maturity Assessment. For more information on SOA Specialization and the SOA Partner Community, please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: Mark Simpson,Griffiths Waite,Fusion Middleware,Fusion Applications,SOA,Oracle,SOA Community,OPN,SOA Specialization,Specialization,Jürgen Kress

    Read the article

  • How can I make a permanently updated copy of a file in a different place to the original file?

    - by Mark
    I use two computers: a Linux one for coding and building, and a Windows one which has the programming application to load the built program onto the hardware. Both computers have access to a network drive which I use to pass the files from Linux to Windows. My problem is that every time I build, I have to copy the files from where they are created to the network drive. How can I make some sort of file on the network drive on Ubuntu that always mirrors the file which is built in a different location, like a pointer? Thanks!
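    A plain symlink on the share is unlikely to help, since the link target only exists on the Linux machine. One pragmatic approach is a small watcher that re-copies the artifact whenever the build updates it. Below is a minimal Node.js sketch (not from the original question; the build directory, mount point, and file name are hypothetical placeholders), using only the built-in fs and path modules:

    // watch-and-copy.js -- re-copy the build artifact to the network drive whenever it changes
    const fs = require("fs");
    const path = require("path");

    const buildDir = "/home/mark/project/build"; // where the build writes its output (assumed)
    const netDir = "/mnt/netdrive/firmware";     // the mounted network drive (assumed)
    const artifact = "program.hex";              // the file the Windows tool needs (assumed)

    function copyArtifact() {
      const src = path.join(buildDir, artifact);
      const dest = path.join(netDir, artifact);
      fs.copyFile(src, dest, function (err) {
        if (err) {
          console.error("copy failed:", err.message);
        } else {
          console.log("copied", src, "->", dest);
        }
      });
    }

    // Copy once at startup, then again whenever the artifact changes.
    copyArtifact();
    fs.watch(buildDir, function (eventType, filename) {
      if (filename === artifact) {
        copyArtifact();
      }
    });

    Running this in the background keeps the copy on the share in sync; an even simpler route is to add the copy as a final step of the build itself (for example, an extra rule in the Makefile).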

    Read the article

  • Fix for EF4 Profiler Issue Coming in next Cumulative Update

    - by Ajarn Mark Caldwell
    Hey!  What do you know?  Microsoft Connect really works! I was very happy this morning to open my email and find a notice from Umachandar on the SQL Programmability Team that they have created a fix for the Odd Profiler Results with EF4 issue that I wrote about last June.  Not only did I blog about it, but I logged an item to Connect with repro steps and sample code.  And now, they have announced that they have a fix for this problem and that it will be included in the next Cumulative Update for SQL Server 2008 R2. For those of you not running 2008 R2, or who prefer to wait for full Service Packs rather than install the latest Cumulative Updates, I also wrote about a workaround for the issue, as long as you do not require the Multiple Active Result Sets feature to be enabled. It is easy with Microsoft to get the feeling that you’re just shouting in the wind, and it is nice to get validation once in a while that they really are listening.

    Read the article

  • Get Smarter Just By Listening

    - by mark.wilcox
    Occasionally my friends ask me what I listen to and read to keep informed. So I thought I would post an update. First - there is an entirely new network being launched by Jason Calacanis called "ThisWeekIn". They have weekly shows on a variety of topics including Startups, Android, Twitter, Cloud Computing, Venture Capital, and now the iPad. If you want to keep ahead (and really get motivated) - I totally recommend listening to at least This Week in Startups. I also find Cloud Computing helpful. I also like listening to the Android show so that I can see how it's progressing. Because while I love my iPhone/iPad - it's important to keep the competition in the game to improve everything. I'm also not opposed to switching to Android if something becomes as nice an experience - but so far, my take on Android devices is this: 10 years ago, I would have jumped all over them because of their hackability. But now, I'm in a phase where I just want these devices to work, and most of my creation is in non-programming areas - I find the i* experience better. Second - In terms of general entertaining tech news - I'm a big fan of This Week in Tech. Finally - For a non-geek but very informative show - The Kevin Pollack Show on the ThisWeekIn network gets my highest rating. It's basically two hours of in-depth interviews with a wide variety of well-known comedians and movie stars. -- Posted via email from Virtual Identity Dialogue

    Read the article

  • Can't complete dropbox installation from behind proxy

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to be to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the software centre) but once I click OK from the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives thus far: Added mj22:**@proxy.waikato.ac.nz:80 information to network proxy settings under network in settings. Added http_host and http_port variables under gconf-editor-system-proxy. Added 'host', 'authentication_password', 'authentication_user' and ticked 'user authentication' and 'use_http_proxy' under gconf-editor-system-http_proxy. Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc. Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting Software Center retrieve packages). (where ** is my password) I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and Software Centre can download packages, but that's it. Related issues: The software centre can't fetch reviews (but can download packages). When trying to add an online account in Gnome 3, a dialog pop-up appears with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)". Updates: After some time (10 mins or so) Dropbox shows an error dialog box that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?

    Read the article

  • Configuration "diff" across Oracle WebCenter Sites instances

    - by Mark Fincham-Oracle
    Problem Statement With many Oracle WebCenter Sites environments - how do you know if the various configuration assets and settings are in sync across all of those environments? Background At Oracle we typically have a "W" shaped set of environments.  For the "Production" environments we typically have a disaster recovery clone as well, and sometimes additional QA environments alongside the production management environment. In the case of www.java.com we have 10 different environments. All configuration assets/settings (CSElements, Templates, Start Menus etc..) start life on the Development Management environment and are then published downstream to other environments as part of the software development lifecycle. Ensuring that each of these 10 environments has the same set of Templates, CSElements, StartMenus, TreeTabs etc.. is impossible to do efficiently without automation. Solution Summary  The solution comprises two components. A JSON data feed from each environment. A simple HTML page that consumes these JSON data feeds.  Data Feed: Create a JSON WebService on each environment. The WebService is no more than a SiteEntry + CSElement. The CSElement queries various DB tables to obtain details of the assets/settings, returning this data as a JSON feed. Report: Create a simple HTML page that uses jQuery to fetch the JSON feed from each environment and display the results in a table. Since all assets (CSElements, Templates etc..) are published between environments they will have the same last modified date. If the last modified date of an asset is different in the JSON feed, or is missing from an environment entirely, then highlight that in the report table. Example Solution Details Step 1: Create a Site Entry + CSElement that outputs JSON Site Entry & CSElement Setup  The SiteEntry should be uncached so that the most recent configuration information is returned at all times. In the CSElement set the contenttype accordingly: Step 2: Write the CSElement Logic The basic logic, which we repeat for each asset or setting we are interested in, is to query the DB using <ics:sql> and then loop over the resultset with <ics:listloop>. For example: <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" /> "templates": [ <ics:listloop listname="TemplateList"> {"name":"<ics:listget listname="TemplateList"  fieldname="name"/>", "modified":"<ics:listget listname="TemplateList"  fieldname="updateddate"/>"}, </ics:listloop> ], A comprehensive list of SQL queries to fetch each configuration asset/setting can be seen in the appendix at the end of this article. For the generation of the JSON data structure you could use Jettison (the library ships with the 11.1.1.8 version of the product), native Java 7 capabilities, or (as the above example demonstrates) you could roll your own JSON output, but that is not advised. Step 3: Create an HTML Report The JavaScript logic looks something like this: 
1) Create a list of JSON feeds to fetch: ENVS['dev-mgmngt'] = 'http://dev-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json'; ENVS['dev-dlvry'] = 'http://dev-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';  ENVS['test-mngmnt'] = 'http://test-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json';  ENVS['test-dlvry'] = 'http://test-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';   2) Create a function to get the JSON feeds: function getDataForEnvironment(url){ return $.ajax({ type: 'GET', url: url, dataType: 'jsonp', beforeSend: function (jqXHR, settings){ jqXHR.originalEnv = env; jqXHR.originalUrl = url; }, success: function(json, status, jqXHR) { console.log('....success fetching: ' + jqXHR.originalUrl); // store the returned data in ALLDATA ALLDATA[jqXHR.originalEnv] = json; }, error: function(jqXHR, status, e) { console.log('....ERROR: Failed to get data from [' + url + '] ' + status + ' ' + e); } }); } 3) Fetch each JSON feed: for (var env in ENVS) { console.log('Fetching data for env [' + env +'].'); var promisedData = getDataForEnvironment(ENVS[env]); promisedData.success(function (data) {}); }  4) For each configuration asset or setting create a table in the report For example, CSElements: 1) Get a list of unique CSElement names from all of the returned JSON data. 2) For each unique CSElement name, create a row in the table  3) Select 1 environment to represent the master or ideal state (e.g. "Everything should be like Production Delivery") 4) For each environment, compare the last modified date of this envs CSElement to the master. Highlight any differences in last modified date or missing CSElements. 5) Repeat...    Appendix This section contains various SQL statements that can be used to retrieve configuration settings from the DB.  
Templates  <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" /> CSElements <ics:sql sql="SELECT name,updateddate FROM CSElement WHERE status != 'VO'" listname="CSEList" table="CSElement" /> Start Menus <ics:sql sql="select sm.id, sm.cs_name, sm.cs_description, sm.cs_assettype, sm.cs_assetsubtype, sm.cs_itemtype, smr.cs_rolename, p.name from StartMenu sm, StartMenu_Sites sms, StartMenu_Roles smr, Publication p where sm.id=sms.ownerid and sm.id=smr.cs_ownerid and sms.pubid=p.id order by sm.id" listname="startList" table="Publication,StartMenu,StartMenu_Roles,StartMenu_Sites"/>  Publishing Configurations <ics:sql sql="select id, name, description, type, dest, factors from PubTarget" listname="pubTargetList" table="PubTarget" /> Tree Tabs <ics:sql sql="select tt.id, tt.title, tt.tooltip, p.name as pubname, ttr.cs_rolename, ttsect.name as sectname from TreeTabs tt, TreeTabs_Roles ttr, TreeTabs_Sect ttsect,TreeTabs_Sites ttsites LEFT JOIN Publication p  on p.id=ttsites.pubid where p.id is not null and tt.id=ttsites.ownerid and ttsites.pubid=p.id and tt.id=ttr.cs_ownerid and tt.id=ttsect.ownerid order by tt.id" listname="treeTabList" table="TreeTabs,TreeTabs_Roles,TreeTabs_Sect,TreeTabs_Sites,Publication" />  Filters <ics:sql sql="select name,description,classname from Filters" listname="filtersList" table="Filters" /> Attribute Types <ics:sql sql="select id,valuetype,name,updateddate from AttrTypes where status != 'VO'" listname="AttrList" table="AttrTypes" /> WebReference Patterns <ics:sql sql="select id,webroot,pattern,assettype,name,params,publication from WebReferencesPatterns" listname="WebRefList" table="WebReferencesPatterns" /> Device Groups <ics:sql sql="select id,devicegroupsuffix,updateddate,name from DeviceGroup" listname="DeviceList" table="DeviceGroup" /> Site Entries <ics:sql sql="select se.id,se.name,se.pagename,se.cselement_id,se.updateddate,cse.rootelement from SiteEntry se LEFT JOIN CSElement cse on cse.id = se.cselement_id where se.status != 'VO'" listname="SiteList" table="SiteEntry,CSElement" /> Webroots <ics:sql sql="select id,name,rooturl,updatedby,updateddate from WebRoot" listname="webrootList" table="WebRoot" /> Page Definitions <ics:sql sql="select pd.id, pd.name, pd.updatedby, pd.updateddate, pd.description, pdt.attributeid, pa.name as nameattr, pdt.requiredflag, pdt.ordinal from PageDefinition pd, PageDefinition_TAttr pdt, PageAttribute pa where pd.status != 'VO' and pa.id=pdt.attributeid and pdt.ownerid=pd.id order by pd.id,pdt.ordinal" listname="pageDefList" table="PageDefinition,PageAttribute,PageDefinition_TAttr" /> FW_Application <ics:sql sql="select id,name,updateddate from FW_Application where status != 'VO'" listname="FWList" table="FW_Application" /> Custom Elements <ics:sql sql="select elementname from ElementCatalog where elementname like 'CustomElements%'" listname="elementList" table="ElementCatalog" />
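    Returning to step 4 of the report above, the comparison itself can be a small piece of JavaScript that walks the union of asset names and flags missing or mismatched last-modified dates. The following is only a rough sketch, not code from the original article: it assumes ALLDATA has been populated as in step 2, that each feed exposes arrays of {name, modified} objects (as the "templates" example in step 2 does), and that the choice of master environment is arbitrary.

    // Compare one asset type (e.g. "templates") across all environments in ALLDATA,
    // using masterEnv as the reference environment.
    function compareAssets(assetType, masterEnv) {
      // 1) Build the union of asset names across all environments.
      var names = {};
      Object.keys(ALLDATA).forEach(function (env) {
        (ALLDATA[env][assetType] || []).forEach(function (a) { names[a.name] = true; });
      });

      // 2) For each name, compare every environment's last-modified date against the master.
      return Object.keys(names).sort().map(function (name) {
        var lookup = function (env) {
          var list = ALLDATA[env][assetType] || [];
          var hit = list.filter(function (a) { return a.name === name; })[0];
          return hit ? hit.modified : null;
        };
        var masterDate = lookup(masterEnv);
        var row = { name: name, status: {} };
        Object.keys(ALLDATA).forEach(function (env) {
          var date = lookup(env);
          if (date === null) {
            row.status[env] = "MISSING";                   // asset absent from this environment
          } else if (date !== masterDate) {
            row.status[env] = "DIFFERENT (" + date + ")";  // out of sync with the master
          } else {
            row.status[env] = "OK";
          }
        });
        return row; // feed each row to whatever code builds and highlights the HTML table
      });
    }

    // Usage sketch, with "dev-dlvry" standing in for whichever environment is treated as the master:
    var templateRows = compareAssets("templates", "dev-dlvry");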

    Read the article

  • Oracle's Vision for the Social-Enabled Enterprise

    - by Peggy Chen
    Register Now Join us for the Webcast. Mon., Sept. 10, 2012 10 a.m. PT / 1 p.m. ET Join the conversation: #oracle and #socbiz Mark Hurd President, Oracle Thomas Kurian Executive Vice President, Product Development, Oracle Reggie Bradford Senior Vice President, Product Development, Oracle Dear Colleague, Smart companies are developing social media strategies to engage customers, gain brand insights, and transform employee collaboration and recruitment. Oracle is powering this transformation with the most comprehensive enterprise social platform that lets you: Monitor and engage in social conversations Collect and analyze social data Build and grow brands through social media Integrate enterprisewide social functionality into a single system Create rich social applications Join Oracle President Mark Hurd and senior Oracle executives to learn more about Oracle’s vision for the social-enabled enterprise. Register now for this Webcast. Copyright © 2012, Oracle and/or its affiliates. All rights reserved. Contact Us | Legal Notices and Terms of Use | Privacy Statement

    Read the article

  • How was Git designed?

    - by Mark Canlas
    My workplace recently switched to Git and I've been loving (and hating!) it. I really do love it, and it is extremely powerful. The only part I hate is that sometimes it's too powerful (and maybe a bit terse/confusing). My question is... How was Git designed? After using it for just a short amount of time, you get the feel that it can handle many obscure workflows that other version control systems could not. But it also feels elegant underneath. And fast! This is no doubt due in part to Linus's talent. But I'm wondering, was the overall design of Git based on something? I've read about BitKeeper but the accounts are scant on technical details. The compression, the graphs, getting rid of revision numbers, emphasizing branching, stashing, remotes... Where did it all come from? Linus really knocked this one out of the park, and on pretty much the first try! It's quite good to use once you're past the learning curve.

    Read the article

  • Email sent via Google via relayhost being marked as spam

    - by Mark H
    Company email is hosted by Google Apps. The in-house company PBX is Elastix. All voicemails received on Elastix extensions are supposed to be emailed by the CentOS server (Postfix) to the employee's email address. Using relayhost in Postfix, I am sending those emails through Google Apps (smtp.gmail.com), but some of these voicemail emails end up in spam. They are sent through Google, to an address hosted by Google - yet they are still flagged as spam. Email sent from the Google Apps interface has no such problem - only mail from the Elastix server does. I've just asked our DNS guys to add SPF records, but is that all that's needed? Some help please!
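    For reference, a minimal sketch of the two pieces involved, assuming the relay authenticates with a Google Apps account; the file paths and domain name below are illustrative assumptions, not details from the post:

        # /etc/postfix/main.cf (illustrative settings for relaying through Google)
        relayhost = [smtp.gmail.com]:587
        smtp_use_tls = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous

        # DNS TXT record for the sending domain (Google's published SPF include)
        example.com.  IN  TXT  "v=spf1 include:_spf.google.com ~all"

    SPF alone may not be enough; enabling DKIM signing in Google Apps and making sure the voicemail messages use a From address in the hosted domain also influence spam filtering.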

    Read the article

  • June 17, 2010 Webcast - 5 Security Tips To Reduce Cost Using Oracle Directory Services

    - by mark.wilcox
    We're delivering another webcast on June 17 (next week!): 5 Security Tips To Reduce Cost Using Oracle Directory Services. Organizations with business units spread around the world face costly and time-consuming security concerns, and many of these companies are forced to deal with increased scrutiny and security demands while resources are reduced. This live webcast focuses on concrete ways IT organizations can use directory services to do more with less.

    Read the article

  • SharePoint Saturday DC

    - by Mark Rackley
    Wow… did you see this thing? 927 attendees? An exhibition hall full of vendors? 94 speakers? 100 sessions?? Insane is a word that comes to mind… SharePoint Saturday DC was definitely epic as far as SharePoint Saturdays go. I got to catch up with a lot of friends and make some new ones.  Met a couple of fans of the blog (hello ladies…;))  Did you know that people actually read this thing? I guess that means I need to stop putting so much garbage on here and post more content. I’ll get right on that as soon as I find out how to add 6 hours to each day. Anyway, once again I did my “Wrapping Your Head Around the SharePoint Beast” session.  I tweaked it even more from Huntsville and presented to a packed room with some people sitting on the floor and standing in the aisles. It was a great crowd, very interactive, and they seemed interested at least. Thank you guys so much for attending, and please feel free to send me any suggestions you have to make the presentation better.  This is one of the presentations that will probably never die. Everyone beginning SharePoint development needs a good introduction and starting point. My goal is to make this THE session to see on the subject. So, a little interesting data about my class: half of the room was brand new to SharePoint, and only one person was using 2010. That tells me that this session still has legs and that 2007 isn’t going anywhere anytime soon.  I know my organization will be using 2007 for at least a couple more years. Oh yeah… the slide deck?  Here it is: SharePoint Saturday DC Slide Deck. So, SharePoint Saturday was truly tremendous, and if you weren’t there you missed out. @meetdux, @usher, and the rest of their crew did a spectacular job. You guys rock and are a huge asset to the community. Thanks for allowing me to speak. What’s up next for me?  I’m so glad you asked…. SHAREPOINT SATURDAY OZARKS IS JUNE 12TH! Although SharePoint Saturday Ozarks on June 12 in Harrison, Arkansas will be a much more intimate event than DC, it promises to be a most memorable one. We’ve got over 30 speakers and sessions, some cool stuff to give away, and we’re going floating down the Buffalo River on the 13th. Let’s see you do THAT in DC.  :) Anyway, I hope to see you there, and I would truly appreciate anything you can do to help publicize the event. We just got internet here in the hills and most people here are still looking for the “any” key….

    Read the article

  • How should I incorporate a hotfix back into a feature branch using gitflow?

    - by Mark Trapp
    I've started using gitflow for a project, and I have an outstanding feature branch as well as a newly created hotfix. Per the gitflow workflow, the hotfix gets applied to both the master and develop branches, but nothing is said or done about extant feature branches. Nevertheless, I'd like to incorporate the hotfix changes back into my feature branch, which as near as I can tell leaves three options: Don't incorporate the changes. If the changes were needed for the feature branch, it should've been part of the feature branch. Merge develop back into the feature branch. This seems to follow the gitflow workflow the best, but would cause out-of-order commits. Rebase the feature branch onto develop. This would preserve commit order but rebasing seems to be completely absent from the general gitflow workflow. What's the best practice here?
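    As a minimal sketch of the second option (the branch name feature/my-feature is hypothetical, not taken from the question), the hotfix reaches the feature branch through develop:

        git checkout develop
        git pull                        # develop already contains the merged hotfix
        git checkout feature/my-feature
        git merge develop               # brings the hotfix into the feature branch

    The rebase variant would replace the final step with `git rebase develop`, trading the merge commit for a rewritten feature-branch history.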

    Read the article

  • Oracle Magazine - OWB 11gR2 and Heterogeneous Databases

    - by David Allan
    There's a nice article titled 'Oracle Warehouse Builder 11g Release 2 and Heterogeneous Databases' by Oracle ACE Director and Rittman Mead Consulting cofounder Mark Rittman in the May/June 2010 Oracle Magazine that covers the heterogeneous database support in OWB 11gR2: http://www.oracle.com/technology/oramag/oracle/10-may/o30bi.html Big thanks to Mark for this write-up. There is an Oracle white paper on the support here, and for examples of this extensibility you can go to the OWB blog archive, where there are quite a few posts. Out of the archive I would recommend the architecture overview, bulk file loading, MySQL open connectivity, and MySQL bulk extract posts, among others.

    Read the article

  • How to change the default editor of a specific file type in JDeveloper

    - by [email protected]
    When you open a file in JDeveloper, the mode that is used as the default might not be what you as a developer want.  If, for example, every time you open a .jsp(x) file you click on the source tab at the bottom of the window so that you can edit the jsp(x) file in source code mode, you may want to consider changing the default editor for that file type.  This is easy to do in the JDeveloper tool preferences and can be a time saver in the long run, since some editors can take a while to start up and if you don't need them often, this would just be lost time.  Here are the steps:
    1. From the JDeveloper menu, select Tools->Preferences...
    2. Select "File Types" in the tree component on the left side of the preferences dialog.
    3. Click on the "Default Editors" tab.
    4. Scroll to the file type you want to change.
    5. In the details section at the bottom of the dialog, use the "Default Editor" select list to change the default to your liking.

    Read the article

  • Source Control and SQL Development – Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software.  But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio.  “So, how does that work?” you may wonder.  Well, let me share some of the details of how we do it where I work… The key to this approach is that everything is done via Transact-SQL script files; either natively written T-SQL, or generated.  My preference is to write all my code by hand, which forces you to become better at your SQL syntax.  But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then you use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system.  You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer, and choosing Tasks, Generate Scripts (see figure 1 to the left).  You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production, and you need to make just a simple change, such as adding a new column or index.  In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button. Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change.  I believe that it is important to actually execute the script rather than just click the Save button because this is your first test that your change script is working and you didn’t somehow lose a portion of the change. As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program. So, now that you have all of these script files, what do you do with them?  Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system.  ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution.  Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database. Once you have your initial set of scripts loaded into source control, then making changes, such as altering a stored procedure, becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control.  Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because this means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.
    And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, do the change, save it, and then go back to source control to check in.  Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide.  It is worth the effort, and this is how I have been doing development for the last several years. Remember that everything that the SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing.  And now I have shown you how you can do it all without spending any extra money.  You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn’t it), and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier, or not, you now have no excuse for not using source control with your SQL development. * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that is followed by a section that assigns permissions.  This allows me to run the same script regardless of whether the procedure previously existed in the database.  If the script was only an ALTER PROCEDURE, then it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it if it did not exist.  There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD.  If you do, then you have broken the integrity of your deployment process because what you deployed to PROD was not exactly the same as what was tested in TEST, so you effectively have now released untested code into PROD.
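    As a rough illustration of the script pattern described in the starred note above — drop if it exists, then create, then reassign permissions — here is a minimal T-SQL sketch with hypothetical object and role names:

        -- Rerunnable deployment script (hypothetical names); safe to execute more than once.
        IF OBJECT_ID(N'dbo.GetCustomerOrders', N'P') IS NOT NULL
            DROP PROCEDURE dbo.GetCustomerOrders;
        GO

        CREATE PROCEDURE dbo.GetCustomerOrders
            @CustomerID INT
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT OrderID, OrderDate
            FROM dbo.Orders
            WHERE CustomerID = @CustomerID;
        END
        GO

        -- Permissions section, reapplied because the DROP removed any existing grants.
        GRANT EXECUTE ON dbo.GetCustomerOrders TO AppRole;
        GO

    Because the same file runs cleanly whether or not the procedure already exists, the identical script can be executed in DEV, TEST, and PROD without modification.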

    Read the article

  • How to use MythBuntu to send TV signal to a 2nd frontend

    - by Mark Preston
    I guess that a MythTV or MythBuntu backend acts as a "server" for the frontends. I have MythBuntu installed. It runs fine: I can tune live TV, hear the sound, etc. To get this to work, I had to configure the Wired Network IPv4 settings to Method: Link-Local Only. The Local Backend IP address is 127.0.0.1, and the info (bottom of screen) says that if there is another frontend, this IP address must be changed.
    1 - Does this mean changed to the IP address of the 2nd frontend?
    2 - What "Method" do I use to support 2 or more frontends?
    3 - I have an ethernet switch which currently "sees" the TV signal and sends it to the computer's ethernet port, where MythBuntu makes use of it.
    4 - How do I set up Myth to send its output (the TV shows) to both televisions?
    If you know of a How-To or website, please give the URL or identifying keywords.

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >