Search Results

Search found 5718 results on 229 pages for 'apply'.


  • Beat the Post-Holiday Blues with a dose of BIWA

    - by mdonohue
    You know it's coming, so why not plan ahead? Come and join like-minded professionals at BIWA Summit 2013. Early bird registration ends December 14th. This event, focused on Business Intelligence, Data Warehousing and Analytics, is hosted by the BIWA SIG of the IOUG on January 9 and 10 at the Hotel Sofitel, near Oracle headquarters in Redwood City, California. Be sure to check out the many featured speakers, including Oracle executives Balaji Yelamanchili, Vaishnavi Sashikanth, and Tom Kyte, as well as sports analyst Ari Kaplan and many others. Hands-on labs will give you the opportunity to try out much of the Oracle software for yourself--be sure to bring a laptop capable of running Windows Remote Desktop. Check out the Schedule page for the list of over 40 sessions on all sorts of BIWA-related topics. See the BIWA Summit 2013 web site for details and be sure to register soon, while early bird rates still apply. Klaus and Nikos will be presenting the ever-popular Getting the Best Performance from your Business Intelligence Publisher Reports and Implementation, and we will run two sessions of the BI Publisher Hands-On Lab for building Reports and Data Models. Hope to see you there.

    Read the article

  • Can't login to Guest Session or Standard Account?

    - by Rory
    Thanks in advance for any help. Ubuntu 12.04. Up until recently my mother has been using the Guest Session when she logs on; now, the Guest Session will not log in. When I try to log in to it, it bounces me back to the login screen. I then tried to log in to a Standard account (Alberta) which I made prior to not being able to log in to Guest, and it turns out I cannot log in to that either - it gives an "Invalid Password" error. So then I tried to change her password from my own account (Rory), which is the master account. Under the Login Options, where the Password option is, it says "Account disabled" and it will not let me change this; I try to apply a password and get no error, but it still says "Account disabled". Then I tried to delete the Alberta account altogether, but got an error. I also tried sudo userdel -r Alberta but got this error: "userdel: cannot lock /etc/shadow; try again later." I don't know what to do now. Any help appreciated.

    Read the article

  • How to set TextureFilter to Point to make example Bloom filter work?

    - by Mr Bell
    I have a simple app that renders some particles, and now I am trying to apply the bloom shader from the XNA samples ( http://create.msdn.com/en-US/education/catalog/sample/bloom ) to it, but I am running into this exception: "XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4." It is thrown when the BloomComponent tries to end the sprite batch in the DrawFullscreenQuad method:

        spriteBatch.Begin(0, BlendState.Opaque, SamplerState.PointWrap, null, null, effect);
        spriteBatch.Draw(texture, new Rectangle(0, 0, width, height), Color.White);
        spriteBatch.End(); // <------- Exception thrown here

    It seems to be related to the pixel shaders that I am using to animate the particles. In a nutshell, I have a Texture2D in Vector4 format that holds particle positions, and another one for velocities. Here is a snippet from that area:

        GraphicsDevice.SetRenderTarget(tempRenderTarget);
        animationEffect.CurrentTechnique = animationEffect.Techniques[technique];
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, SamplerState.PointWrap, DepthStencilState.DepthRead, RasterizerState.CullNone, animationEffect);
        spriteBatch.Draw(randomValues, new Rectangle(0, 0, width, height), Color.White);
        spriteBatch.End();

    When I comment out the code that calls the particle animation pixel shaders, the bloom component runs fine. Is there some state that I need to reset to make the bloom work?
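    One hedged guess (an assumption on my part, not taken from the sample): the Immediate-mode animation pass can leave linear filtering bound on sampler slots that still reference a Vector4 texture when the bloom pass draws. Forcing every slot back to point filtering before the bloom component runs would rule that out:

        // Reset all sampler slots to point filtering so no stale
        // linear-filtered binding remains on a Vector4 texture
        // (16 is the HiDef profile's sampler count).
        for (int i = 0; i < 16; i++)
        {
            GraphicsDevice.SamplerStates[i] = SamplerState.PointClamp;
        }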

    Read the article

  • Oracle SCM at APICS Denver Oct 14-16

    - by Stephen Slade
    Join us in Denver, October 14–16, 2012, for the 2012 APICS International Conference & Expo. One of the world's largest gatherings of supply chain and operations management professionals, APICS provides an annual interactive learning environment for operations and supply chain professionals to lead and apply best practices. For those of you considering attending APICS next month, be sure to keep Oracle Supply Chain applications on your radar. Oracle will again have a prominent position at the annual global conference. Our product booth will have supply chain demonstrations for manufacturing, value chain planning, value chain execution and Agile product lifecycle management offerings. Stop by our booth to register for one of numerous prizes and awards and chat with one of our supply chain product experts. Oracle customers will be presenting at various sessions throughout the event. One of the great stories to be shared is the SUN supply chain transformation. For those interested in moving costs down to the bottom line, this is the session you should attend. http://www.apics.org/sites/conference/2012/home

    Read the article

  • Catch Me If You Can

    - by Knut Vatsendvik
    Suppose you have a proxy-based Web Service using Oracle Service Bus. In a stage in the request pipeline, you are using a Publish action to publish the incoming message to a JMS queue using a Business Service. What if the outbound transport provider throws an exception (outside of your pipeline)? Is your pipeline able to catch the error with an error handler? This situation could occur because of a faulty connection, a suspended queue, or some other reason. Here is the Request Pipeline in our simple test case, with an Error Handler added to the message flow containing a simple Log action. By default, the Publish action will invoke the service in a fire-and-forget fashion. Therefore any exception that occurs in the outbound transport will go unnoticed, as shown in the following Invocation Trace. So what now? In a message flow, you can apply a Routing Options action to modify any or all of the following properties in the outbound request: URI, Quality of Service, Mode, Retry parameters, and Message Priority. Now add the Routing Options action to the Request Action as shown below. Click the Routing Options to display its properties in the Properties View. Select the QoS option to set the Quality of Service element, select Exactly Once to override the default setting, and republish the project. The invocation will now block until the message is completely processed. Trying the same test case as earlier generates the following Invocation Trace, showing that the Error Handler is now triggered.

    Read the article

  • Best way to convert existing project to be open source in GitHub

    - by Tom
    I've been working on a personal closed-source project for some time and would like to make it open source. I've never created my own open source project before, so it will be a good learning experience. I have been using GitHub as source control, so once I've written some decent docs on how to use and develop for it etc., it should be as simple as switching the repo to be public, right? I guess my main question is around licensing. I was thinking of going with the Apache 2.0 licence just because it seems to be widely used. It requires the licence header to be attached to all the source files, but if I do that now then all the other commits in the past will have it missing. Does that mean someone could pull an earlier version and it wouldn't have a licence? Is it best to start a new repo with the initial commit containing all the code with licence headers? Or is there some advanced Git functionality that allows me to apply the licence header to all existing commits somehow? Cheers.
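    For what it's worth, Git can rewrite history to add the headers retroactively, at the cost of changing every commit hash - worth weighing before the repo goes public. A hedged sketch using git filter-branch (add-header.sh is a hypothetical script that prepends the licence header to each source file):

        # Rewrites every commit on every branch; all commit hashes change.
        git filter-branch --tree-filter './add-header.sh' -- --all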

    Read the article

  • Clean Code Developer & Certification in IT - MSCC 21.09.2013

    It was a very busy weekend this time, and quite hectic to organise the second meetup on a Saturday for the Mauritius Software Craftsmanship Community (MSCC), but it was absolutely fun. Below is a brief summary of the topics we spoke about and the new impulses I got. "What a meetup... I was positively impressed. At the beginning I thought that no one would actually show up, but then bit by bit the room got filled. Lots of conversation, great dialogues and fantastic networking between fresh students, experienced students, experienced employees, and self-employed attendees. That's what community is all about!" The above quote was my first reaction shortly after the gathering. And despite being busy during the weekend and yesterday, I took my time to reflect a little on things that happened and statements that were made before writing it here on my blog. Additionally, I was also very curious about possible reactions and blogs from other attendees.

    Reactions from other craftsmen

    Let me quickly give you some links and quotes from others first... "Like Jochen posted on facebook, that was indeed a 5+ hours marathon (maybe 4 hours for me but still) … Wohoo! We’re indeed a bunch of crazy geeks who did not realise how time flew as we dived into the myriad discussions that sprouted. Yet in the end everyone was happy (:" -- Ish on MSCC meetup - The marathon (: "And the 4hours spent @ Talking drums bore its fruit..I was doing something I never did before....reading the borrowed book while walking....and though I was not that familiar with things mentionned in the book...I was skimming,scanning & flipping...reading titles...short paragraphs...and I skipped pages till I reached home." -- Yannick on Mauritius Software Craftsmanship 1st Meet-up "Hi Developers, Just wanted to share with you the meetups i attended last Saturday - [...] - The second meetup is the one hosted by Jochen Kirstätter, the MSCC, where the attendees were Craftsman, no woman, this time - all sharing the same passion of being a developer - even though it is on different platforms(Windows - Windows Phone - Linux - Adobe(yes a designer) - .Net) - but we manage to sit at the same table - sharing developer views and experience in the corporate world - also talking about good practice when coding( where Jochen initiated a discussion on Clean Coding ) i could not stay till the end - but from what i have heard - the longer you stay the more fun you have till 1600. Developers in the Facebook grouping i invite you to stay tuned about the various developer communities popping up - where you can come to share and learn good practices, develop the entrepreneurial spirit, and learn and share your passion about technologies" -- Arnaud on Facebook

    More feedback has been posted on the event directly. So, should I really write more? Wouldn't that spoil the impressions?

    Starting the day with a surprise

    Indeed, I was very pleased to stumble over the existence of Mobile Monday Mauritius on LinkedIn, an association about any kind of mobile app development, mobile gadgets and the latest smartphones on the market. Despite the Monday in their name, they had scheduled their recent meeting on a Saturday between 10:00 and 12:00hrs. Wow, what a coincidence! Let's grab the bull by the horns and pay them an introductory visit. As they chose the Ebene Accelerator at the Orange Tower in Ebene, it was a no-brainer to leave home a bit earlier and stop by. It was quite an experience and fun to talk to the geeks over there.
    Really looking forward to organising something together....

    Arriving at the venue

    As the children got a bit uneasy at the MoMo gathering and I didn't want to disturb them too much, we arrived early at Bagatelle. Well, no problem, as we went for a decent breakfast at Food Lover's Market. Shortly afterwards we went to our venue location, Talking Drums, and prepared the room for the meeting. We only had to take a repro-painting off the wall in order to have a decent area for the projector. All went very smoothly and my two little ones were a great help. Just in time, our first craftsman Avinash arrived on the spot. And then the waiting started... Luckily, not for too long. Bit by bit, more and more IT people came to join our meeting. Meanwhile, I used the time to give a brief introduction about the MSCC in general, what we are (hm, maybe I am) trying to achieve, and that the current phase is completely focused on creating more awareness that a community like the MSCC is active here in Mauritius. As soon as we reached a 'critical mass' of about ten people, I asked everyone for a short introduction and bio, just in case... Conversation between participants started to kick in, and we were actually doing more networking than focusing on our topics of the day: quick updates on the latest news and developments around the MSCC.

    Finally, Clean Code Developer

    No matter what the position is actually called, whether it is Software Engineer, Software Developer, Programmer, Architect, or Craftsman, anyone working in IT is facing almost the same obstacles. As for the process of writing software applications, there are re-occurring patterns and principles combined with common exercises and best practices on how to resolve them. Initiated by the must-read book 'Clean Code' by Robert C. Martin (aka Uncle Bob), the concept of the Clean Code Developer (CCD) was born some years ago. CCD is much like traditional martial arts, where you create awareness of certain principles and learn how to apply practices to improve your style. The CCD initiative recommends indicating your level of knowledge and experience with coloured wrist bands - equivalent to belt colours - for various reasons. Frankly speaking, I think that the biggest advantage here is the obvious recognition of conceptual understanding. For example, take the situation of a team meeting... A member with a higher grade in CCD, say Green grade, sees that there are mainly Red grades to talk to, and adjusts her way of communication to their level of understanding. The choice of words might change, as certain elements of CCD are not yet familiar to all team members. So instead of talking in an abstract way which only Green grades could follow, the whole scenario comes down to Red grade level. Different story, better results... Similar to learning martial arts, we only covered two grades during this occasion - black and red. Most interestingly, there was quite some positive feedback and lots of questions about the principles and practices of the red grade. And we gathered real-world examples from various craftsmen and discussed them. Following the Clean Code Developer Red Grade and some annotations from our meetup:

    CCD Red Grade - Principles
    - Don't Repeat Yourself - DRY
    - Keep It Simple, Stupid (and Short) - KISS
    - Beware of Optimisations!
    - Favour Composition over Inheritance - FCoI

    Interestingly, most of the attendees had already heard those keywords but couldn't really classify or categorize them.
    It's very similar to a situation in which you do not know the particular name for a thing and have to describe it to others... until someone tells you the actual name and suddenly everything is very simple.

    CCD Red Grade - Practices
    - Follow the Boy Scout Rule
    - Root Cause Analysis - RCA
    - Use a Version Control System
    - Apply Simple Refactoring Patterns
    - Reflect Daily

    Introduction to the principles and practices of Clean Code Developer - here: Red Grade

    As for the various to-dos, we commonly agreed that the Boy Scout Rule clearly is not limited to software development or IT administration but applies to daily life in general. Same for root cause analysis, btw. We really had good stories with surprising endings and conclusions. A quick check on who is using a version control system brought more drive into the conversation. Not only did we have people that aren't using any VCS at all, we also had the 'classic' approach of backup folders and naming conventions, as well as the VCS 'junkie' that has to use multiple systems at a time. Just for the record: Git and GitHub seem to be in favour with some of the attendees. Regarding the daily reflection at the end of the day, we came up with an easy solution: wrap it up as a blog entry!

    Certifications in IT

    This is kind of a controversy in IT in general. Is it interesting to go for certifications, or are they completely obsolete? What are the possibilities to get certified? What are the options we have in Mauritius? How would certificates stand compared to other educational tracks like Computer Science or Web Design? The ratio of craftsmen with certifications like MCP, MCTS, CCNA or LPI versus the ones without wasn't in favour of the first group, but there was high interest in the topic itself, and some were really surprised to hear that exam preparations are freely available online, including temporary voucher codes for either discounts or completely free exams. Furthermore, we discussed possible options for forming so-called study groups on specific certificates and organising more frequent meetups in order to learn together. Taking into consideration that we have sponsored access to the video course material of Pluralsight (and now PeepCode as well as TrainSignal), we might give it a try by the end of the year. Current favourites are LPIC Level 1 and one of the Microsoft 70-48x exams.

    Feedback and ideas for the MSCC

    The closing conversations and discussions about how the MSCC is doing, what the possibilities are, and what's (hopefully) going to happen in the future were really fruitful, and I made a couple of mental bullet points which I'm looking forward to tackling together with other craftsmen. Eventually, it might be a good option to elaborate on some issues during one of our weekly Code & Coffee sessions on a Wednesday morning.

    Active discussion on various IT topics like certifications (LPI, MCP, CCNA, etc.) and sharing experience

    Finally, we made it to the end of the planned time. Well, actually the talk was still going, and we continued even after 16:00hrs. Unfortunately, we (the children and I) had to leave for evening activities.

    My summary of the day...

    It was great to have 15 craftsmen in one room. There are hundreds of IT geeks out there in Mauritius, and as the Mauritius Software Craftsmanship Community we still have a lot of work to do to pass on the message to some more key players and companies. Currently, it seems that we are able to attract a good number of students in Computer Science...
but we have a lot more to offer, even or especially for IT people on the job. I'm already looking forward to our next Saturday meetup in the near future. PS: Meetup pictures are courtesy of Nirvan Pagooah. Thanks for sharing...

    Read the article

  • Should a Python programmer learn Ruby?

    - by C J
    Hi! I have been a Python programmer for around 1.5 years (one internship plus side projects), so I am comfortable with the language. But everyone is talking about Ruby these days, and I mean seriously! No one bothers about Python (from what I've seen). See GitHub: all RoR. I apply for a job and they ask me about RoR. I look at the screencasts on peepcode.com and they are in Ruby. gitimmersion.com has all its tutorials in Ruby! I know this is pretty vague, but still... why Ruby? Everyone these days is obsessed with RoR! Why not Python? Anyway, my questions are: Should I learn Ruby? Will learning Ruby, when I already know Python, be, er, complicated for me? Or is it going to be just like learning any other language? Thanks!

    Read the article

  • OBJECT_Name parameters and dbid

    - by steveh99999
    If you've been using SQL Server for a long time, you may be used to using the OBJECT_NAME system function - especially useful for converting table IDs into table names when querying sysobjects and sysindexes. However, if you're an old-school DBA, did you know that since SQL Server 2005 Service Pack 2 it accepts a second parameter, database_id? For example, this can be used to summarize some useful information from sys.dm_exec_query_stats. When reviewing SQL Server performance, it can be useful to look at the most heavily used stored procedures rather than inefficient, less frequently used procedures. Here's a query to summarize performance data on the most heavily used stored procedures across all databases on a server:

        SELECT TOP 20
            DENSE_RANK() OVER (ORDER BY SUM(execution_count) DESC) AS rank,
            OBJECT_NAME(qt.objectid, qt.dbid) AS 'proc name',
            (CASE WHEN qt.dbid = 32767 THEN 'mssqlresource'
                  ELSE DB_NAME(qt.dbid) END) AS 'Database',
            OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) AS 'schema',
            SUM(execution_count) AS 'TotalExecutions',
            SUM(total_worker_time) AS 'TotalCPUTimeMS',
            SUM(total_elapsed_time) AS 'TotalRunTimeMS',
            SUM(total_logical_reads) AS 'TotalLogicalReads',
            SUM(total_logical_writes) AS 'TotalLogicalWrites',
            MIN(creation_time) AS 'earliestPlan',
            MAX(last_execution_time) AS 'lastExecutionTime'
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
        WHERE OBJECT_NAME(qt.objectid, qt.dbid) IS NOT NULL
        GROUP BY OBJECT_NAME(qt.objectid, qt.dbid), qt.dbid,
            OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid)
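    As an aside, here is the two-parameter form in isolation (the AdventureWorks names are illustrative, not from the original post):

        -- Resolve an object name in another database without switching context
        SELECT OBJECT_NAME(OBJECT_ID('AdventureWorks.Person.Address'),
                           DB_ID('AdventureWorks')) AS object_name;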

    Read the article

  • BlitzMax - generating 2D neon glowing line effect to png file

    - by zanlok
    Originally asked on StackOverflow, but it became tumbleweed. I'm looking to create a glowing line effect in BlitzMax, something like a Star Wars lightsaber or laser beam. It doesn't have to be realtime, but just to TImage objects, and then maybe saved to PNG for later use in animation. I'm happy to use 3D features, but it will be for use in a 2D game. Since it will be on a black/space background, my strategy is to draw a series of white blurred lines with color and high transparency, then eventually central lines less blurred and more white. What I want to draw is actually bezier curved lines. Drawing curved lines is easy enough, but I can't use the technique above to create a good laser/neon effect because it comes out looking very segmented. So, I think it may be better to use a blur effect/shader on what does render well, which is a 1-pixel bezier curve. The problem I've been having is applying a shader to just the area of the screen where lines are drawn. If there's a way to draw lines to a texture and then blur that texture and save the PNG, that would be great to hear about. There's got to be a way to do this, but I just haven't gotten the right elements working together yet. Any help from someone familiar with this stuff would be greatly appreciated. Using just 2D calls could be advantageous: simpler to understand and re-use. It would be very nice to know how to save a PNG that preserves the transparency/alpha stuff. P.S. I've reviewed this post (and many, many others on the Blitz site), have samples working, and have even developed my own 5x5 frag shaders. But it's 3D and a scene-wide thing that doesn't seem to convert to 2D or just a certain area very well. I'd rather understand how to apply shading to a 2D scene, especially using the specifics of BlitzMax.

    Read the article

  • Huge procedurally generated 'wilderness' worlds

    - by The Communist Duck
    Hi. I'm sure you all know of games like Dwarf Fortress - massive, procedurally generated wilderness and land. Something like this, taken from this very useful article. However, I was wondering how I could apply this on a much larger scale; the scale of Minecraft comes to mind (isn't that something like 8x the size of the Earth's surface?). Pseudo-infinite is, I think, the best term. The article talks about fractal Perlin noise. I am in no way an expert on it, but I get the general idea (it's some kind of randomly generated noise which is semi-coherent, so not just random pixel values). I could just define regions X by X in size, add some region-loading type stuff, and have one bit of noise generating a region. But this would result in just huge amounts of islands. On the other extreme, I don't think I can really generate a supermassive sheet of Perlin noise. And it would just be one big island, I think. I am pretty sure Perlin noise, or some noise, would be the answer in some way. I mean, the map is really nice looking. And you could replace the ASCII with tiles and get something very nice looking. Anyone have any ideas? Thanks. :D -TheCommieDuck
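    For what it's worth, the usual trick (an assumption on my part, not from the article) is to sample one deterministic noise field in world coordinates, so neighbouring regions agree at their borders and no single giant sheet is ever needed. A rough Python sketch of seamless chunked value noise, with all names invented for illustration:

        import math, random

        def value_noise(x, y, seed=0):
            """Deterministic per-lattice-point values: any chunk sampling the
            same world coordinates gets the same terrain."""
            def lattice(ix, iy):
                random.seed(hash((ix, iy, seed)))
                return random.random()
            x0, y0 = math.floor(x), math.floor(y)
            fx, fy = x - x0, y - y0
            # smoothstep weights for bilinear interpolation
            sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
            top = lattice(x0, y0) * (1 - sx) + lattice(x0 + 1, y0) * sx
            bottom = lattice(x0, y0 + 1) * (1 - sx) + lattice(x0 + 1, y0 + 1) * sx
            return top * (1 - sy) + bottom * sy

        def generate_chunk(cx, cy, size=16, scale=0.05):
            """One size x size region; chunks tile seamlessly because the noise
            is sampled in world space, not chunk-local space."""
            return [[value_noise((cx * size + i) * scale, (cy * size + j) * scale)
                     for i in range(size)] for j in range(size)]

    Layering several octaves of this (halving amplitude, doubling frequency) gives the fractal look, and subtracting a very low-frequency octave lets continents span many regions instead of producing endless small islands.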

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package.

        set.seed(1)
        d <- data.frame(year = rep(2000:2014, each = 3),
                        count = round(runif(45, 0, 20)))
        dim(d)
        library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

        # Example 1
        res <- ddply(d, "year", function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(cv.count = cv)
        })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

        D <- ore.push(d)
        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(year = x$year[1], cv.count = cv)
        }, FUN.VALUE = data.frame(year = 1, cv.count = 1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large.

        R> class(res)
        [1] "ore.frame"
        attr(,"package")
        [1] "OREbase"
        R> head(res)
          year  cv.count
        1 2000 0.3984848
        2 2001 0.6062178
        3 2002 0.2309401
        4 2003 0.5773503
        5 2004 0.3069680
        6 2005 0.3431743

    To make ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.
        # Example 2
        ddply(d, "year", summarise, mean.count = mean(count))

        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          data.frame(year = x$year[1], mean.count = mean.count)
        }, FUN.VALUE = data.frame(year = 1, mean.count = 1))

        R> head(res)
          year mean.count
        1 2000   7.666667
        2 2001  13.333333
        3 2002  15.000000
        4 2003   3.000000
        5 2004  12.333333
        6 2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

        # Example 3
        ddply(d, "year", transform, total.count = sum(count))

        res <- ore.groupApply(D, D$year, function(x) {
          total.count <- sum(x$count)
          data.frame(year = x$year[1], count = x$count, total.count = total.count)
        }, FUN.VALUE = data.frame(year = 1, count = 1, total.count = 1))

        > head(res)
          year count total.count
        1 2000     5          23
        2 2000     7          23
        3 2000    11          23
        4 2001    18          40
        5 2001     4          40
        6 2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

        # Example 4
        ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
              cv = sigma/mu)

        res <- ore.groupApply(D, D$year, function(x) {
          mu <- mean(x$count)
          sigma <- sd(x$count)
          cv <- sigma/mu
          data.frame(year = x$year[1], count = x$count, mu = mu, sigma = sigma, cv = cv)
        }, FUN.VALUE = data.frame(year = 1, count = 1, mu = 1, sigma = 1, cv = 1))

        R> head(res)
          year count        mu    sigma        cv
        1 2000     5  7.666667 3.055050 0.3984848
        2 2000     7  7.666667 3.055050 0.3984848
        3 2000    11  7.666667 3.055050 0.3984848
        4 2001    18 13.333333 8.082904 0.6062178
        5 2001     4 13.333333 8.082904 0.6062178
        6 2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

        # Example 5
        baseball.dat <- subset(baseball, year > 2000)  # data from the plyr package
        x <- ddply(baseball.dat, c("year", "team"), summarize,
                   homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows.

        BB.DAT <- ore.push(baseball)
        BB.DAT$index <- with(BB.DAT, paste(year, team, sep = "+"))
        BB.DAT2 <- subset(BB.DAT, year > 2000)
        X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
          data.frame(year = x$year[1], team = x$team[1], homeruns = sum(x$hr))
        }, FUN.VALUE = data.frame(year = 1, team = "A", homeruns = 1), parallel = FALSE)
        res <- ore.sort(X, by = c("year", "team"))

        R> head(res)
          year team homeruns
        1 2001  ANA        4
        2 2001  ARI      155
        3 2001  ATL       63
        4 2001  BAL       58
        5 2001  BOS       77
        6 2001  CHA       63

    Our next example is derived from the ggplot function documentation. It illustrates the use of ddply together with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke this on the database server. The graph will be returned to the client window, as depicted below.
        # Example 6 with ggplot2
        library(ggplot2)
        df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                         y = rnorm(30))

        # Compute sample mean and standard deviation in each group
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))

        # Set up a skeleton ggplot object and add layers:
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)

        DF <- ore.push(df)
        ore.tableApply(DF, function(df) {
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

        df2 <- rbind(df, df, df)
        df2$y <- df2$y + rnorm(nrow(df2))
        df2$index <- c(rep(1, 30), rep(2, 30), rep(3, 30))  # one value per partition
        DF2 <- ore.push(df2)
        res <- ore.groupApply(DF2, DF2$index, function(df) {
          df <- df[, 1:2]
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        }, parallel = TRUE)
        res[[1]]
        res[[2]]
        res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized with ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • Is it a "pattern smell" to put getters like "FullName" or "FormattedPhoneNumber" in your model?

    - by DanM
    I'm working on an ASP.NET MVC app, and I've been getting into the habit of putting what seem like helpful and convenient getters into my model/entity classes. For example:

        public class Member
        {
            public int Id { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string PhoneNumber { get; set; }

            public string FullName
            {
                get { return FirstName + " " + LastName; }
            }

            public string FormattedPhoneNumber
            {
                get
                {
                    return "(" + PhoneNumber.Substring(0, 3) + ") "
                        + PhoneNumber.Substring(3, 3) + "-"
                        + PhoneNumber.Substring(6);
                }
            }
        }

    I'm wondering what people think about the FullName and FormattedPhoneNumber getters. They make it very easy to create standardized data formats throughout the app, and they seem to save a lot of repeated code, but it could definitely be argued that data format is something that should be handled in mapping from model to view-model. In fact, I was originally applying these data formats in my service layer where I do my mapping, but it was becoming a burden to constantly have to write formatters and then apply them in many different places. E.g., I use "Full Name" in most views, and having to type something like model.FullName = MappingUtilities.GetFullName(entity.FirstName, entity.LastName); all over the place seemed a lot less elegant than just typing model.FullName = entity.FullName (or, if you use something like AutoMapper, potentially not typing anything at all). So, where do you draw the line when it comes to data formatting? Is it "okay" to do data formatting in your model, or is that a "pattern smell"? Note: I definitely do not have any html in my model. I use html helpers for that. I'm strictly talking about formatting or combining data (and especially data that is frequently used).

    Read the article

  • Quaternion Camera

    - by Alex_Hyzer_Kenoyer
    Can someone help me figure out how to use a Quaternion with the PerspectiveCamera in libGDX, or in general? I am trying to rotate my camera around a sphere that is being drawn at (0,0,0). I am not sure how to go about setting up the quaternion correctly, manipulating it, and then applying it to the camera. Edit: Here is what I have tried to do so far.

        // This is how I set it up
        Quaternion orientation = new Quaternion();
        orientation.setFromAxis(Vector3.Y, 45);

        // This is how I am trying to update the rotations
        public void rotateX(float amount) {
            Quaternion temp = new Quaternion();
            temp.set(Vector3.X, amount);
            orientation.mul(temp);
        }

        public void rotateY(float amount) {
            Quaternion temp = new Quaternion();
            temp.set(Vector3.Y, amount);
            orientation.mul(temp);
        }

        public void updateCamera() {
            // This is where I am unsure how to apply the rotations to the camera
            // I think I should update the view and projection matrices?
            camera.view.mul(orientation);
            ...
        }
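    In case it helps, one hedged approach (a sketch, not a confirmed answer) is to leave camera.view alone - libGDX rebuilds it in camera.update() - and instead derive the camera's position from the quaternion each frame, orbiting the sphere at the origin:

        // Rebuild the camera from the orientation quaternion each frame.
        Vector3 offset = new Vector3(0f, 0f, 10f); // assumed orbit radius of 10
        orientation.transform(offset);             // rotate the offset into place
        camera.position.set(offset);
        camera.up.set(Vector3.Y);                  // assumption: world Y stays up
        camera.lookAt(0f, 0f, 0f);                 // aim at the sphere at (0,0,0)
        camera.update();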

    Read the article

  • Duck checker in Python: does one exist?

    - by elliot42
    Python uses duck-typing, rather than static type checking. But many of the same concerns ultimately apply: does an object have the desired methods and attributes? Do those attributes have valid, in-range values? Whether you're writing constraints in code, or writing test cases, or validating user input, or just debugging, inevitably somewhere you'll need to verify that an object is still in a proper state--that it still "looks like a duck" and "quacks like a duck." In statically typed languages you can simply declare "int x", and anytime you create or mutate x, it will always be a valid int. It seems feasible to decorate a Python object to ensure that it is valid under certain constraints, and that every time that object is mutated it is still valid under those constraints. Ideally there would be a simple declarative syntax to express "hasattr length and length is non-negative" (not in those words. Not unlike Rails validators, but less human-language and more programming-language). You could think of this as an ad-hoc interface/type system, or you could think of it as an ever-present object-level unit test. Does such a library exist to declare and validate constraint/duck-checking on Python objects? Is this an unreasonable tool to want? :) (Thanks!) Contrived example:

        rectangle = {'length': 5, 'width': 10}

        # We live in a fictional universe where multiplication is super expensive.
        # Therefore any time we multiply, we need to cache the results.
        def area(rect):
            if 'area' in rect:
                return rect['area']
            rect['area'] = rect['length'] * rect['width']
            return rect['area']

        print area(rectangle)
        rectangle['length'] = 15
        print area(rectangle)  # compare expected vs. actual output!
        # imagine the same thing with object attributes rather than dictionary keys.
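    For the record, real libraries exist in this space - Enthought's Traits, for instance, gives typed attributes that are validated on every assignment. To show the mechanics, here is a minimal hypothetical sketch (Python 3.6+ for __set_name__); the Checked descriptor and its messages are invented for illustration:

        class Checked:
            """Re-validate a constraint on every attribute assignment."""
            def __init__(self, check, message="constraint violated"):
                self.check, self.message = check, message
            def __set_name__(self, owner, name):
                self.name = "_" + name
            def __get__(self, obj, objtype=None):
                return getattr(obj, self.name)
            def __set__(self, obj, value):
                if not self.check(value):
                    raise ValueError("%s: %s" % (self.name[1:], self.message))
                setattr(obj, self.name, value)

        class Rectangle:
            length = Checked(lambda v: v >= 0, "length must be non-negative")
            width = Checked(lambda v: v >= 0, "width must be non-negative")
            def __init__(self, length, width):
                self.length = length  # validated here...
                self.width = width    # ...and on every later mutation

        r = Rectangle(5, 10)
        r.length = 15       # fine, re-checked on assignment
        # r.length = -1     # would raise ValueError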

    Read the article

  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points: Lighting - how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light? Materials - how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?
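    One concrete way to frame the multiplier question (a common convention, offered here as an assumption rather than an authoritative answer): keep the artist-facing picker in LDR sRGB, convert to linear, and express the multiplier in photographic stops, so one stop always doubles the light's intensity. A minimal C sketch:

        #include <math.h>

        /* Convert an sRGB (0..1) picker value plus an exposure-value
           multiplier into a linear HDR intensity. */
        float ldr_to_hdr(float ldr_srgb, float ev)
        {
            float linear = powf(ldr_srgb, 2.2f); /* approximate sRGB -> linear */
            return linear * exp2f(ev);           /* one stop doubles intensity */
        }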

    Read the article

  • Financial Statement Presentation Changes

    - by Theresa Hickman
    On March 10, 2010, the FASB and IASB came to an agreement on financial statement presentation. They have been discussing changes for some time, such as displaying more line items and moving certain line items from equity to the P&L, and now it seems they have finally come to a joint decision and put it in writing. I recently learned that there will be a trend to book nothing to equity and to convey everything in the P&L, to take away any facility for companies to hide losses from shareholders, investors, etc. (I'm exaggerating when I say book nothing to equity. Obviously, those items that already live there, such as stocks and dividends, would stay there.) But accounts like your CTA (Cumulative Translation Adjustment) account, used to plug the gain or loss from equity translation, would move from the equity section to your expense section. The rationale is that when you run translation, you're doing so for a subsidiary that you own; simply put, it's a foreign investment. Thus, the gains/losses of that foreign investment should be itemized on your P&L and not buried in equity. The FASB will include changes in financial statement presentation in its Exposure Draft that is planned for issuance at the end of April 2010. Companies will be required to adopt the financial statement presentation provisions retroactively. Yes, that means companies will need to apply these changes to previously issued financial statements. The FASB Summary of Board Decisions can be found at fasb.org.

    Read the article

  • Functional Languages that compile to Android's Dalvik VM?

    - by Berin Loritsch
    I have a software problem that fits the functional approach to programming, but the target market will be on the Android OS. I ask because there are functional languages that compile to Java's VM, but Dalvik bytecode != Java bytecode. Alternatively, do you know if the dx utility can intelligently convert the .class files generated from functional languages like Scala? Edit: In order to add a bit more helpfulness to the community, and also to help me choose better, can I refine the question a bit? Have you used any alternate languages with Dalvik? Which ones? What are some "gotchas" (problems) that I might run into? Is performance acceptable? By that, I mean the application still feels responsive to the user. I've never done mobile phone development, but I grew up on constrained devices and I'm under no illusion that there is a cost to using non-standard languages with the platform. I just need to know if the cost is such that I should shoe-horn my approach into default language (i.e. apply functional principles in the OOP language).

    Read the article

  • UTF-8 encoding problem with flash mysql and php

    - by alibhp
    Hi. As you may know, I am programming an online game using FLASH. I am connecting my FLASH 8 movie with a MySQL database through PHP. I am doing very well at that, and I have everything working fine. The problems come when I try to insert (using the SQL INSERT function) data into the database that is non-English - in other words, UTF-8 data. I read a lot of articles about that stuff and found and applied the following:

    1. In PHP4, you need to tell PHP to use UTF-8 when using the xml_parser_create() function; in PHP5 that is done automatically. Even so, I told PHP5 to use UTF-8 when calling the function.
    2. Adding the encoding header to the XML sent to PHP from FLASH.
    3. Forcing FLASH to use UTF-8 encoding in the preference options.
    4. Setting the encoding in MySQL to UTF-8 (utf8_unicode_ci with the InnoDB engine).

    I can read and insert the other-language data correctly in phpMyAdmin as well. I did all that in my code, and still I can't insert such data. One more strange thing: when I use the same link that the FLASH movie uses, with the same XML that the FLASH movie creates, in the browser (Google Chrome), the data gets inserted correctly into the database!!!!! I am about to go crazy over this stuff. What am I missing? What causes the problem? Thank you in advance.
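    Given that phpMyAdmin and the browser both behave, one classic suspect (an assumption on my part) is the charset of the PHP-to-MySQL connection itself: if it defaults to latin1, correctly encoded UTF-8 bytes get mangled on INSERT. A hedged PHP 5 sketch, with placeholder credentials:

        <?php
        // Make the MySQL connection itself talk UTF-8 before any INSERT.
        $link = mysql_connect('localhost', 'user', 'pass');
        mysql_select_db('game', $link);
        mysql_set_charset('utf8', $link);  // PHP >= 5.2.3
        // On older PHP: mysql_query("SET NAMES 'utf8'", $link);
        ?>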

    Read the article

  • Panning with the OpenGL Camera / View Matrix

    - by Pris
    I'm gonna try this again. I've been trying to set up a simple camera class with OpenGL, but I'm completely lost and I've made zero progress creating anything useful. I'm using modern OpenGL and the glm library for matrix math. To get the most basic thing I can think of down, I'd like to pan an arbitrarily positioned camera around. That means move it along its own Up and Side axes. Here's a picture of a randomly positioned camera looking at an object: It should be clear what the Up (Green) and Side (Red) vectors on the camera are. Even though the picture shows otherwise, assume that the Model matrix is just the identity matrix. Here's what I do to try and get it to work: Step 1: Create my View/Camera matrix (going to refer to it as the View matrix from now on) using glm::lookAt(). Step 2: Capture mouse X and Y positions. Step 3: Create a translation matrix mapping changes in the X mouse position to the camera's Side vector, and mapping changes in the Y mouse position to the camera's Up vector. I get the Side vector from the first column of the View matrix. I get the Up vector from the second column of the View matrix. Step 4: Apply the translation: viewMatrix = glm::translate(viewMatrix, translationVector); But this doesn't work. I see that the mouse movement is mapped to some kind of perpendicular axes, but they're definitely not moving as you'd expect with respect to the camera. Could someone please explain what I'm doing wrong and point me in the right direction with this camera stuff?
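    One likely culprit (my reading, not a confirmed answer): a lookAt view matrix is the inverse of the camera's transform, so the camera's world-space Side and Up axes are the rows of its rotation block, not the columns, and glm::translate applies the offset to the world rather than the camera. A hedged glm sketch:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Pan the camera along its own Side/Up axes by (dx, dy).
        void pan(glm::mat4& view, float dx, float dy)
        {
            glm::vec3 side(view[0][0], view[1][0], view[2][0]); // rotation row 0
            glm::vec3 up  (view[0][1], view[1][1], view[2][1]); // rotation row 1
            // Translating the world by -offset moves the camera by +offset.
            view = glm::translate(view, -(side * dx + up * dy));
        }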

    Read the article

  • Select tool to minimize JavaScript and CSS size

    - by Michael Freidgeim
    There are multiple ways and techniques to combine and minify JS and CSS files. A good number of links can be found in http://stackoverflow.com/questions/882937/asp-net-script-and-css-compression and in http://www.hanselman.com/blog/TheImportanceAndEaseOfMinifyingYourCSSAndJavaScriptAndOptimizingPNGsForYourBlogOrWebsite.aspx

    There are two major approaches: do it at build time or at run time. In our application there are multiple user controls, each of them requiring different JS or CSS files, and they are loaded dynamically in different combinations. We decided that loading all JS or CSS files for every page is not a good idea; each page needs to load a different set of files. Based on this, combining files at the build stage does not look feasible. After reviewing the different links, I decided that SquishIt should fit our needs: http://www.codethinked.com/squishit-the-friendly-aspnet-javascript-and-css-squisher

    Some limitations of using SquishIt: we had some browser-specific CSS files that loaded conditionally depending on browser type (i.e. IE versus all other browsers), and we had to put them in separate bundles. For resources and AXD files, we decided to use the HttpModule and HttpHandler created by Mads Kristensen. To GZIP HTML we are using wwWebUtils.GZipEncodePage() http://www.west-wind.com/weblog/posts/2007/Feb/05/More-on-GZip-compression-with-ASPNET-Content - just swap the order of the encodings you apply, to start by asking for deflate support and then GZip afterwards.

    Additional tips about SquishIt:
    Use a CDN: https://groups.google.com/group/squishit/browse_thread/thread/99f3b61444da9ad1
    Support IntelliSense and generate the bundle in code-behind: http://tech.kipusoep.nl/2010/07/23/umbraco-45-visual-studio-2010-dotless-jquery-vsdoc-squishit-masterpages/

    Links about other libraries that were considered:
    A few links from http://stackoverflow.com/questions/5288656/which-one-has-better-minification-between-squishit-and-combres2
    .NET 4.5 will have out-of-the-box tools for JS/CSS combining: http://weblogs.asp.net/scottgu/archive/2011/11/27/new-bundling-and-minification-support-asp-net-4-5-series.aspx - it suggests a default bundle per subfolder, but also seems to support explicitly specified files, similar to SquishIt.
    Combres: http://www.codeproject.com/KB/aspnet/combres2.aspx - a config XML file can specify expiry etc.
    Cassette: https://github.com/andrewdavey/cassette and http://stackoverflow.com/questions/7026029/alternatives-to-cassette
    Dynamically loaded JS files - RequireJS: http://requirejs.org/docs/start.html and http://www.west-wind.com/weblog/posts/2008/Jul/07/Inclusion-of-JavaScript-Files
    Pack and minimize your JavaScript code size: YUI Compressor (from Yahoo), JSMin (by Douglas Crockford), ShrinkSafe (from the Dojo library), Packer (by Dean Edwards), RadScriptManager & RadStyleSheetManager from Telerik (not free)
    Tools to optimize performance: the PageSpeed tools family http://code.google.com/intl/ru/speed/page-speed/download.html
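    For a feel of the API, a minimal SquishIt bundle looks like this (a sketch based on the library's documented Add/Render calls; the file paths are placeholders):

        // Combines and minifies the listed files; '#' in the output name
        // is replaced with a content hash for cache busting.
        var html = SquishIt.Framework.Bundle.JavaScript()
            .Add("~/js/jquery.js")
            .Add("~/js/app.js")
            .Render("~/js/combined_#.js");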

    Read the article

  • TechEd Europe early bird saving &ndash; register by 5th July

    - by Eric Nelson
    Another event advert alert :-) But this one comes with a cautious warning. I spoke at TechEd Europe last year. I found TechEd to be a huge, extremely well run conference filled with great speakers and passionate attendees in a top-notch venue and fascinating city. As an "IT Pro" I think it is the premier conference for Microsoft technologies in Europe. However, IMHO, and in the opinion of others I trust, it didn't hit the mark for developers in 2009. There was a fairly obvious reason - the PDC was scheduled to take place only a couple of weeks later, which meant the "powder was being kept dry" and (IMHO) some of the best speakers on developer technologies were elsewhere. But I'm reasonably certain that this won't be repeated this year (err... have I missed an announcement about "no PDC in 2010"?). Enjoy:

    Register for Tech·Ed Europe by 5 July and Save €500
    Tech·Ed Europe returns to Berlin this November 8-12, for a full week of deep technical education, hands-on learning and opportunities to connect with Microsoft and Community experts one-on-one. Register by 5 July and receive your conference pass for only €1,395 - a €500 saving.

    Arrive Early and Get a Jumpstart on Technical Sessions
    Choose from 8 pre-conference seminars led by Microsoft and industry experts, selected to give you a jumpstart on technical learning. Additional fees apply. Conference attendees receive a €100 discount.

    Join the Tech·Ed Europe Email List for Event Updates
    Get the latest event news before the event, and find out more about what's happening onsite. Join the Tech·Ed Europe email list today!

    Read the article

  • Webcast: June 29th at 11am Eastern - Optimize ePermitting Reviews & Approvals with AutoVue

    - by Warren Baird
    I'm pleased to announce that the Enterprise Visualization special interest group (SIG) is organizing its first webcast on June 29th - Palm Beach County is going to present how they use AutoVue as part of their e-permitting processes. This is a must-see for anyone in the Public Sector, but even for people who aren't, it should be very interesting to see how Palm Beach County has tied AutoVue tightly into their business processes. If you haven't already done so, I'd suggest joining our SIG at http://groups.google.com/group/enterprise_visualization_sig. The registration link for the webcast is https://www3.gotomeeting.com/register/565294190 - more details are below.

    The Enterprise Visualization Special Interest Group (EVSIG) is proud to present the first in a series of webcasts designed to educate the AutoVue user community on innovative and compelling AutoVue solutions. Attend the webcast and discover how AutoVue can make building permit application and approval processes more efficient.

    Presenters:
    Oracle: Warren Baird, Principal Product Manager, AutoVue Enterprise Visualization
    Palm Beach County: Paul Murphy, Systems Integrator; Laura Yonkers, Permit Section Supervisor; Chuck Lemon, Project Business Analyst

    Abstract: In their efforts to deliver better services to citizens, save money and "think green", many cities, states and local governments have implemented online e-permitting processes that allow developers and citizens to apply for and receive building permits via the Web. Attend this webcast and discover how AutoVue visualization solutions enhance ePermitting processes by streamlining the review and approval of digital permit applications. Hear from Palm Beach County about how they leverage AutoVue within their ePermitting system to:
    - provide structure to the land development review and approval process
    - accelerate and improve efficiency throughout the permitting process
    - decrease permit review times
    - increase the level of transparency during the permit application and review process
    - improve accountability in the organization
    - improve citizen services by providing 24-7 ability to submit and track applications

    Sign up for the Enterprise Visualization SIG to learn about future AutoVue webcasts. Register today at http://groups.google.com/group/enterprise_visualization_sig and become a part of our growing online user community. We look forward to seeing you on the 29th of June.

    Read the article

  • Oracle and Microsoft Expand Choice and Flexibility in Deploying Oracle Software in the Cloud

    - by Gene Eun
    Oracle and Microsoft have entered into a new partnership that will help customers embrace cloud computing by providing greater choice and flexibility in how to deploy Oracle software. Here are the key elements of the partnership:

    - Effective today, our customers can run supported Oracle software on Windows Server Hyper-V and in Windows Azure
    - Effective today, Oracle provides license mobility for customers who want to run Oracle software on Windows Azure
    - Microsoft will add Infrastructure Services instances with popular configurations of Oracle software, including Java, Oracle Database and Oracle WebLogic Server, to the Windows Azure image gallery
    - Microsoft will offer fully licensed and supported Java in Windows Azure
    - Oracle will offer Oracle Linux, with a variety of Oracle software, as preconfigured instances on Windows Azure

    Oracle's strategy and commitment is to support multiple platforms, and Microsoft Windows has long been an important supported platform. Oracle is now extending that support to Windows Server Hyper-V and Windows Azure by providing certification and support for Oracle applications, middleware, database, Java and Oracle Linux on Windows Server Hyper-V and Windows Azure. As of today, customers can deploy Oracle software on Microsoft private clouds and Windows Azure, as well as Oracle private and public clouds and other supported cloud environments. For information related to software licensing in Windows Azure, see Licensing Oracle Software in the Cloud Computing Environment. Also, Oracle Support policies as they apply to Oracle software running in Windows Azure or on Windows Server Hyper-V are covered in the two My Oracle Support (MOS) notes shown below:

    - MOS Note 1563794.1 Certified Software on Microsoft Windows Server 2012 Hyper-V - NEW
    - MOS Note 417770.1 Oracle Linux Support Policies for Virtualization and Emulation - UPDATED

    Read the article

  • Who writes the words? A rant with graphs.

    - by Roger Hart
    If you read my rant, you'll know that I'm getting a bit of a bee in my bonnet about user interface text. But rather than just yelling about the way the world should be (short version: no UI text would suck), it seemed prudent to actually gather some data. Rachel Potts has made an excellent first foray, by conducting a series of interviews across organizations about how they write user interface text. You can read Rachel's write-up here. She presents the facts as she found them, and doesn't editorialise. The result is insightful, but impartial isn't really my style. So here's a rant with graphs.

    My method, and how it sucked

    I sent out a short survey. Survey design is one of my hobby-horses, and since some smartarse in the comments will mention it if I don't, I'll step up and confess: I did not design this one well. It was potentially ambiguous, implicitly excluded people, and since I only really advertised it on Twitter and a couple of mailing lists, the sample will be chock-full of biases. Regardless, these were the questions:

    1. What do you do? Select the option that best describes your role.
    2. What kind of software does your organization make? (optional)
    3. In your organization, who writes the text on your software user interfaces? (for example: button names, static text, tooltips, and so on) Tick all that apply.
    4. In your organization, who is responsible for user interface text? Who "owns" it?

    The most glaring issue (apart from question 3 being a bit broken) was that I didn't make it clear that I was asking about applications. Desktop, mobile, or web, I wouldn't have minded. In fact, it might have been interesting to categorize and compare. But a few respondents commented on the seeming lack of relevance, since they didn't really make software. There were some other issues too. It wasn't the best survey. So, you know, pinch of salt time with what follows. Despite this, there were 100 or so respondents. This post covers the overview, and you can look at the raw data in this spreadsheet.

    What did people do?

    Boring graph number one: I wasn't expecting that. Given I pimped the survey on Twitter and a couple of Tech Comms discussion lists, I was banking more on an even Content Strategy/Tech Comms split. What the "Others" specified: three people chipped in with Technical Writer. Author, apparently, doesn't cut it. There's a "nobody reads the instructions" joke in there somewhere, I'm sure. There were a couple of hybrid roles, including Tech Comms and Testing, which sounds gruelling and thankless. There was also an Intranet Manager, a Creative Director, a Consultant, a CTO, an Information Architect, and a Translator. That's a pretty healthy slice through the industry.

    Who wrote UI text?

    Boring graph number two: Annoyingly, I made this a "tick all that apply" question, so I can't make crude and inflammatory generalizations about percentages. This is more about who gets involved in user interface wording. So don't panic about the number of developers writing UI text. First off, it just means they're involved. Second, they might be good at it. What? It could happen. Ours are involved - they write a placeholder and flag it to me for changes. Sometimes I don't make any. It's also not surprising that there's so much UX in the mix. Some of that will be people taking care, and crafting an understandable interface. Some of it will be whatever text goes on the wireframe making it into production.
    I'm going to assume that's what happened at eBay, when their iPhone app purportedly shipped with the placeholder text "Some crappy content goes here". Ahem. Listing all 17 "other" responses would make this post lengthy indeed, but you can read them in the raw data spreadsheet. The award for the approach that sounds the most like a good idea yet carries the highest risk of ending badly goes to whoever offered up "External agencies using focus groups". If you're reading this, and that actually works, leave a comment. I'm fascinated.

    Who owned UI text

    Stop. Bar chart time: Wow. Let's cut to the chase, and by "chase", I mean those inflammatory generalizations I was talking about: in around 60% of cases, the person responsible for user interface text probably lacks the relevant expertise. Even in the categories I count as being likely to have relevant skills (Marketing Copywriters, Content Strategists, Technical Authors, and User Experience Designers), there's a case for each role being unsuited, as you'll see in Rachel's blog post. So it's not as simple as my headline. Does that mean that you personally, Mr Developer reading this, write bad button names? Of course not. I know nothing about you. It rather implies that, as a category, the majority of people looking after UI text have neither communication nor user experience as their primary skill set, and as such will probably only be good at this by happy accident. I don't have a way of measuring the frequency of those accidents. What the Others specified:

    - I don't know who owns it. I assume the project manager is responsible.
    - "copywriters" when they wish to annoy me
    - the client's web maintenance person, often PR or MarComm

    That last one chills me to the bone. Still, at least nobody said "the work experience kid". You can see the rest in the spreadsheet. My overwhelming impression here is of user interface text as an unloved afterthought. There were fewer "nobody" responses than I expected, and a much broader split. But the relative predominance of developers owning and writing UI text suggests to me that organizations don't see it as something worth dedicating attention to. If true, that's bothersome. Because the words on the screen, particularly the names of things, are fundamental to the ability to understand and use software. It's also fascinating that Technical Authors and Content Strategists are neck and neck. For such a nascent discipline, Content Strategy appears to have made a mark on software development. Or my sample is skewed. But it feels like a bit of validation for my rant: Content Strategy is eating Tech Comms' lunch. That's not a bad thing. Well, not if the UI text is getting done well. And that's the caveat to this whole post. I couldn't care less who writes UI text, provided they consider the user and don't suck at it. I care that it may be falling by default to people poorly disposed to doing it right. And I care about that because so much user interface text sucks.

    The most interesting question

    Was one I forgot to ask. It's this: Does your organization have technical authors/writers? Like a lot of survey data, that doesn't tell you much on its own. But once we get a bit dimensional, it becomes more interesting. So taken with the other questions, this would have let me find out what I really want to know:

    - What proportion of organizations have Tech Comms professionals but don't use them for UI text?
    - Who writes UI text in their place?
    - Why does this happen?
It's possible (feasible is another matter) that hundreds of companies have tech authors who don't work on user interfaces because they've empirically discovered that someone else, say the Marketing Copywriter, is better at it. And once we've all finished laughing, I'll point out that I've met plenty of tech authors who just aren't used to thinking about users at the point of need in the way UI text and embedded user assistance require. If you've got what I regard, perhaps unfairly, as the bad kind of tech author - the old-school kind with the thousand-page pdf and the grammar obsession - if you've got one of those then you probably are better off getting the UX folk or the copywriters to do your UI text. At the very least, they'll derive terminology from user research.

    Read the article
