Search Results

Search found 1886 results on 76 pages for 'dragon naturally speaking'.

Page 28/76

  • Should concrete classes avoid calling other concrete classes, except for data objects?

    - by Kazark
    In Appendix A to The Art of Unit Testing, Roy Osherove, speaking about ways to write testable code from the start, says, "An abstract class shouldn't call concrete classes, and concrete classes shouldn't call concrete classes either, unless they're data objects (objects holding data, with no behavior)" (259). The first half of the sentence is simply Dependency Inversion from SOLID. The second half seems rather extreme to me. It means that every time I'm going to write a class that isn't a simple data structure, which is most classes, I should write an interface or abstract class first, right? Is it really worthwhile to go that far in defining abstract classes and interfaces? Can anyone explain why in more detail, or refute it in spite of its benefit for testability?
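    To make sure I'm reading the rule correctly, here is a minimal C# sketch of what I understand it to require (the ReportGenerator, IDataSource and ReportRow names are mine, purely for illustration): the concrete class depends only on an abstraction and on a data object, never on another concrete behavioral class.

        using System.Collections.Generic;

        public class ReportRow               // a data object: holds data, no behavior
        {
            public string Name { get; set; }
            public decimal Total { get; set; }
        }

        public interface IDataSource         // the abstraction both sides depend on
        {
            IEnumerable<ReportRow> LoadRows();
        }

        public class ReportGenerator         // concrete, but it never calls another
        {                                    // concrete (non-data) class directly
            private readonly IDataSource _source;

            public ReportGenerator(IDataSource source)
            {
                _source = source;            // a test can pass in a fake IDataSource
            }

            public decimal GrandTotal()
            {
                decimal sum = 0;
                foreach (ReportRow row in _source.LoadRows())   // touching the data object is allowed
                    sum += row.Total;
                return sum;
            }
        }

    If that reading is right, then even this trivial generator needed an interface written for it, which is exactly the overhead I'm asking about.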

    Read the article

  • Google Top Geek E02

    Google Top Geek E02, in Spanish! Google Top Geek is a weekly show that will cover the latest news on all things Google in Spanish-speaking Latin America, trending searches, YouTube videos and apps in the region, as well as news and relevant events for developers. Mondays at noon (12 PM) on Google Developers Live and the blog Programa con Google. Credits for E02: thanks to Elefgant for their support with the recording and editing. From: GoogleDevelopers Views: 4106 4 ratings Time: 15:43 More in Science & Technology

    Read the article

  • Which specific programming activities do women, on average, perform better than men? [closed]

    - by blueberryfields
    Following a recent discussion with female associates in hiring positions for software development/engineering roles, I realized that this kind of information would be incredibly useful in helping ensure that the workforce shows a gender balance. So I went looking. I've found various literature speaking about risk-taking behaviour and patterns, and other statistical differences between men and women when it comes to work performance. See for example this article related to hedge fund management. I have yet to see any such comparison in the computing field. To restate the question: which specific programming activities do women, on average, perform better than men? Please back up your answers with specific details, preferably by linking to relevant research or, failing that, explaining what you're basing the information on.

    Read the article

  • This November, Join Me in Stockholm and Amsterdam

    - by Adam Machanic
    Late last year, I was invited by Raoul Illyés, a SQL Server MVP from Denmark, to present a precon at the 2013 edition of SQLRally Nordic. I agreed and decided to skip the US PASS Summit this year and instead visit an area of Europe I've never seen before. A bonus came a while later when I learned that there is another SQLRally in Europe that same week: SQLRally Amsterdam. Things worked out in just the right way and today I'm happy to announce that I'll be speaking at both events, back-to-back. Should...(read more)

    Read the article

  • Tampa Code Camp - October 13, 2012

    - by Nikita Polyakov
    I am pleased to announce that Tampa Code Camp 2012 is being co-hosted with Bar Camp Tampa Bay this year in Tampa, FL on October 13th, 2012 at the USF main campus's beautiful business buildings. “CodeCamp is a FREE one-day meeting forum that allows software developers to share their knowledge and experience with Microsoft products and services. It’s similar to Tech-Ed, but community-driven by a group of dedicated volunteers and speakers while financially supported through generous sponsors and local businesses.” As one of the organizers, I will not be speaking, but instead helping MC the Component Vendor ShowDown - a special track dedicated to battling out the best components, organized by focus application rather than firm. Check out the ShowDown track in the Agenda.
    WHEN: Saturday, October 13, 2012, 7:30 AM – 5:45 PM
    WHERE: USF - Tampa Campus, 4202 East Fowler Avenue, Tampa, FL 33620
    REGISTRATION: http://www.tampacodecamp.com
    COST: FREE

    Read the article

  • What is MVC, really?

    - by NickC
    As a serious programmer, how do you answer the question What is MVC? In my mind, MVC is sort of a nebulous topic — and because of that, if your audience is a learner, then you're free to describe it in general terms that are unlikely to be controversial. However, if you are speaking to a knowledgeable audience, especially an interviewer, I have a hard time thinking of a direction to take that doesn't risk a reaction of "well that's not right!...". We all have different real-world experience, and I haven't truly met the same MVC implementation pattern twice. Specifically, there seem to be disagreements regarding strictness, component definition, separation of parts (what piece fits where), etc. So, how should I explain MVC in a way that is correct, concise, and uncontroversial?
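    For the sake of discussion, here is the barest sketch of the way I currently describe it when pressed, in C# (the TemperatureModel/TemperatureView/TemperatureController names are mine, and I fully expect someone to object to the details):

        // Model: holds state and business rules, knows nothing about presentation.
        public class TemperatureModel
        {
            public double Celsius { get; private set; }
            public void Set(double celsius) { Celsius = celsius; }
        }

        // View: renders the model, contains no business logic.
        public class TemperatureView
        {
            public void Render(TemperatureModel model)
            {
                System.Console.WriteLine("Temperature: {0:F1} C", model.Celsius);
            }
        }

        // Controller: takes user input, updates the model, asks the view to redraw.
        public class TemperatureController
        {
            private readonly TemperatureModel _model;
            private readonly TemperatureView _view;

            public TemperatureController(TemperatureModel model, TemperatureView view)
            {
                _model = model;
                _view = view;
            }

            public void UserEnteredTemperature(string input)
            {
                double celsius;
                if (double.TryParse(input, out celsius))   // input handling lives here
                    _model.Set(celsius);
                _view.Render(_model);
            }
        }

    Even in a toy like this, the controversial bits show up immediately: whether the view observes the model or the controller pushes it, who owns validation, and so on, which is exactly why I'm asking how to phrase the general idea without getting dragged into those details.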

    Read the article

  • Identical spam coming from many different (but similar) IP addresses

    - by DisgruntledGoat
    A forum I run has been the victim of spam user accounts recently - several accounts have been registered and their profiles filled with advertising/links. All of it is for the same company, or group of companies. I deleted several accounts weeks ago and blocked some IP addresses, but today they have come back with the same spam. Every account has a different IP address, but they are all of the form 122.179.*.* or 122.169.*.*. I am considering blocking those two IP ranges, but each of them is a /16, i.e. 65,536 potential addresses. They appear to be assigned to India (although the spam is for an American company), so given that the site is for a western, English-speaking audience maybe it doesn't matter. My questions: How are they posting from so many IPs? Is there likely to be a limit to the number of IPs they have access to? Is there anything else I can do at the IP level to block them? (I am looking into other measures like blocking usernames/links; a sketch of the application-level check I have in mind follows below.)
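    At the application level, the crude check I am considering adding to the registration handler looks like this (C#; SpamRangeFilter is just a name I made up, and the commented-out usage line assumes an ASP.NET-style Request object):

        using System.Net;

        public static class SpamRangeFilter
        {
            // The two ranges in question: 122.179.0.0/16 and 122.169.0.0/16.
            // A /16 matches on the first two octets, so a simple octet check is enough here.
            public static bool IsInBlockedRange(string remoteAddress)
            {
                IPAddress ip;
                if (!IPAddress.TryParse(remoteAddress, out ip))
                    return false;                 // let malformed values fall through to other checks

                byte[] octets = ip.GetAddressBytes();
                if (octets.Length != 4)
                    return false;                 // IPv6 and anything else is out of scope here

                return octets[0] == 122 && (octets[1] == 179 || octets[1] == 169);
            }
        }

        // Hypothetical usage in the registration handler:
        // if (SpamRangeFilter.IsInBlockedRange(Request.UserHostAddress)) { /* reject the sign-up */ }

    Whether blocking at this level is wise, given how easily they seem to rotate addresses, is part of what I'm asking.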

    Read the article

  • 2013 Microsoft ASP.NET/IIS MVP

    - by Vincent Maverick Durano
    Originally posted on: http://geekswithblogs.net/dotNETvinz/archive/2013/07/01/2013-microsoft-asp.netiis-mvp.aspx
    I am very honored to have received this award again. This is my fifth year in a row now and it feels really great! ;) The past year was a real blast: I had a great time at the MVP Global Summit, and was able to create and publish new versions of my open-source controls on CodePlex, alongside technical forum contributions, blogging, writing articles and speaking. I'm glad and very happy that I made it again this year; despite all the busy stuff at work and in life, I still managed to contribute to the ASP.NET community. BIG thanks to God, Microsoft, my MVP lead Lilian Quek, Clarisse Ng our SEA MVP Program Specialist, my family, my great Boss, readers and friends who have supported me. Technorati Tags: MVP,ASP.NET,Community

    Read the article

  • GlassFish 4.0 Virtualization Progress - VirtualBox

    - by alexismp
    Wouldn't it be nice if you could spawn GlassFish instances as VirtualBox virtual machines? Well, with early versions of GlassFish 4.0, now you can! This page on the GlassFish Wiki documents the steps to get this to work. It walks you through the various VirtualBox (network and services) and GlassFish configuration steps, including the creation of VDI templates (typically JeOS images), to finally create a virtual machine on the fly as part of the typical GlassFish deployment process. The more general virtualization support in GlassFish is discussed in this other Wiki page. Earlier demonstrations of GlassFish.next prototypes and early milestone builds showed support for KVM, "laptop mode" and OVM, as well as community involvement from Serli; speaking of which, this slide deck is a good summary of what we're trying to achieve with the GlassFish 4.0 IMS (IaaS Management Service).

    Read the article

  • Considerations when designing a file type

    - by AndyBursh
    I'm about to start writing a process for saving a data structure from code into a file of some proprietary, as-yet-undefined type. However, I've never designed a file type or structure before. Are there any things, generally speaking, that I should consider before starting my design? Are there any accepted good practices here? Bad practices I should avoid? Any absolute do's and don'ts? Can anybody recommend any good reading on this topic? (A sketch of the only conventions I've picked up so far follows below, to show the level I'm starting from.)
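    The only conventions I've picked up from poking at existing formats are a magic number, a version field and length-prefixed records, roughly like this (C#, purely illustrative; "MYFT" is a made-up magic value):

        using System.IO;
        using System.Text;

        public static class MyFormatWriter
        {
            static readonly byte[] Magic = Encoding.ASCII.GetBytes("MYFT");  // made-up 4-byte magic number
            const ushort Version = 1;                                        // bump when the layout changes

            public static void Save(string path, byte[][] records)
            {
                using (var writer = new BinaryWriter(File.Create(path)))
                {
                    writer.Write(Magic);             // lets readers reject files that aren't ours
                    writer.Write(Version);           // lets old readers detect newer files
                    writer.Write(records.Length);    // record count
                    foreach (byte[] record in records)
                    {
                        writer.Write(record.Length); // length prefix lets readers skip unknown records
                        writer.Write(record);
                    }
                }
            }
        }

    Is that the right sort of thing to be worrying about, or are there bigger design questions I should settle first?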

    Read the article

  • Reputable web host in mainland China? [closed]

    - by darren
    Possible Duplicate: How to find web hosting that meets my requirements? We currently have a rather poorly set up Windows 2003 box based in Shanghai, with little to no support and no control panel or mail server. I am told that for legal/business reasons the host must be based in the same location as the company behind the website, but this could well be misinformation. Are there any well-known, quality hosts in China that offer reliable English-speaking support? We did consider GoDaddy on the west coast of America, but were informed of the risk of the site being shut down without any notice. We don't have any technically-minded contacts out there to advise us, and are hoping that someone here has more experience in this department. Thank you.

    Read the article

  • Is there a way to publish IOS app from windows/Linux?

    - by user65760
    So I have been using Linux (specifically Ubuntu) and Windows (Windows 7) for a long time, but I don't have a Mac, nor do I have an iPhone, and I don't really want to buy either. So the problem is: how do I publish my app from Windows or Linux? To be clear, I am not speaking about programs for jailbroken iPhones, and I don't have anyone near me who will lend me a Mac to publish my app. I started learning Objective-C some time ago. However, whenever I search the internet I find the same answer: that there is no foolproof way of publishing an app from Windows or Linux. I also intend to make it a paid app, meaning I don't want to make it free. It would be very helpful if someone could suggest a way to overcome this problem.

    Read the article

  • Java EE@NYC Java Meetup

    - by reza_rahman
    On November 19th, I spoke at the New York City Java Meetup Group. It's a well-organized group led by my good friends Dario Laverde and Timothy Fagan - I have spoken there numerous times. I did my Java EE 7 talk, "JavaEE.Next(): Java EE 7, 8, and Beyond" (the same one from Java2Days 2012). The talk went very well -- the official RSVP shows 163 attended. I gave away a few GlassFish T-shirts, laptop stickers and Arun Gupta's Java EE 6 pocket guide. More details on the talk here. I most certainly look forward to speaking there again.

    Read the article

  • What's the most productive coding environment

    - by Ubiguchi
    I was speaking with an ex-colleague the other day about the most productive way to write code, and he said he found it best "to CIMP, or Code In My Pants". When I asked him exactly what he meant, he explained he found it best to work at home, coding at his own pace, dressed comfortably (in his pants), and communicating with his team through emails, IM, or the telephone. Digesting his approach (which he describes to clients as the Complete Integrated Method of Programming), I realised my coding is also more productive when I work in an isolated environment, which made me wonder whether the software industry has got it all wrong: should development really be done by dispersed teams of individuals, or are there advantages to geographical herding that make up for the added interruptions it brings? So has business got it wrong? Should development occur predominantly across geographically isolated individuals to increase productivity, or are there real reasons why herding developers together makes sense?

    Read the article

  • November 2012 Chicago IT Architects Group Meeting Announcement

    - by Tim Murphy
    The year is quickly coming to an end.  This is the most exciting part of the year with technology manufacturers in overdrive trying to release as many products for Christmas as possible.  Our group is trying to do our part to bring order to the madness with one last presentation for the year.  Norman Murrin will be speaking on November 20th on Adopting Agile Processes in the Enterprise.  Be sure to join us by registering at the link below. Register del.icio.us Tags: Chicago Information Technology Architects Group,CITAG,Agile,Architecture

    Read the article

  • Windows Telephone Scam Continues to Circulate

    Microsoft addressed the scam via a blog post during the middle of last year. Cyberthieves call homes in English-speaking countries after finding their phone numbers in telephone directories. The callers usually identify themselves as engineers from Windows Support or other legitimate-sounding organizations. They claim that your computer has been sending error messages and may have been compromised. To fix the problem, they offer a free security check. Despite being detected last year, this particular scam is still making the rounds. A recent article by news channel ABC 15 out of Arizona r...

    Read the article

  • I will be at NNUG Kristiansand tonight

    - by Sahil Malik
    SharePoint 2010 Training: more information Greetings! I will be speaking at NNUG Kristiansand tonight (sorry for the very short notice). So I was thinking: what should I demo tonight? Hmm!!! Instead of using any slides or such material, what I thought would be a tonne of fun would be to develop a service-bus-based application running in Azure. This will be a good opportunity to code and talk, and to run into some snafus and show off some VS2012 improvements/annoyances. At the end of an approx. 1-hour talk, I hope to have an application ready that you can run yourself by the end of the evening. hehe :) .. now that's cool, isn't it? :) .. Time permitting, I will even tie SP2013 + ADFSv2 + Claims into it, just to be cool. Hope to see you there! Here is the registration link - http://www.nnug.no/Avdelinger/Kristiansand/Moter2/NNUG-Kristiansand---Oktober/ Read full article ....

    Read the article

  • Pattern for loading and handling resources

    - by Enoon
    Many times there is the need to load external resources into the program, be they graphics, audio samples or text strings. Is there a pattern for handling the loading and handling of such resources? For example: should I have a class that loads all the data and then call it every time I need the data? As in:

        GraphicsHandler.instance().loadAllData()
        ...
        // and then later:
        draw(x, y, GraphicsHandler.instance().getData(WATER_IMAGE))
        // or maybe
        draw(x, y, GraphicsHandler.instance().WATER_IMAGE)

    Or should I assign each resource to the class where it belongs? As in (for example, in a game):

        Graphics g = GraphicsLoader.load(CHAR01);
        Character c = new Character(..., g);
        ...
        c.draw();

    Generally speaking, which of these two is the more robust solution?

        GraphicsHandler.instance().getData(WATER_IMAGE)
        // or
        GraphicsHandler.instance().WATER_IMAGE  // a constant reference
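    For what it's worth, here is the second option spelled out a bit more, the way I imagine it (a C# sketch with invented type names: a loader owns the loading, and each object is handed the resources it needs rather than reaching out to a global singleton):

        public interface IResourceLoader
        {
            Texture Load(string resourceId);
        }

        public class Texture
        {
            public string Id { get; private set; }
            public Texture(string id) { Id = id; }
        }

        public class Character
        {
            private readonly Texture _sprite;

            public Character(Texture sprite)
            {
                _sprite = sprite;      // the resource is owned here, not looked up globally
            }

            public void Draw(int x, int y)
            {
                // Stand-in for a real renderer call.
                System.Console.WriteLine("drawing {0} at {1},{2}", _sprite.Id, x, y);
            }
        }

        public class FileResourceLoader : IResourceLoader
        {
            public Texture Load(string resourceId)
            {
                // Real code would read image data from disk; this stub just tags the id.
                return new Texture(resourceId);
            }
        }

        // Usage:
        // IResourceLoader loader = new FileResourceLoader();
        // Character c = new Character(loader.Load("CHAR01"));
        // c.Draw(10, 20);

    Is that injected style generally considered more robust than the singleton handler, or does the answer depend on the kind of resource?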

    Read the article

  • Business Analyst vs. Architect [closed]

    - by suslik
    I'm a developer of a few years in the financial industry and will soon need to decide which career path to try and row towards. Broadly speaking I have two options: something more 'people' oriented like the BAs, or keep coding and try to make more technical decisions like the Architects do where I currently work. Here are my perceptions right now. Business Analysts: get paid way more than devs; once they do their job, it seems like they usually have no worries; more likely to go REALLY high up in the organization (VPs, etc). Architects: things like certification matter (I see this as a con); called in when things go wrong more than anyone else (weekends & overtime); long career path to get there (dev - senior dev - team lead - architect). I would find the latter more intellectually rewarding, but when I look at it I just can't justify it in terms of lifestyle. Am I wrong / what am I missing? Can you really make a lot of money in a technical role, or must you really get out of coding? Thank you for any constructive input.

    Read the article

  • Microsoft Secret Event: New Tablet Unveiling?

    If you read the headline, you know what everyone thinks it will be: a new tablet computer that Microsoft will manufacture from beginning to end. Apparently, the company believes it will be better able to compete against Apple if it controls both the hardware and the software. But why choose this location for the announcement? Wired thinks it makes sense if the tablet features Xbox live streaming. That would turn the humble device into something of a media machine. Speaking of the device itself, what kind of specs will this hypothetical tablet have? It's hard to say. Microsoft boasts software...

    Read the article

  • Twin Cities Connected Systems User Group Meeting - March 11th, 2010

    If you are in Minneapolis on Thursday, March 11th, please join us for the Twin Cities Connected Systems User Group Meeting. The meeting takes place at 6:00 p.m. at the Microsoft offices at 8300 Norman Center Drive, Bloomington, MN 55437. I will be speaking on How to Create Windows Server AppFabric Applications. Here is a write-up of what will be covered: You have heard about Dublin, now called Windows Server AppFabric, but do you know what it is and what it includes? Do you know...

    Read the article

  • Rendering trillions of "atoms" instead of polygons?

    - by Baring
    I just saw a video about what the publishers call the "next major step after the invention of 3D". According to the person speaking in it, they use a huge number of "atoms" grouped into clouds instead of polygons, to reach a level of unlimited detail. They tried their best to make the video understandable for people with no knowledge of any rendering techniques, and for that or other reasons left out all details of how their engine works. The level of detail in their video does look quite impressive to me. How is it possible to render scenes using custom atoms instead of polygons on current hardware (speed- and memory-wise)? If this is real, why has nobody else even thought about it so far? As an OpenGL developer, I'm really baffled by this and would really like to hear what experts have to say. I also don't want this to look like a cheap advert, so I will include the link to the video only if requested, in the comments section.
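    For reference, the only mental model I have for this so far is naive point splatting - project every point and keep the nearest one per pixel - something like the toy C# sketch below (my own code, certainly not how their engine works). What I can't see is how that scales to "unlimited" point counts without a serious acceleration structure and streaming scheme, which is exactly the part the video leaves out:

        // Toy point splatter: camera at the origin looking down +Z, simple pinhole projection.
        struct Point3D
        {
            public float X, Y, Z;
            public uint Color;
        }

        class PointSplatter
        {
            readonly int width, height;
            readonly float[] depth;   // nearest depth seen so far, per pixel
            readonly uint[] frame;    // color of that nearest point, per pixel

            public PointSplatter(int width, int height)
            {
                this.width = width;
                this.height = height;
                depth = new float[width * height];
                frame = new uint[width * height];
                for (int i = 0; i < depth.Length; i++)
                    depth[i] = float.MaxValue;
            }

            public void Splat(Point3D p, float focalLength)
            {
                if (p.Z <= 0)
                    return;                                   // behind the camera
                int sx = (int)(focalLength * p.X / p.Z) + width / 2;
                int sy = (int)(focalLength * p.Y / p.Z) + height / 2;
                if (sx < 0 || sx >= width || sy < 0 || sy >= height)
                    return;                                   // outside the viewport
                int idx = sy * width + sx;
                if (p.Z < depth[idx])                         // keep only the nearest point per pixel
                {
                    depth[idx] = p.Z;
                    frame[idx] = p.Color;
                }
            }

            public uint PixelAt(int x, int y)
            {
                return frame[y * width + x];
            }
        }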

    Read the article

  • Do you sign each of your source files with your name? [duplicate]

    - by regularfry
    Possible Duplicate: How do you keep track of the authors of code? One of my colleagues is in the habit of putting his name and email address in the head of each source file he works on, as author metadata. I am not; I prefer to rely on source control to tell me who I should be speaking to about a given set of functionality. Should I also be signing files I work on for any other reasons? Do you? If so, why? To be clear, this is in addition to whatever metadata for copyright and licensing information is included, and applies to both open sourced and proprietary code.

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion.  That being said, the performance characteristics dramatically differ from native programming, and take some relearning if you’re used to doing performance optimization in most other languages, especially C, C++, and similar.  However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be… The rules in C# when optimizing code are very different from C or C++.  Often, they’re exactly backwards.  For example, in C and C++, lifting a variable out of loops in order to avoid memory allocations often can have huge advantages.  If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler.  Looking at the memory allocation graph is usually the key for spotting this routine, as it’s often “hidden” deep in the call graph.  For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i]);
        }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data.  As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

        // Allocate some data required
        DataStructure* data = new DataStructure(num);

        // Call into a subroutine that passed around and heavily manipulated this data
        CallSubroutine(data);

        // Read and use some values from here
        double values = data->Foo;

        // Cleanup
        delete data;
        // ...
        return bar;

    Normally, if “DataStructure” was a simple data type, I could just allocate it on the stack.  However, its constructor, internally, allocated its own memory using new, so this wouldn’t eliminate the problem.  In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same “data” memory instead of allocating.  At the highest level, my code effectively changed to something like:

        DataStructure* data = new DataStructure(numberToProcess);
        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i], data);
        }
        delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn’t something I wanted to do unless there was a significant benefit.
    In this case, after profiling the new version, I found that it increased the overall performance dramatically – my main test case went from 35 minutes runtime down to 21 minutes.  This was such a significant improvement, I felt it was worth the reduction in maintainability. In C and C++, it’s generally a good idea (for performance) to:

    - Reduce the number of memory allocations as much as possible,
    - Use fewer, larger memory allocations instead of many smaller ones, and
    - Allocate as high up the call stack as possible, and reuse memory.

    I’ve seen many people try to make similar optimizations in C# code.  For good or bad, this is typically not a good idea.  The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea.  In this scenario, for example, I may have been much better off leaving the original code alone.  The reason for this is the garbage collector.  The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages.  First and foremost, it tends to make the code more maintainable – passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code.  This is something that should be avoided unless there is a significant reason.  Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast.  Finally, and most critically, there is a large advantage to having short-lived objects.  If you lift a variable out of the loop and reuse the memory, it’s much more likely that object will get promoted to Gen1 (or worse, Gen2).  This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later. As such, I’ve found that it’s often (though not always) faster to leave memory allocations where you’d naturally place them – deep inside of the call graph, inside of the loops.  This causes the objects to stay very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole. In C#, I tend to:

    - Keep variable declarations in the tightest scope possible
    - Declare and allocate objects at usage

    While this serves some of the same goals (reducing unnecessary allocations, etc.), the aim here is a bit different – it’s about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0, or worst case, Gen1.  It also has the huge advantage of keeping the code very maintainable – objects are used and “released” as soon as possible, which keeps the code very clean.  It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time. Now – nowhere here am I suggesting that these are hard and fast rules that are always true.  That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary.  In my current project, however, I ran across one of those nasty little pitfalls that’s something to keep in mind – interop changes the rules. In this case, I was dealing with an API that, internally, used some COM objects, and those COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph.
    Even though I was writing nice, clean managed code, the normal managed-code rules for performance no longer applied.  After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration.  This required going back to a “native programming” mindset for optimization.  Lifting these variables and reusing them took a 1:10 routine down to 0:20 – again, a very worthwhile improvement. Overall, the lessons here are:

    - Always profile if you suspect a performance problem – don’t assume any rule is correct, or that any code is efficient just because it looks like it should be
    - Remember to check memory allocations when profiling, not just CPU cycles
    - Interop scenarios often cause managed code to act very differently from “normal” managed code
    - Native code can be hidden very cleverly inside of managed wrappers
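    For illustration only, here is the shape of that last fix in a self-contained C# sketch. NativeWrapper is a made-up stand-in for the COM-backed API described above (which isn't shown in the article); the point is the structure of the change, not the actual API:

        using System;

        // Stand-in for a wrapper whose Process call allocates natively per invocation.
        class NativeWrapper
        {
            public void Process(double element, double[] scratch)
            {
                // Pretend this calls into COM/native code that needs a scratch buffer.
                scratch[0] = element * 2.0;
            }
        }

        class InteropPitfallDemo
        {
            const int BlockSize = 1024;

            // The "managed mindset" version: a fresh buffer per iteration. Usually fine in C#,
            // but if the callee also allocates natively per call, this shows up as a hot spot.
            static void ProcessAllocatingPerCall(NativeWrapper wrapper, double[] elements)
            {
                for (int i = 0; i < elements.Length; i++)
                {
                    double[] scratch = new double[BlockSize];
                    wrapper.Process(elements[i], scratch);
                }
            }

            // The "native mindset" fix: hoist the buffer and reuse it across iterations.
            static void ProcessWithReusedBuffer(NativeWrapper wrapper, double[] elements)
            {
                double[] scratch = new double[BlockSize];
                for (int i = 0; i < elements.Length; i++)
                {
                    wrapper.Process(elements[i], scratch);
                }
            }

            static void Main()
            {
                var wrapper = new NativeWrapper();
                var elements = new double[100000];
                ProcessAllocatingPerCall(wrapper, elements);
                ProcessWithReusedBuffer(wrapper, elements);
                Console.WriteLine("done");
            }
        }

    As with everything above, the only way to know which version wins in a given codebase is to profile it.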

    Read the article

  • Alcatel-Lucent: Enterprise 2.0: The Top 5 Things I would Do Over

    - by Kellsey Ruppel
    Happy Monday! Does anyone else feel as if the weekend went entirely too quickly? At least for those of us in the United States, we have the 4th of July holiday next week to look forward to. This week on the blog, we are going to focus on "WebCenter by Example" and highlight best practices from customers and partners. I recently came across this article and I think it is a great example of how we can learn from one another when it comes to social collaboration adoption. Do you agree with Jem? What things or best practices have you learned in your organizations?

    By Jem Janik, Enterprise community manager, Alcatel-Lucent

    Not so long ago, Engage, the Alcatel-Lucent employee social network and collaboration platform, celebrated its third birthday. With more than 25,000 members actively interacting each month, Engage has been a big enough success that it's been the subject of external articles, and often those of us who helped launch it will go out and speak about what aspects contributed to that success. Hindsight is still 20/20 and what it takes to successfully launch an enterprise 2.0 community is fairly well known now. Today I want to tell you what I suspect you really want to know about. As the enterprise community manager for Engage, three years in, what are the top 5 things I wish we (and I mostly mean me) could do over?

    #5 Define your analytics solution from the start. There is so much to do when you launch a community, and initially growing it without complete chaos is quite a task. It doesn't take too long to get to a point where you want to focus your continued efforts on growing company collaboration. Do people truly talk across regional boundaries, or have we just shifted siloed conversations to a new platform? Is there one organization that doesn't interact with another? If you are lucky you'll have someone on your community team well versed in the world of databases and SQL queries, but it takes time to figure out what backend analytics data actually means. Professional support can be expensive and it may be hard to justify later, as it typically has the community manager as the only main customer. Figure out what you think you'll want to know and how to get it early on. The sooner the better, even if it doesn't seem that critical at the time.

    #4 Lobbies guide you to the right places. One piece of feedback that comes up more and more as we keep growing Engage is that it's hard to find stuff, or that new people are not sure where to start. Something we're doing now is defining some general topic areas of interest to act like "lobbies" into the platform, and some common hashtags to go with them. I liken this to walking into a large medical or professional building for the first time. There are hundreds of offices, and you look to a sign in the lobby to get guided to the right place for you. We're building that sign for members now, but again we missed the boat, as the majority of the company has already had their initial Engage experience.

    #3 Clean up, clean up, clean up. Knowledge work and folksonomies are messy! The day we opened the doors to Engage I would have said we should keep everything ever created in Engage, with the argument that it was a window into our collective knowledge so nothing should go. Well, 6000+ groups and 200,000+ pieces of content later, I've changed my mind. As previously mentioned, with too much "stuff" the system can be overwhelming to new members, and it makes it harder to get what you're looking for. Do we need that help document about a tool we no longer have? NO! Do we need that group that had 1 document and 2 discussions in the last two years? NO! Should we have only one group about a given topic instead of 4? YES! Last fall, Engage defined a cleanup process for groups not used for a long time. We also formed a volunteer cleaning army who are extra eyes on the hunt for "stuff" that should be updated, merged, or deleted. It's better late than never, but in line with what's becoming a theme, I wish these efforts had started earlier.

    #2 Communications & local community management. One of the most important aspects of my job is to make sure people who should be talking to each other are actually doing it: connecting people to the other people they should know, the groups they should join, a piece of content that shouldn't be missed. I have worked both inside and outside of communications teams, and they are the best-informed people in your company. They know when something big is coming, how it impacts employees, how it fits with strategy, who else knows more, etc. Having communications professionals who are power users can help scale up community management because they are already so well connected. They also need the platform skills to pay attention without suffering email overload, to grab someone's attention, etc. I wish I'd figured this out much earlier. If I had, I would have groomed more communications colleagues into advocates and power members right at the start.

    #1 Grooming advocates vs. natural advocates. I've already alluded to this above. The very best advocates are those who naturally embrace your platform and automatically start to see new ways to work within it. Those advocates seem to come out of the woodwork naturally, since some of them are early adopters. Not surprisingly, our best advocates today are those same people who were willing to come kick the tires when the community was completely empty. Unfortunately, we didn't get a global spread of those natural advocates. I did ask around when we first launched for other people who might be good candidates, but didn't push too hard as there were so many other things to get ready. That was a mistake. If I could get a redo, I would have formally asked for people to be assigned where there were gaps and groomed them into advocates. Today, as we find new advocates to fill the gaps, people are hesitant because the initial set, with three years of practice, are ahead-of-the-curve power members; it definitely would have been easier earlier on.

    As fairly early adopters of corporate-scale enterprise collaboration, we haven't had a roadmap to follow as we've grown Engage, which is part of the fun! It's clear a lot of issues are more easily tackled the earlier you identify and begin to correct them, and I've identified the main five I wish I could redo. In the spirit of collaboration, I hope someone else learns from my mistakes! View the original article by Jem here.

    Read the article
