Search Results

Search found 12338 results on 494 pages for 'compilation tool'.


  • Advice for how to handle company pride

    - by user17971
    We have this "amazing" little product using the latest development methodologies and components with all the bells and whistles. I took over this product maybe 6 months ago and have struggled with it from day one. It is supposedly state of the art because of all its amazing structure: dependency injection, inversion of control via the Unity framework, Hibernate-style persistence, and a domain-driven .NET MVVM XAML application to make it streamlined and modular. I knew from the moment I saw the monolith that it was going to be an uphill struggle for me: a lot of little code-bits scattered all around in neatly organized paradigms. Debugging is difficult, tracing the code is difficult, writing new code is difficult. Some modifications are surprisingly easy, but that doesn't outweigh the problems I have with the code by a long shot.

    When I took over the project I was told that the new management console was ready for delivery and all I had to do was compile it and drop it. This was the beginning of an uphill struggle: our customer didn't agree at all that this was the functionality they had asked for, so I had to modify the program to their specifications. Since the project has pretty much been overdue since I took it over, it has always been important that we didn't add or change much of the original system; I could only modify the existing bits.

    Fast forward to today, when I have finally addressed all their comments and issues with the program. But now I think the users have opened their eyes (even though they saw this program many times) to the fact that they will be going backwards with this new system, that it will be much worse than the tool they have today. (For a long time I have been the only resource on the project: project manager, tester, developer, integration specialist, etc.)

    My problem is that I lost faith in this system quite early because of the nature of the program. Although I made many changes and improvements to the system, I wholeheartedly sympathize with the poor users who are going to start using it. It's not nearly doing all the things it should do. I had a conversation internally with my boss where I told him what I thought about it: that if I were the customer, I wouldn't have spent money developing it.

    So what do I do now? The system is ready, on a staging system, and nobody likes it; it's too slow and boring and does maybe 50% of what they need it to do. Despite how much energy I've put in and how much I've worked around the clock on this project, I wouldn't mind scrapping the system. But we've spent a lot of money (well, my salary) developing it, and my company wants us to be proud of everything we do and advocate it. How do I tackle the contractor when he asks for advice? Surely I can tell him "this is what we agreed upon based on your use case scenarios" and be done with it? How do I inform my boss about this progress? He knows how I feel about it, but I always get the feeling he lets my criticism pass him by as just hot air, gone tomorrow.

    Read the article

  • Workflow 4.5 is Awesome, can't wait for 5.0!

    - by JoshReuben
    About 2 years ago I wrote a blog post describing what I would like to see in Workflow vNext: http://geekswithblogs.net/JoshReuben/archive/2010/08/25/workflow-4.0---not-there-yet.aspx At the time WF 4.0 was a little rough around the edges: the State Machine was on CodePlex and people were simulating state machines with Flowcharts. Last year I built a near-realtime machine management system using WF 4.0.1; it's managing the internal operations of this device: http://landanano.com/products/commercial

    Well, WF 4.5 has come a long way, and many of my gripes have been addressed:
    - C# expressions - no more VB 'AndAlso' clauses
    - state machine awesomeness - can query current state
    - many designer improvements - Document Outline is so much more succinct than the Designer!
    - separate WCF Service Contract interfaces and the ability to generate activities from contract operations
    - the ability to rehydrate to updated flow definitions via DynamicUpdateMap and WorkflowIdentity

    You can read about the new features here: http://msdn.microsoft.com/en-us/library/hh305677(VS.110).aspx

    2013 could be the year of Workflow evangelism for .NET, as it comes together as the DSL language. E.g. on Azure it could be used to graphically orchestrate between WebRoles, WorkerRoles, AppFabric Queues and the ServiceBus - that would be grand.

    Here's a list of things I'd like to see in Workflow 5.0:
    - Stronger parallelism support for true multithreaded workflows. A Workflow executes on a single thread - wouldn't it be great if we had the ability to model TPL DataFlow? Parallel is not really parallel, it just allows AsyncCodeActivity.
    - support for recursion
    - an ExpressionTree activity with an editor design surface
    - a math activity pack
    - return of an application-level protocol (3.51 WF services) - automatically expose a state machine as a WCF service, with bookmark Receive activities generated from OperationContract automatically placed in state transition triggers
    - a new HTML5 ActivityDesigner control - with support for different skinnable CSS3 hooks and remote connectivity (had to roll my own)
    - a data flow view - crucial to understanding the big picture
    - the ability to refactor a Sequence into a custom activity in a separate .xaml file - like Expression Blend does for UserControl
    - state machine global error handling - if all states go to an error state, you quickly get visual spaghetti. Now you could nest a state machine, but what if you want an application-level protocol whereby each state exposes certain WCF ops?
    - DSL RAD editing - make the Document Outline into a DSL editor for adding activities. For WF to really succeed as a higher level of abstraction, it needs to be more productive than raw coding - drag & drop on the designer is currently too slow compared to just typing code.
    - an extensible Wizard API - for a pluggable WF editor experience
    - other execution models beyond Sequence, Flowchart & StateMachine: SSIS, Behavior Trees, Wolfram Model tool - surprise us!
    - improvements to the Designer debugging API - SourceLocation is tied to XAML file line number and char position, and ModelService access seems convoluted - why not leverage WPF LogicalTreeHelper / VisualTreeHelper?

    Workflow Team, keep on rocking!
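    The post lists the WF 4.5 features at a high level; for anyone who hasn't touched the new state machine yet, here is a minimal sketch of the StateMachine object model in code. This is my own illustration, not taken from the post: it assumes .NET 4.5 with a reference to System.Activities, and the state names and messages are invented. A null-trigger transition is used so the machine runs to completion under WorkflowInvoker; in a real service you would put trigger activities (e.g. WCF Receive) on the transitions and host the workflow in something longer-lived than WorkflowInvoker.

        // Minimal WF 4.5 StateMachine sketch (illustrative names only).
        using System.Activities;
        using System.Activities.Statements;

        class StateMachineSketch
        {
            static void Main()
            {
                var running = new State
                {
                    DisplayName = "Running",
                    IsFinal = true,
                    Entry = new WriteLine { Text = "Machine is running" }
                };

                var idle = new State
                {
                    DisplayName = "Idle",
                    Entry = new WriteLine { Text = "Machine is idle" }
                };

                // Null-trigger transition: taken as soon as the Idle state's Entry completes.
                idle.Transitions.Add(new Transition { To = running });

                var machine = new StateMachine
                {
                    InitialState = idle,
                    States = { idle, running }
                };

                WorkflowInvoker.Invoke(machine);
            }
        }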

    Read the article

  • Handy Generic JQuery Functions

    - by Steve Wilkes
    I was a bit of a late-comer to the jQuery party, but now I've been using it for a while it's given me a host of options for adding extra flair to the client side of my applications. Here are a few generic jQuery functions I've written which can be used to add some neat little features to a page. Just call any of them from a document ready function.

    Apply jQuery ThemeRoller Styles to all Page Buttons

    The jQuery ThemeRoller is a great tool for creating a theme for a site based on colours and styles for particular page elements. The jQuery.UI library then provides a set of functions which allow you to apply styles to page elements. This function applies a jQuery ThemeRoller style to all the buttons on a page - as well as any elements which have a button class applied to them - and then makes the mouse pointer turn into a cursor when you mouse over them:

        function addCursorPointerToButtons() {
            $("button, input[type='submit'], input[type='button'], .button")
                .button().css("cursor", "pointer");
        }

    Automatically Remove the Default Value from a Select Box

    Required drop-down select boxes often have a default option which reads 'Please select...' (or something like that), but once someone has selected a value, there's no need to retain that. This function removes the default option from any select boxes on the page which have a data-val-remove-default attribute once one of the non-default options has been chosen:

        function removeDefaultSelectOptionOnSelect() {
            $("select[data-val-remove-default='']").change(function () {
                var sel = $(this);
                if (sel.val() != "") { sel.children("option[value='']:first").remove(); }
            });
        }

    Automatically add a Required Label and Stars to a Form

    It's pretty standard to have a little * next to required form field elements. This function adds the text * Required to the top of the first form on the page, and adds *s to any element within the form with the class editor-label and a data-val-required attribute:

        function addRequiredFieldLabels() {
            var elements = $(".editor-label[data-val-required='']");
            if (!elements.length) { return; }

            var requiredString = "<div class='editor-required-key'>* Required</div>";
            var prependString = "<span class='editor-required-label'> * </span>";

            var firstFormOnThePage = $("form:first");
            if (!firstFormOnThePage.children('div.editor-required-key').length) {
                firstFormOnThePage.prepend(requiredString);
            }

            elements.each(function (index, value) {
                var formElement = $(this);
                if (!formElement.children('span.editor-required-label').length) {
                    formElement.prepend(prependString);
                }
            });
        }

    I hope those come in handy :)

    Read the article

  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase/Cassandra for a token system we're re-implementing. I can probably squeeze quite a lot more out of MySQL, but it just seems we've come to clinging on to the wrong tool for the task just because we know it well, and eventually we will hit a wall (as happened to us in other areas). Naturally I started looking into possible NoSQL solutions. The prominent ones (at least in terms of buzz) are HBase and Cassandra.

    The story is more or less like this:
    - A user can send a gift to other users.
    - Each gift has a list of recipients, or is public, in which case it is limited by number or expiration date.
    - For each gift sent we generate a token that uniquely identifies that gift.
    - For each gift we track the list of potential recipients and their current status relating to that gift (accepted, declined, etc.).
    - A user can request to see all his currently pending gifts.
    - A user can request a list of users he has sent a gift to today (used to limit the number of gifts sent).
    - We need the ability to "dump" or "ignore" expired gifts (x-day-old gifts are considered expired).

    There are some other requirements, but I believe the above covers the essentials. How would I go about modeling that using HBase or Cassandra?

    Well, the wall was performance. A few tens of millions of records per day over 2 tables, kept for 2 weeks (I wish I could have kept it for more, but there was no way). The response times kept getting slower and slower until eventually we had to start cutting down the number of days we kept data. Caching helps here, but it's not an ideal solution since a big part of the ops are updates.

    Also, as I hinted in my original post, we use MySQL extensively. We know exactly what it can and can't do, both in naive implementations, followed by native partitioning, and finally by horizontally sharding our dataset on the application level to reside on multiple DB nodes. It can be done, but that's not really what I'm trying to get from this. I asked a very specific question about designing a solution using a NoSQL solution, since it's very hard to find examples of such designs out there.

    Brainlag, I'm not trying to come off as rude. I actually appreciate a lot that you are the only one who even bothered to respond. But I see it over and over again: people ask questions and others assume they have no idea what they're talking about and give an irrelevant answer. Ignore RDBMS please. The question is about NoSQL.

    Read the article

  • Who Are the BI Users in Your Neighborhood?

    - by [email protected]
    By Brian Dayton on March 19, 2010 10:52 PM

    Forrester's Boris Evelson recently wrote a blog titled "Who are the BI Personas?" that I enjoyed for a number of reasons. It's a quick read, easy to grasp and (refreshingly) focuses on the users of technology vs. the technology. As Evelson admits, he meant to keep the reference chart at a high level because there are too many different permutations and additional sub-categories to make such a chart useful.

    For me, I wouldn't head into the technical permutations but more the contextual use of BI and the issues that users experience. My thoughts brought up more questions than answers, such as:

    Context:
    - HOW: With the exception of the "Power User" persona--likely some sort of business or operations analyst?
    - WHEN: Are they using the information to make real-time decisions on the front lines (a customer service manager or shipping/logistics VP) or are they using this information for cumulative analysis and business planning? Or both?
    - WHERE: What areas of the business are more or less likely to rely on BI across an organization? Human Resources, Operations, Facilities, Finance--and why are some more prone to use data-driven analysis than others?

    Issues:
    - DELAYS & DRAG ON IT: One of the persona characteristics Evelson calls out is a reliance on IT. Every persona except for the "Power User" has a heavy reliance on IT for support. What business issues or delays does that cause for users? What is the drag on IT resources who could potentially be creating instead of reporting?
    - HOW MANY CLICKS: If BI is being used within the context of a transaction (a sales manager looking for upsell opportunities, as an example), is that person getting the information within the context of that action or transaction? Or are they minimizing screens, logging into another application or reporting tool, running queries, etc.?

    Who are the BI users in your neighborhood or line of business? Do Evelson's personas resonate--and do the tools that he calls out (he refers to it as "BI Style") resonate with what your personas have or need? Finally, I'm very interested in whether BI use is viewed as a bolt-on... or an integrated part of your daily enterprise processes?

    Read the article

  • The State of the Internet -- Retail Edition

    - by David Dorf
    Over at Business Insider, there's a great presentation on the State of the Internet done in the Mary Meeker style. It's 138 slides, so I took the liberty of condensing it down to the 15 slides that directly apply to the retail industry. However, I strongly recommend looking at the entire deck when you have time. And while you're at it, Business Insider just launched a retail portal that's dedicated to retail industry content. Please check it out as well. My take-aways are below, after the slide show.

    [Source: Business Insider]

    Here are a few things I took away from the statistics:
    - Facebook and Twitter are in their infancy. While all retailers should have social programs, search is still the driver and therefore should receive the lion's share of investment. Facebook referrals are up 92% year-over-year, but Google still does 80% of the referrals.
    - E-commerce continues to grow at breakneck speed, but in-store commerce is still king. Stores are not showrooms yet. And social commerce pure-plays like Gilt and Groupon are tiny but worthy of some attention.
    - There are more smartphones than PCs on the internet, and the disparity will continue to grow. PC growth will be flat and tablet use will continue to grow.
    - Mobile accounts for 12% of all internet traffic.
    - A quarter of smartphone sales come from China, so anyone with a presence there had better have a strong mobile strategy.
    - 38% of people have used their smartphone to make a purchase, and many use their smartphones inside stores. Smartphones are a critical consumer tool for shopping.
    - Mobile is starting to drive significant traffic to e-commerce sites, especially tablets. Tablet strategies are crucial for retailers.
    - Mobile payments from the likes of PayPal and Square are growing quickly. It will be interesting to see how NFC plays in this area.
    - Other mobile operating systems are losing market share to iOS and Android. I wonder if Microsoft can finally make a dent?

    The internet is being dominated by mobile devices, and retailers had better have a strong mobile strategy to meet consumer demand.

    Read the article

  • Confusion about inheritance

    - by Samuel Adam
    I know I might get downvoted for this, but I'm really curious. I was taught that inheritance is a very powerful polymorphism tool, but I can't seem to use it well in real cases. So far, I can only use inheritance when the base class is an abstract class.

    Examples:
    - If we're talking about Product and Inventory, I quickly assumed that a Product is an Inventory item, because a Product must be inventorized as well. But a problem occurred when the user wanted to sell their Inventory item. It just doesn't seem right to change an Inventory object to its subtype (Product); it's almost like trying to convert a parent to its child.
    - Another case is Customer and Member. It is logical (at least for me) to think that a Member is a Customer with some more privileges. The same problem occurred when the user wanted to upgrade an existing Customer to become a Member.
    - A very trivial case is the Employee case, where Manager, Clerk, etc. can be derived from Employee. Still, the same upgrading issue.

    I tried to use composition instead for some cases, but I really want to know if I'm missing something for an inheritance solution here. My composition solutions for those cases:
    - Create a reference to Inventory inside a Product. Here I'm making the assumption that Product and Inventory are talking in different contexts: while Product is in the context of sales (price, volume, discount, etc.), Inventory is in the context of physical management (stock, movement, etc.).
    - Make a reference to Membership inside the Customer class instead of the previous inheritance solution. Therefore upgrading a Customer is only about instantiating the Customer's Membership property. (A sketch of this is below.)
    - The Employee example keeps being taught in basic programming classes, but I think it's more proper to have those Manager, Clerk, etc. derived from an abstract Role class and make it a property in Employee.

    I found it difficult to find an example of a concrete class deriving from another concrete class. Is there any inheritance solution with which I can solve those cases? Being new to this OOP thing, I really, really need guidance. Thanks!
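    Here is the sketch mentioned above for the Customer/Membership case: a rough, hypothetical example (class and property names are mine, not the poster's) showing that with composition "upgrading" just means assigning a Membership, with no type conversion involved.

        // Illustrative composition sketch; names are invented for this example.
        using System;

        public class Membership
        {
            public DateTime MemberSince { get; private set; }
            public Membership(DateTime memberSince) { MemberSince = memberSince; }
        }

        public class Customer
        {
            public string Name { get; private set; }

            // Null means "plain customer"; assigned means "member".
            public Membership Membership { get; private set; }

            public bool IsMember { get { return Membership != null; } }

            public Customer(string name) { Name = name; }

            // Upgrading does not convert the object to a subtype;
            // it just fills in the Membership reference.
            public void UpgradeToMember()
            {
                if (!IsMember) { Membership = new Membership(DateTime.UtcNow); }
            }
        }

        class Demo
        {
            static void Main()
            {
                var customer = new Customer("Samuel");
                Console.WriteLine(customer.IsMember);   // False
                customer.UpgradeToMember();
                Console.WriteLine(customer.IsMember);   // True
            }
        }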

    Read the article

  • Lookup Viewer

    - by Geertjan
    The Maven integrated view that I showed yesterday I was able to create because I happened to know that an implementation of SubprojectProvider and LogicalViewProvider are in the Lookup of Maven projects. With that knowledge, I was able to use and even delegate to those implementations. But what if you don't know that those implementations are in the Lookup of the Project object? In the case of the Maven Project implementation, you could look in the source code of the Maven Project implementation, at the "getLookup" method. However, any other module could be putting its own objects into that Lookup, dynamically, i.e., at runtime. So there's no way of knowing what's in the Lookup of any Project object or any other object with a Lookup.

    But now imagine that you have a Lookup Viewer, as a tool during development, which you would exclude when distributing the application. Whenever new objects are found in the Lookup, the viewer displays them. You could install the Lookup Viewer into NetBeans IDE, or any other NetBeans Platform application, and then get a quick impression of what's actually in the Lookup when you select a different item in the application during development. Here it is (though I vaguely remember someone else writing something similar):

    Above, a Maven Project is selected. The Lookup Window shows that, among many other classes, an implementation of SubprojectProvider and LogicalViewProvider are found in the Lookup when the Maven Project is selected. If an item in the Lookup Window has its own Lookup, the content of that Lookup is displayed as child nodes of the Lookup, etc., i.e., you can explore all the way down the Lookup of each item found within objects found within the current selection. (What's especially fun is seeing the SaveCookieImpl being added and removed from the Lookup Window when you make/save a change in a document.)

    Another example is below, showing the Lookup Window installed in a custom application created during a course at MIT in Boston:

    A small trick I had to apply is that I always show the previous Lookup, since the current Lookup, when you select one of the Nodes in the Lookup Window, would be the Lookup of the Lookup Window itself! If anyone is interested in this, I can publish the NetBeans module providing the above window to the NetBeans update center.

    Read the article

  • WebCenter Customer Spotlight: Azul Brazilian Airlines

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Azul Linhas Aéreas Brasileiras (Azul Brazilian Airlines) is the third-largest airline in Brazil, serving 42 destinations with a fleet of 49 aircraft and employing 4,500 crew members. The company wanted to offer an innovative site with a simple purchasing process for customers to search for and buy tickets, and for the company's marketing team to more effectively conduct its campaigns. To this end, Azul implemented Oracle WebCenter Sites, succeeding in gathering all of the site's key information onto a single platform. Azul can now complete the Web site content updating process, which used to take approximately 48 hours, in less than five minutes.

    Company Overview
    Azul Linhas Aéreas Brasileiras (Azul Brazilian Airlines) has established itself as the third-largest airline in Brazil, based on a business model that combines low prices with a high level of service. Azul serves 42 destinations with a fleet of 49 aircraft. It operates 350 daily flights with a team of 4,500 crew members. Last year, the company transported 15 million passengers, achieving a 10% share of the Brazilian market, according to the Agência Nacional de Aviação Civil (ANAC, or the National Civil Aviation Agency).

    Business Challenges
    The company wanted to offer an innovative site with a simple purchasing process for customers to search for and buy tickets, and for the company's marketing team to more effectively conduct its campaigns:
    - Provide customers with an innovative Web site with a simple process for purchasing flight tickets
    - Bring dynamism to the Web site's content updating process to provide autonomy to the airline's strategic departments, such as marketing and product development
    - Facilitate integration among the site's different application providers, such as ticket availability and payment processing, on which ticket sales depend

    Solution Deployed
    Azul worked with the Oracle partner TQI to implement Oracle WebCenter Sites, succeeding in gathering all of the site's key information onto a single platform. Previously, at least three servers and corporate information environments had directed data to the portal. The single Oracle-based platform now facilitates site updates, which are daily and constant.

    Business Results
    - Gained development freedom in all processes, from implementation to content editing
    - Gathered all of the Web site's key information onto a single platform, facilitating its daily and constant updating, whereas the information was previously spread among at least three IT environments and had to go through a complex process to be made available online to customers
    - Reduced the time needed to update banners and other Web site content from an average of 48 hours to less than five minutes
    - Simplified the flight ticket sales process thanks to tool flexibility that enabled the company to improve Web site usability

    "Oracle WebCenter Sites provides an easy-to-use platform that enables our marketing department to spend less time updating content and more time on innovative activities. Previously, it would take 48 hours to update content on our Web site; now it takes less than five minutes. We have shown the market that we are innovators, enabling customer convenience through an improved flight ticket purchase process." Kleber Linhares, Information Technology and E-Commerce Director, Azul Linhas Aéreas Brasileiras

    Additional Information
    - Azul Brazilian Airlines Case Study
    - Oracle WebCenter Sites
    - Oracle WebCenter Sites Satellite Server

    Read the article

  • DTLoggedExec 1.1.2008.4 Released!

    - by Davide Mauri
    Today I've released the latest version of my DTExec replacement tool, DTLoggedExec. The main changes are the following:
    - Used a new strategy for version numbers. Now it will follow the pattern Major.Minor.TargetSQLServerVersion.Revision
    - Added support for Auto Configurations
    - Fixed a bug that reported an incorrect number of errors and warnings to Log Providers
    - Fixed a bug that prevented correct casting of values when using the /Set and /Param options
    - Errors and Warnings are now counted more precisely
    - Updated database and log import scripts to categorize logs by projects and sections. E.g.: Project: MyBIProject; Sections: Staging, Datawarehouse
    - Removed unused report stored procedures from the database
    - Updated Samples: 12 samples are now available to show ALL DTLoggedExec features
    - From this version only SSIS 2008 will be supported

    http://dtloggedexec.codeplex.com/releases/view/62218

    It's useful to say something more about a couple of specific points:

    From this version only SSIS 2008 will be supported
    Yes, Integration Services 2005 is not supported anymore. The latest version capable of running SSIS 2005 packages is 1.0.0.2.

    Updated database and log import scripts to categorize logs by projects and sections
    When you import a log file, you can now assign it to a Project and to a Section of that project. In this way it's easier to gather statistical information for an entire project or a subsection of it. This also allows you to store logged data of packages belonging to different projects in the same database.

    Updated Samples
    A complete set of samples that shows how to use all DTLoggedExec features is now shipped with the product. Enjoy!

    Added support for Auto Configurations
    This point will have a post of its own, since it's quite important and is by far the biggest new feature introduced in this release. To explain it in a few words, I can just say that you don't need to waste time with complex DTS configuration files or options, since a package will configure itself automatically. You just need to write a single statement as a parameter for DTLoggedExec. This feature can simplify deployment *a lot* :)

    In the next days I'll write the mentioned post on Auto Configurations and I'll update the documentation available on the DTLoggedExec website: http://dtloggedexec.davidemauri.it/MainPage.ashx

    Read the article

  • What kind of position matches my skills, experience and interests? [closed]

    - by Ryan
    I work in a large firm and my current job covers a variety of different duties. Due to several factors I am seriously considering finding a new job (hours, pay cut, limited career growth). I have worked for the company nearly 4 years, and almost 2 years ago I transitioned into more of a business analyst role (previously I was working in a client-facing role for our audit group).

    In this role I have overseen all aspects of the development of a large-scale re-platforming of our firm's key tool for analyzing investment portfolios. I gathered requirements, wrote specs, designed the UI and functionality, worked closely with developers (onshore and offshore) to see to it that the implementation was correct, managed schedules and was the lead tester. This is a large-scale system used by thousands of people around the world. I've also written Excel macros, reports in SQL, given trainings, written technical manuals, interfaced with senior managers and partners, etc.

    I've been on a couple of interviews sporadically, most of which were for positions aimed at higher management consulting type roles, dealing with strategy, overall process management, project management, etc. What really interests me is the technical stuff and overseeing a project from beginning to end (although I would rather not have to do so many of the tasks on my own). I genuinely like a lot of what I do, but the company culture and attitude towards overworking people, combined with my recent pay cut (my overtime was cut due to a promotion to a higher level), has led me to want to seek work elsewhere.

    The problem is: what type of work could I realistically do? I feel like traditional business analysis is too much business and not enough tech stuff, and I've really taken a shine lately to beefing up my programming abilities and creating small programs to automate things around work. I also feel that my actual years of experience as a business analyst (figure 1.5 years realistically) put me at a junior level doing a lot of grunt requirements gathering, when the work that I have been doing with my current company is more in line with what a Program Manager does (depending on your definition, I guess).

    So in reality, when I'm job hunting I get a bit perplexed, because I feel like the traditional BA stuff wouldn't really suit me, and even if it did, it usually requires 5-10 years of experience for the type of work that is similar to what I've done (and I've also found most BA jobs to be contract only, which at the moment I'm not too keen on). Program Manager is something that interests me, but again I feel like the experience is lacking because that's a much more senior position.

    Am I in some kind of career no-man's land? Any idea what would best suit me given my experience and abilities, as well as my interests? I plan to keep learning programming on the side, but I don't expect to get a job as a straight programmer given my relative inexperience with programming.

    Read the article

  • How are you coping with Ubuntu's Unity app launcher? (It auto-hides, can't minimize apps)

    - by Bad Learner
    [Firstly, let me tell you that this cannot be subjective in any way, as I think at least Ubuntu beginners will have these questions boggling in their minds; and yes, this is a question that has a definite answer - so, I am completely within the rules.]

    Okay, coming to the point: I see that Ubuntu has used Unity since v10.xx (netbook edition?) and carried it over to v11.04 & v11.10. As someone who's stuck to Windows for all these years, it's somewhat difficult to cope with Ubuntu's Unity, for the following reasons:
    [1] The Unity app launcher (to the screen's left) auto-hides when a window is maximized.
    [2] And once launched, apps cannot be minimized by clicking the app's icon in the launcher. I have to go to the top-left of the screen and click the "_" button.

    I do know I can fix these issues by installing some configuration tool. But the thing is, if that's how it's meant to work, Canonical/Ubuntu would have designed it that way. But they didn't. Why?

    With regard to the above points [1] and [2]:

    [1] EDITED: So, does it mean it's good to work without maximizing the windows? Because if I maximize the window, the app launcher hides. And I need to hover the mouse to the left of the screen, wait a bit (even if it's a second or even less, I can still feel the lag), and then click on the next app icon in the launcher to switch to it. I do know I can use Alt+Tab to switch, but I am not sure which window comes next. This, I feel, isn't productive. Also, this makes me feel Ubuntu is designed for large screens (it's nice on my 1920x1080 screen), because I can have two windows side-by-side or something like that on a large screen. This is not possible on smaller screens.

    [2] Being able to minimize an application's window by clicking on its icon in the launcher (just like it works on Windows & probably elsewhere) would have been great, rather than having to go to the top-left and click the _ (minimize) button, which brings up the app launcher itself (from hiding) most of the time.

    It's too tiring to have these small issues in the UI. I really would like to know how you are coping with these issues the way they are.

    Read the article

  • Fail to start Windows after Ubuntu 11.10 install

    - by user49995
    Computer: HP Pavilion dv7-6140eo
    OS: Originally Win7

    I recently decided to try out Ubuntu, and I decided to dual-boot it with Windows 7. First I googled some how-tos, then I downloaded Ubuntu onto a memory stick and made a second partition (I originally only had one partition, which I shrank and used the unallocated space to install onto during the Ubuntu install). During the install I set the format type to ext4 (or something, it was the default option), chose the "in the beginning" option and set the last option, the mount point, to "/". The install was successful.

    However, when I restarted my computer I wasn't able to choose which operating system to start; it went right into Windows. After showing the Windows logo for half a second before rebooting, I get a blue screen (see bottom of the page).

    Trying to fix it, I deleted the newly made partition I had just installed Ubuntu onto (seeing it wasn't working either). This made no difference. I proceeded with installing Ubuntu again, so I would at least have a functioning computer, and now Ubuntu works fine (I'm on it now). The only difference on start-up is that I get a GRUB menu asking me to choose between several options, including Linux and Windows 7 (loader).

    Now, if I choose Windows 7, I get the message "Windows was unable to start. A recent software or hardware change might be the cause". It recommends me to choose the first of the two options it provides: starting the start-up repair tool. The second option is starting Windows normally. If I start Windows normally, the same thing happens as earlier.

    My computer does not have a Windows installation CD. Although, it has (at least it used to, if I haven't screwed that up too) a 17 GB recovery partition. In addition, I made an image of the computer onto an external hard drive when I first got it. Though, I have no idea how to use either.

    If anyone has any idea how I can make Windows work again or reinstall it (I already backed up my files) it would be greatly appreciated. I would still prefer to dual-boot between the two functioning operating systems, but I will settle for a functioning Windows 7. Thanks a lot for any replies.

    Blue screen:
    A problem has been detected and Windows has been shut down to prevent damage to your computer. If this is the first time you've seen this Stop error screen, restart your computer. If this screen appears again, follow these steps: Check for viruses on your computer. Remove any newly installed hard drives or hard drive controllers. Check your hard drive to make sure it is properly configured and terminated. Run CHKDSK /F to check for hard drive corruption, and then restart your computer.
    Technical information:
    *** STOP: 0x0000007B (0xFFFFF880009A97E8, 0xFFFFFFFFC0000034, 0x0000000000000000, 0x0000000000000000)

    Read the article

  • How can I diagnose/debug "maximum number of clients reached" X errors?

    - by jmtd
    Hi,

    I'm hitting a problem whereby X prevents processes from creating windows, uttering something like the following into ~/.xsession-errors:

        cannot open display: :0.0
        Maximum number of clients reached

    Searching around, there are lots of examples of people facing this problem, and sometimes people identify which program they are running is using up all the client slots. See e.g. LP 70872 (Firefox), LP 263211 (gnome-screensaver). For what it's worth, I run gnome-terminal, thunderbird, chromium-browser, empathy, tomboy and virtualbox nearly all the time, on top of the normal stuff you get with the GNOME desktop, and occasionally some other bits and pieces.

    However, my question is not "which of my programs is causing this problem" but rather: how can one go about diagnosing this problem? In the above (and other) bugs, forum reports, etc., a number of tools are suggested:
    - xlsclients - lists the client applications for the given display, but I don't think that corresponds to 'X clients'
    - xrestop - a top-style X resources tool, one row per X client. Lots of '' clients, not shown in xlsclients output
    - xwininfo -root -children - lists X window objects

    From what I can gather, the problem might not be too many clients at all, but rather resources kept around in the X server for clients which have long since detached. But it would also appear that you cannot (easily?) relate X resources back to their client. Can one effectively diagnose this issue once it has started to occur, or is a tedious divide-and-conquer approach for the apps I run the only approach open to me?

    Update, Jan 2011: I think I have resolved this issue. For the benefit of anyone stumbling across this: nautilus and/or compiz or something in that chain of software was segfaulting due to a wallpaper I had. I had chosen an XML file as my wallpaper, which defined a rotating gallery of images. It was hand-made, but based on /usr/share/backgrounds/contest/background-1.xml or similar. I disabled the wallpaper and have not had a crash since. I'm not marking this as answered yet, since the actual specific problem was not my question; how to diagnose it was. Unfortunately this was mostly trial and error, which sucks.

    Read the article

  • OOF checklist

    - by Daniel Moth
    When going on vacation or otherwise being out of office (known as OOF in Microsoft), it is polite and professional that our absence creates the minimum disruption possible to the rest of the business, and especially our colleagues. Below is my OOF checklist - I try to do these as soon as I know I'll be OOF, rather than leave it for the night before.
    1. Let the relevant folks on the team know the planned dates of absence and check if anybody was expecting something from you during that timeframe. Reset expectations with them, and as applicable try to find another owner for individual activities that cannot wait.
    2. Go through your calendar for the OOF period and decline every meeting occurrence so the owner of the meeting knows that you won't be attending (similar to my post about responding to invites). If it is your meeting, cancel it so that people don't turn up without the meeting organizer being there. Do this even for meetings where the folks should know due to step #1. Over-communicating is a good thing here and keeps calendars all around up to date.
    3. Enter your OOF dates in whatever tool your company uses. Typically that is the notification to your manager.
    4. In your Outlook calendar, create a local Appointment (don't invite anyone) for the date range (All day event), setting the "Show As" dropdown to "Out of Office". This way, people won't try to schedule meetings with you on those days.
    5. If you use Lync, set the status to "Off Work" for that period.
    6. If you won't be responding to email (which when on your vacation you definitely shouldn't), then in Outlook set up "Automatic Replies (Out of Office)" for that period. This way people won't think you are rude when you don't reply to their emails. In your OOF message point to an alias (ideally of many people) as a fallback for urgent queries.
    7. If you want to proactively notify individuals of your OOFage, then schedule and send a multi-day meeting request for the entire period. Remember to set the "Show As" to "Free" (so their calendar doesn't show busy/oof to others), set the "Reminder" to "None" (so they don't get a reminder about it), set "Low Importance", and uncheck both "Response Options" so if they don't want this on their calendar, it is just one click for them to get rid of it. Aside: I have another post with advice on sending invites.
    8. If you care about people who would not observe the above but could drop by your office, stick a physical OOF note at your office door or chair/monitor or desk.

    Have I missed any?

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Profiling Startup Of VS2012 – Ants Profiler

    - by Alois Kraus
    I just downloaded ANTS Profiler 7.4 to check how fast it is and how deep I can analyze the startup of Visual Studio 2012. The Pro version, which is the useful one, costs 445€, which is OK. To measure a complex system I decided to simply profile VS2012 (Update 1) on my older Intel 6600 2.4GHz with 3 GB RAM and a 32-bit Windows 7.

    ANTS Profiler is really easy to use, so let's try it out. ANTS Profiler does want to start the profiled application on its own, which seems to be rather common. I chose method-level timing of all managed methods. In the configuration menu I wanted to get all call stacks to get full details. Once this is configured you are ready to go.

    After that you can select the Method Grid to view Wall Clock Time in ms. I hate percentages, which are on by default, because I want to look at where absolute time is spent and not something else. From the Method Grid I can drill down to see where time is spent, and I can look at the decompiled methods where the time is spent. This does really look nice.

    But did you see the size of the scroll bar in the method grid? Although I wanted all call stacks, I get only about 4 pages of methods to drill down into. From the scroll bar count I would guess that the profiler shows me about 150 methods for the complete VS startup. This is nonsense. I will never find a bottleneck in VS when I am presented only a fraction of the methods that were actually executed. I also tried, in the configuration window, to profile the extremely trivial functions as well, but there was no noticeable difference. It seems that ANTS Profiler filters away way too many details to be useful for bigger systems.

    If you want to optimize a CPU-bound operation inside NUnit, then ANTS Profiler, with its line-level timings, is a very nice tool to work with. But for bigger stuff it is certainly not usable. I also do not like that I must start the profiled application from the profiler UI. This makes it hard to profile processes which are started by some other process.

    Next: JetBrains dotTrace

    Read the article

  • Handling bugs, quirks, or annoyances in vendor-supplied headers

    - by supercat
    If the header file supplied by a vendor of something with which one's code must interact is deficient in some way, in what cases is it better to:
    1. Work around the header's deficiencies in the main code
    2. Copy the header file to the local project and fix it
    3. Fix the header file in the spot where it's stored as a vendor-supplied tool
    4. Fix the header file in the central spot, but also make a local copy and try to always keep the two matching
    5. Do something else

    As an example, the header file supplied by ST Micro for the STM320LF series contains the lines:

        typedef struct
        {
            __IO uint32_t MODER;
            __IO uint16_t OTYPER;
            uint16_t RESERVED0;
            ....
            __IO uint16_t BSRRL;   /* BSRR register is split to 2 * 16-bit fields BSRRL */
            __IO uint16_t BSRRH;   /* BSRR register is split to 2 * 16-bit fields BSRRH */
            ....
        } GPIO_TypeDef;

    In the hardware, and in the hardware documentation, BSRR is described as a single 32-bit register. About 98% of the time one wants to write to BSRR, one will only be interested in writing the upper half or the lower half; it is thus convenient to be able to use BSRRH and BSRRL as a means of writing half the register. On the other hand, there are occasions when it is necessary that the entire 32-bit register be written as a single atomic operation. The "optimal" way to write it (setting aside white-spacing issues) would be:

        typedef struct
        {
            __IO uint32_t MODER;
            __IO uint16_t OTYPER;
            uint16_t RESERVED0;
            ....
            union   /* Allow BSRR access as one 32-bit register or two 16-bit registers */
            {
                __IO uint32_t BSRR;                       /* 32-bit BSRR register as a whole */
                struct { __IO uint16_t BSRRL, BSRRH; };   /* Two 16-bit parts */
            };
            ....
        } GPIO_TypeDef;

    If the struct were defined that way, code could use BSRR when necessary to write all 32 bits, or BSRRH/BSRRL when writing 16 bits. Given that the header isn't that way, would better practice be to use the header as-is, but apply an icky typecast in the main code, writing what would idiomatically be thePort->BSRR = 0x12345678; as *((__IO uint32_t *)&(thePort->BSRRL)) = 0x12345678;, or would it be better to use a patched header file? If the latter, where should the patched file be stored and how should it be managed?

    Read the article

  • From Co-op to full-time: help with salary negotiation [closed]

    - by Peter
    Hey, I'm a co-op student that worked at a particular medium-size printing company for 8 months. I had a good time; it was lax, sometimes insufficiently challenging, but nonetheless I learned a whole lot. I stuck with them for another 5 months (including this month) at the same rate I was paid then, doing testing work, tool development, taking care of emergencies when the lead developers were away, and other smaller projects, and now bigger projects and problem handling (bad printer output etc.). I know their website (ecommerce) inside out, and I know their printing software inside out and have made many changes to both without a hitch.

    I have also done a lot of refactoring of the existing code base which, as far as I'm concerned, I believe I am the only one to have done, even though there is constant talk about it. I guess the unit testing paid off and lets me see the value in modularity, if even a tad more. Nevertheless, I have faith in my skill, and the restructuring I did turned out better than I had imagined.

    Now the problem is that I finish school next month, so I asked for a full-time spot the month after. They have been expanding: they hired a new guy a few months after my co-op spot started, and just now they hired a new guy to deal with the CRM application. The lead developer who wrote all of the software left 5 months ago, so it was up to all of us to learn what he had done over 4 years (including db, networking).

    So now I'm afraid that if I assert myself for a salary similar to the other guys', which I believe I am certainly on par with, I would be seen as ungrateful. It's hard to flip a switch and say, hey, double my pay, although I'm working with their bread and butter (printers), writing new features, and refactoring the whole application for extensibility. I love it regardless of pay. I also feel maybe I'm replaceable, although nobody knows the website better than myself and the lead web dev (not by a long shot), and nobody knows the printer software/drivers better than myself. I just thought they would have brought up a raise earlier on, and now it feels like they don't value my work. I'm also tired of worrying about it. I think my question is: well, what do I do next?

    Read the article

  • Block-level deduplicating filesystem

    - by James Haigh
    I'm looking for a deduplicating copy-on-write filesystem solution for general user data such as /home and backups of it. It should use online/inline/synchronous deduplication at the block-level using secure hashing (for negligible chance of collisions) such as SHA256 or TTH. Duplicate blocks need not even touch the disk. The idea is that I should be able to just copy /home/<user> to an external HDD with the same such filesystem to do a backup. Simple. No messing around with incremental backups where corruption to any of the snapshots will nearly always break all later snapshots, and no need to use a specific tool to delete or 'checkout' a snapshot. Everything should simply be done from the file browser without worry. Can you imagine how easy this would be? I'd never have to think twice about backing-up again! I don't mind a performance hit, reliability is the main concern. Although, with specific implementations of cp, mv and scp, and a file browser plugin, these operations would be very fast, especially when there is a lot of duplication as they would only need to transfer the absent blocks. Accidentally using conventional copy tools that do not integrate with the FS would merely take longer, waste some bandwidth when copying remotely and waste some CPU, as the duplicate data would be re-read, re-transferred and re-hashed (although nothing would be re-written), but would absolutely not corrupt anything. (Some filesharing software may also be able to benefit by integrating with the FS.) So what's the best way of doing this? I've looked at some options: lessfs - Looks unmaintained. Any good? [Opendedup/SDFS][3] - Java? Could I use this on Android?! What does [SDFS][4] stand for? [Btrfs][5] - Some patches floating around on mailing list archives, but no real support. [ZFS][6] - Hopefully they'll one day relicense under a true Free/Opensource GPL-compatible licence. Also, 2 years ago I had a go at an attempt in Python using Fuse at the file-level to be used over the top of a typical solid FS such as EXT4, but I found Fuse for Python underdocumented and didn't manage to implement all of the system calls. My first post here, so I can't post more than 2 links until I get over 10 rep: [3]: http://www.opendedup.org/ [4]: https://en.wikipedia.org/w/index.php?title=SDFS&action=edit&redlink=1 [5]: https://en.wikipedia.org/wiki/Btrfs#Features [6]: https://en.wikipedia.org/wiki/ZFS#Linux

    Read the article

  • Career advice: stay with PHP or start a new career in something else ( .Net?)

    - by Christian P
    I'm planning on moving to NY in 6-12 months tops, so I'm forced to find a new job. When I'm planing to start my life in another city it's also probably a good time to think about career changes. I've found a lot of different opinions about PHP vs .Net vs Java and this is not topic here. I don't want to start a new fight about which language is better. Knowing programming language is not the most important thing for being a software developer. To be a really good developer you need to know OOP, design patterns, testing... and language is just a tool to make things happen. So back to my question. I have mixed experience in IT - 1 year as an IT support guy (Windows administration and support), around 2 years of experience in embedded programming (VB.Net 2005) and for the last 2 years I'm working with PHP/MySQL. I have worked with Magento web shop, assisted in some projects in Symfony, modified few Drupal sites. My main concerns are following: Do I continue to improve my skills in PHP e.g. to start learning some major PHP framework like Zend, Symfony maybe get some PHP certification. Or do I start learning .NET or Java. I'm more familiar to .NET so I'll probably choose it if choice falls between .NET and Java ( or you could convince me to choose Java :). Career-wise, I don't know what is the best choice. Learning new framework and language is more time consuming then improving my existing skills in PHP. But with .NET you have a lot of possibilities (Windows 7 Phone development, Silverlight, WPF) and possibly bigger chances to find better jobs. PHP jobs are less payed then .NET, at least, according to my researches (correct me if I'm wrong). But if I start now with .NET I'm just a beginner and my salary will be low. I need at least 2+ years of experience in some language to even try to find some job that is paying higher than $50-60k in NY. My main goal in next 2-3 years is to try to find a job in a $60-80k category. Don't get me wrong, I'm not just chasing money, but money is an important factor when you're trying to start a family. I'm 27 years old and I feel that there isn't a lot of room for wrong decisions regarding my career, so any advice will be very welcome. Update Thank you all for spending time to help me with my problem. All of the answers and comments have been very helpful. I have decided to stick with PHP but also to learn C# and Silverlight 4. We'll see where the life will take me.

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use to speed up my package processing tasks like dist-upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question and second I will give some answers I have already found out.

    Question: How do I find out which packages I have not used at all? For example, I always use VLC, so I could remove the totem package. (Which I could have used some day, yes.) Of course package dependencies could force me to have programs installed which I will never use.

    Notes:
    - Find the packages which consume much space via synaptic: select "Status" in the lower left, select "Installed" in the upper left, sort the column on "size" in the upper right. Then you can decide which big packages you really need.
    - Use aptitude autoremove.
    - Use ubuntu-tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc.
    - Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser etc. (BTW: this is what I want to be helped with by my question.)
    - When removing packages I always favour "apt-get purge" over "aptitude remove --purge", as aptitude will often also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use thunderbird), aptitude wants to also remove "ubuntu-desktop" and 756 other packages, while apt-get just removes evolution and its helper packages like evolution-common.
    - The Ubuntu lens gives me the most recently used applications, which are candidates for keeping :)
    - Employ deborphan, as I read in this related answer: How do I clean up my harddrive?
    - I should certainly keep essential packages: Keep only essential packages.

    This question is pretty much a duplicate of "How to see what installed packages I have never used for cleaning purposes", but covering only a few aspects. However, one answer suggests using a program called unusedpkg, but the link seems down. There is also a program called Kleen (http://code.google.com/p/kleen/), but it won't compile in 11.10. I hacked it to compile, but the results are unusable, as for example the g++ package was marked as not used for 203, even though I had used it seconds ago for compiling Kleen itself ;) So don't use this tool.

    On http://wiki.debian.org/DebianPackageInformation I read that the package popularity-contest will produce log files with usage statistics. Unfortunately I didn't enable the popularity contest, so I can't find this log file.

    Read the article

  • Learning by doing (and programming by trial and error)

    - by AlexBottoni
    How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this:
    - I create a new branch (with Git, in my case)
    - I write a few unit tests (with JUnit, for example)
    - I write my code until it passes my tests

    So far, so good. The problem is that very often I do not know what I can do with the toolkit because it is brand new to me. I work as a consultant, so I cannot have my preferred language/platform/toolkit. I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little about, so I'm forced to "learn by doing" (actually, programming by "trial and error") and this makes me anxious.

    Please note that, at some point in the learning process, I usually already have:
    - read one or more five-star books
    - followed one or more web tutorials (writing working code a line at a time)
    - created a couple of small experimental projects with my IDE (IntelliJ IDEA, at the moment; I use Eclipse, NetBeans and others as well)

    Despite all my efforts, at this point I usually have just a coarse understanding of the platform/toolkit I have to use. I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and some non-trivial algorithm is a pain to implement and requires a lot of trial and error. Unfortunately, working by trial and error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean.

    Usually, at this stage I cannot use the Eclipse Scrapbook because the code I have to write is already too large and complex for this small tool. In the same way, I cannot use an independent small project for my experiments any more, because I need to try the new code in place. I can just write my code in place and rely on Git for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage.

    How do you face this phase of the development process? How do you learn by doing without making a mess of your codebase? Any tips & tricks, best practices or something like that?

    Read the article

  • Problem Solving vs. Solution Finding

    - by ryanabr
    By and large, most developers fall into one of two camps. I will try to explain what I mean by way of example. A manager gives a developer a task that is communicated like this: "Figure out why control A is not loading on this form". Now, right there it could be argued that the manager should probably have given better direction and said something more like: "Control A is not loading on the form; fix it". They might sound like the same thing to most people, but the first statement will have the developer problem-solving the reason why it is failing, while the second should have the developer looking for the solution to make it work, not focusing on why it is broken. In the end they might amount to the same thing, but I usually see the first approach take far longer than the second.
    The Problem Solver: The problem solver's approach to fixing something that is broken is likely to take the error or behavior being observed and start researching it with a tool like Google or any other search engine. Seven times out of ten this will yield results for the most common issues. The challenge is in the other 30% of issues, which will take the problem solver down the rabbit hole and cause them not to surface for days on end while every avenue is explored for the cause of the problem. In the end, they will probably find the cause of the issue and resolve it, but the cost can be days or weeks of work.
    The Solution Finder: The solution finder's approach to a problem begins the same way the problem solver's does. The difference comes in the more difficult cases. Rather than stick to a pure "this has to work, so I am going to work with it until it does" approach, the solution finder will look for other ways to satisfy the requirements, which may or may not use the original approach. For example, say there are two areas of an application with externally equivalent features, meaning that from a user's perspective the behavior is the same. Now, for whatever reason, area A stops working while area B keeps working. The problem solver will dig in to see why area A is broken, whereas the solution finder will investigate the difference between the two areas and solve the problem by potentially working around it. The other notable difference between the two types of developers is the point they reach before they re-emerge from their task. The problem solver will likely emerge with a triumphant "I have found the problem", whereas the solution finder will emerge with the more useful "I have the solution".
    Conclusion: At the end of the day, users are what drive features in software development. Without users there is no need for software. In today's world of software development, with so many tools to use and generally tight schedules, I believe that a workaround that takes 8 hours is a more fruitful approach than the purer solution to the problem that takes 40 hours.

    Read the article

  • PASS Summit for SQL Starters

    - by Davide Mauri
    I've received a bunch of emails from PASS Summit "First Timers" who are also somewhat new to SQL Server (by "somewhat" I mean people with less than 6 months' experience but some basic knowledge of the SQL Server engine) or are catching up from SQL Server 2000. The common question regards the sessions one should not miss in order to: have a broad view of the entire SQL Server platform; have some insight into some specific areas of SQL Server. Given that I'm on (semi-)vacation and have more free time (not true: I have to prepare slides & demos for several conferences, PASS Summit - Building the Agile Data Warehouse with SQL Server 2012 - and PASS 24H - Agile Data Warehousing with SQL Server 2012 - among them… but let's pretend it's true), I've decided to write a post answering these common questions. Of course this is my personal point of view, and given that the number and quality of the sessions delivered at the PASS Summit is so high that it is very difficult to make a choice, feel free to jump into the discussion and leave your feedback or, even better, answer with another post. I'm sure it will be very helpful to all the SQL Server beginners out there. I've limited myself to choosing at most 6 sessions for each track. Why 6? Because that's the maximum number of sessions you can follow in one day, and given that all the sessions will be on the Summit DVD, they are the answer to the following question: "If I have one day to spend in training, which sessions should I watch?". Of course a Summit is not a course, so a lot of the very basic concepts of well-established technologies won't be found here. Analysis Services, Integration Services, and MDX are not part of the Summit this time (at least not their basics). Enough with that; let's start with the session list ideal for getting a good overview of the whole SQL Server platform:
    - Geospatial Data Types in SQL Server 2012
    - Inside Unstructured Data: SQL Server 2012 FileTable and Semantic Search
    - XQuery and XML in SQL Server: Common Problems and Best Practice Solutions
    - Microsoft's Big Play for Big Data
    - Dashboards: When to Choose Which MSBI Tool
    - Microsoft BI End-User Tools 360°
    For what concerns Database Development, I recommend the following sessions:
    - Understanding Transaction Isolation Levels
    - What to Look for in Execution Plans
    - Improve Query Performance by Fixing Bad Parameter Sniffing
    - A Window into Your Data: Using SQL Window Functions
    - Practical Uses and Optimization of New T-SQL Features in SQL Server 2012
    - Taking MERGE Beyond the Basics
    For Business Intelligence Information Delivery:
    - Analyzing SSAS Data with Excel
    - Building Compelling Power View Reports
    - Managed Self-Service BI
    - PowerPivot 101
    - SharePoint for Business Intelligence
    - The Best Microsoft BI Tools You've Never Heard Of
    And for Business Intelligence Architecture & Development:
    - BI Power Hour
    - Building a Tabular Model Database
    - Enterprise Information Management: Bringing Together SSIS, DQS, and MDS
    - SSIS Design Patterns
    - Storing Columnstore Indexes
    - Hadoop and Its Ecosystem Components in Action
    Besides the listed sessions, First Timers should also take a look at the page PASS set up for them: http://www.sqlpass.org/summit/2012/Connect/FirstTimers.aspx See you at PASS Summit!

    Read the article

  • Moms on Mobile: Are They Way Ahead of You?

    - by Mike Stiles
    You may have no idea how much and how fast moms are embracing mobile. Of all the demographics that can be targeted by marketers, moms have always been at or near the top of the list. And why not? They're running households, they're all over town, they're making buying decisions, and they're influencing family and friends. They, out of necessity, become masters of efficiency and time management. So when a technology tool like mobile comes along that assists with that efficiency and time management, we would obviously expect them to take advantage of it. So if it's obvious, why are so many big, sophisticated brands left choking on the dust of moms who have zoomed past them in the adoption of mobile, and social on mobile? Let's break down some hard truths as presented by a Mojiava report:
    - Moms spend 6.1 hours per day on average on their smartphones – more than magazines, TV or radio.
    - 46% took action after seeing a mobile ad.
    - 51% self-identify as "addicted" to their smartphone.
    - Households with an income of $25K-$50K have about the same mobile penetration among moms as those with incomes of $50K-$75K. So mobile is regarded as a necessity for middle-class moms.
    - Even moms without smartphones spend 2.5 hours on average per day on some connected mobile device.
    - Of moms with such devices, 9.8% have an iPad, 9.5% a Kindle and 5.7% an iPod Touch.
    - Of tablet-owning moms, 97% bought something using their tablet in the last month.
    - 31% spend over 10 hours per week on their tablet, but less than 2 hours per week on their PCs.
    - 62% of connected moms use shopping apps.
    - 46% want to get info on their mobile while in a store.
    - Half of connected moms use social on their mobile. And they're engaged: 81% are brand fans, 86% post updates, and 84% comment.
    If women and moms are among your primary targets and you find yourself with no strong social channels where content is driving engagement and relationship-building, with sites not optimized for mobile, or with no tablet or smartphone apps, you have been solidly left behind by your customers and prospects. And their adoption of mobile and social on mobile is only speeding up exponentially, not slowing down. How much sense does it make when your customer is ready to act on your mobile ad, wants to use your iPad app to buy something from you, wants to be your fan on Facebook, wants to get messages and deals from you while they're in your store… but you're completely absent? I'll help you cheat on the test by giving you the answer: no sense at all. Catch up to momma.

    Read the article
