Search Results

Search found 17621 results on 705 pages for 'just my correct opinion'.


  • SAP or Navision? Career Path

    - by codebased
    This could be tricky to ask; I wasn't sure whether to ask this question here, but I thought I'd give it a try. I've been in the software industry since 2002 and have now reached a senior level where I normally code, lead, and define the architecture; giving technical solutions to management is one of the assets I've earned over my career. Now it is time to define the road map for the future, $$$. I am not in favor of project management roles. I've been thinking of going into ERP, and my current company offers me the option of Navision/Microsoft Dynamics. They are currently on 4.0, but they are planning to move to 2009 and also to build a plug-in of their own. The option is a good one, because Microsoft is trying hard to win the market for its Dynamics products; however, it has had less success in Australia. The other option is SAP, where a person can earn 200K $ a year, whereas I doubt the same kind of financial growth is available to a Microsoft geek. What is your opinion on Navision versus SAP? If I try to move completely to SAP it could be a bit challenging, as the market will consider me a fresher, but the return is quite good. In the Microsoft case, I think technology changes so fast that there is less chance to grow within the same experience; in other words, if a new framework comes out in .NET, the market looks for the person who knows that new framework rather than .NET in general. But with SAP, the base remains the same, and the chances of earning more from the market are better. What would you do if you were me? On Stack Overflow there are 20+ Navision questions versus 200+ for SAP... :-)

    Read the article

  • Project Codenames - Yea or Nay?

    - by rmx
    Where I work, most of our projects have (or at least attempt) descriptive, useful names. However, we have a few with names that make no sense: I found an assembly named WiFi which actually has nothing whatsoever to do with wi-fi; it is a codename. When I asked why, I was told that it's to protect company secrets in case some intern has a few too many at the pub on Friday and starts chatting about the brand new 'WiFi' project he's been working on. It's clear that some people find enjoyment in coming up with silly or amusing codenames for their projects (like in this question). My question is: is it really a good idea to use codenames for your projects, or are you better off spending the time to decide upon a descriptive name? My opinion is that in the long run it's better to give your projects relevant names. My reasoning is that if you can't think of a decent name, perhaps you don't really know the requirements well enough. I think there are better ways to 'protect company secrets', and I find it quite confusing when the name does not correlate at all with the content. It's just common sense, surely?! So do you use codenames, and what are your reasons for or against this seemingly common, yet annoying (to me at least) practice?

    Read the article

  • ASP.NET MVC Cookbook – public review

    - by Andrew Siemer - www.andrewsiemer.com
    I have recently started writing another book.  The topic of this book is ASP.NET MVC.  This book differs from my previous book in that rather than working towards building one project from end to end, this book will demonstrate specific topics from end to end.  It is a recipe book (hence the cookbook name) and will be part of the Packt Publishing cookbook series.  Example recipes in this book might be how to consume JSON, create a master/details page, build jQuery modal popups, write custom ActionResults, etc.  Basically anything recipe-oriented around the topic of ASP.NET MVC might be acceptable.  If you are interested in helping out with the review process you can join the “ASP.NET MVC 2 Cookbook-review” group on Google here: http://groups.google.com/group/aspnet-mvc-2-cookbook-review Currently the suggested TOC for the project is listed.  Also, chapters 1, 2, and most of 8 are posted.  Chapter 5 should be available tonight or tomorrow. In addition to reporting any errors that you might find (much appreciated), I am very interested in hearing about recipes that you want included, expanded, or removed (as being redundant or overly simple).  Any input is appreciated!  Hearing user feedback after the book is complete is a little late, in my opinion (unless it is positive feedback, of course). Thank you!
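
    To make the "custom ActionResults" recipe mentioned above concrete, here is a minimal sketch of what such a recipe typically looks like in ASP.NET MVC 2. It is not taken from the book; the PlainTextResult class and the Ping action are illustrative names only.

    ```csharp
    using System.Web.Mvc;

    // A custom ActionResult that writes plain text with an explicit content type.
    public class PlainTextResult : ActionResult
    {
        private readonly string _content;

        public PlainTextResult(string content)
        {
            _content = content;
        }

        public override void ExecuteResult(ControllerContext context)
        {
            var response = context.HttpContext.Response;
            response.ContentType = "text/plain";
            response.Write(_content);
        }
    }

    public class DemoController : Controller
    {
        // Usage: return the custom result from an action.
        public ActionResult Ping()
        {
            return new PlainTextResult("pong");
        }
    }
    ```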

    Read the article

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. See, I am of the opinion that SharePoint allows for smaller deployments to be made, and with that said, I am talking about SharePoint Foundation 2010 being used for the most part. But truly the point here is not to discuss whether or not a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right.  The fact is they do take place and happen.  And information will reside there. Now, the point of this post is to raise awareness of options available for companies that have implemented it and maybe are a bit “iffy” about how to protect the information being placed in libraries and lists.  In many cases I have found SharePoint comes first and business continuity becomes an afterthought.  The documentation piece from TechNet states: “You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended only for use with single-server deployments.” Even in the event of single-server deployments you have options to safeguard your data. The process requires that, after you have executed the stsadm command above, you then use Windows Server Backup to do a Full Server Backup.  When the restore operation is needed, you will be able to select specifically the section that holds the SharePoint technologies backup. Hope you find this to be a helpful post.  I have found this to be especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm.   Credits:  Sean McDonough for passing along the information available on TechNet.

    Read the article

  • What You Said: Desktop vs. Web-based Email Clients

    - by Jason Fitzpatrick
    We clearly tapped into a subject you all have a strong opinion about with this week’s Ask the Readers post; read on to see how your fellow readers manage their email on, off, and across desktops and devices. Earlier this week we asked you to share your email workflow and you all responded in force. TusconMatt doesn’t miss desktop clients one bit: Switched to Gmail years ago and never looked back. No more losing my emails and contacts if my HDD crashes or when I reinstall. No more frustration with not being able to access an email on the road because it downloaded to my drive and was deleted from the server. No more “mailbox full” messages because I left messages on the server to avoid the above problem! I love having access to all emails from anywhere on any platform and don’t think I could ever go back to a dedicated email client.

    Read the article

  • How not to suffer from ideologists when you're a pragmatic person?

    - by Lukas Eder
    My story: I'm a pragmatic person. Sometimes the most simple solution to a problem, the one that gets the job done, is the one that fits best for me, as long as it's not an utter blasphemy and reproach to every design principle. Check out my answer to this question on Stack Overflow. Simple. Works. Was accepted. Could be improved. Is clearly not perfect. And along comes this guy. He downvotes me, comments on the question about how his answer is better, more accurate, etc., and calls me "plain wrong". Reminds me of this comic strip. :-) While on Stack Overflow I can laugh at these things because those people are far away, in the real world I suffer from ideologies every now and then. Heck, I'm not creating a miracle piece of software; I need to keep that huge legacy thing running, and it's an adventure to me every day. I don't have the time or passion to beautify my code (or other people's code) to that extent. My question(s): How do you deal with ideologies / ideologists when you're a pragmatic person? How do you deal with pragmatism / pragmatists when you're an ideological person? I'm interested in both points of view. Tell me your experience. But please, be fair, be somewhat objective, and understand that you may NOT be entirely correct and your opinion is NOT the only true one... :-)

    Read the article

  • How-To Geek Gets the Microsoft MVP Award, Thanks to You

    - by The Geek
    The How-To Geek has won a Microsoft MVP award for the second year in a row, and it’s all thanks to you, the great readers who keep the site going. Join us for some mutual back-patting and some terrible photography of all the award stuff. Of course, if you’re familiar with the MVP award you’ll probably know that it’s actually for a single person, but in my opinion the award belongs to the entire How-To Geek community, without which this site would be nothing.

    Read the article

  • BI&EPM in Focus - November 2011

    - by Mike.Hallett(at)Oracle-BI&EPM
    Enterprise Performance Management
    - A Thing of Beauty, by Alison Weiss: Avon’s enterprise performance management system delivers accurate information and critical insight to managers at every level of the organization.
    - Oracle Crystal Ball Helps Managers Guard Against Volatility, by Alison Weiss
    - The Insight Game, by Aaron Lazenby: Enterprise performance management can deliver insights crucial to navigating the volatility of the global economy—and that’s no game of checkers.
    - KPI vs. the Bottom Line, by Edward Roske: For managers, is tracking the key metrics for their departments enough to ensure success for the entire business? The CEO of Oracle partner interRel shares his opinion.
    - Deep Integration, by Aaron Lazenby: The synthesis of Oracle Hyperion applications and core Oracle technologies can deliver deep benefits to analytics-driven businesses.
    - Oracle Crystal Ball: Oracle's #1 Solution for Risk Management
    - Follow EPM Documentation at Hyperion EPM Info for news about EPM documentation releases and updates (twitter | facebook | LinkedIn)
    - Whitepaper: Integrating XBRL Into Your Financial Reporting Process
    - Oracle Hyperion Disclosure Management Customer Story: StealthGas Inc. Saves 12 Accountant Days Yearly, Validates XBRL-Compliant Financial Filing Data in One Day
    - Sherwin-Williams Argentina I.C.S.A. Accelerates Budget Preparation Process by 75%
    - BBDO Germany GmbH Consolidates Financial and Planning Processes for More Than 50 Agencies
    Business Intelligence
    - Webcast Replay: Oracle Data Mining & BI EE - Predictive Analytics (Part 2)
    - Innovation Award Winners - BI/EPM: HealthSouth, State of MD, Clorox Company, Telenor and Dunkin Brands
    - Leeds Teaching Hospitals National Health Service Trust Builds Budget Reports Six Times Faster, Achieves 100% ROI in 12 Months with Oracle Business Intelligence
    - Home Credit Group Consolidates Reporting and Saves Time across All Business Units w/ Oracle Essbase & OBIEE
    - Autoglass Improves Business Visibility and Services to Customers and Partners with Oracle Business Intelligence
    Events
    - Download Oracle OpenWorld Oct 2011 Presentations (select Middleware - BI or Applications - Hyperion)
    - Oracle Business Analytics Summits: learn about the latest trends, best practices, and innovations in business intelligence, analytics applications, and data warehousing
    - Webcast Nov 15, 9am PST: Running the Last Mile, Beyond Financial Consolidations - Streamlining the Close and Addressing the SEC's XBRL Mandate
    - Webcast Dec 13, 1pm PST: Defining Your Mobile BI Strategy (BICG)
    - New Training Available: Oracle BI Publisher 11g R1: Fundamentals
    - Webcast Replay: How to Expand the Usage of Analytics in your Organization while Driving Down IT Spend
    - Webcast Replay: Real-Time Decisions (RTD) Updated Use Cases for Ecommerce Personalization in Financial Services & Retail

    Read the article

  • SQL – Download FREE Book – Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence

    - by Pinal Dave
    Recently I was preparing for Big Data and I ended up on a very interesting read for everybody. It was created by Microsoft and it is indeed a fantastic read, in my opinion. It took me some time to read this entire book, but it was worth reading as it tries to answer two very interesting questions related to NoSQL. Here is the abstract from the book: Organizations seeking to use a NoSQL database are therefore faced with a twofold challenge: • Which NoSQL database(s) best meet(s) the needs of the organization? • How does an organization integrate a NoSQL database into its solutions? As I kept reading the book, I found it very interesting and informative. I suggest that if you have time this weekend, you download the book and read it. This guide focuses on the most common types of NoSQL database currently available, describes the situations for which they are most suited, and shows examples of how you might incorporate them into a business application. The guide summarizes the experiences of a fictitious organization named Adventure Works, which implemented a solution that comprised an assortment of different databases. Download Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence While we are talking about Big Data and NoSQL, do not forget to check out tomorrow’s blog post, as I am going to talk about the same subject and it will be very interesting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, NoSQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Is defining every method/state per object in a series of UML diagrams representative of MDA in general?

    - by Max
    I am currently working on a project where we use a framework that combines code generation and ORM together with UML to develop software. Methods are added to UML classes and are generated into partial classes where "stuff happens". For example, a UML class "Content" could have the method DeleteFromFileSystem(void), which could be implemented like this: public partial class Content { public void DeleteFromFileSystem() { File.Delete(...); } } All methods are designed like this. Everything happens in these gargantuan logic-bomb domain classes. Is this how MDA or DDD or similar is usually done? For now my impression of MDA/DDD (which this has been called by higher-ups) is that it severely stunts my productivity (everything must be done The Way) and that it hinders maintenance work, since all logic is roped, entrenched, and interspersed into the aforementioned gargantuan bombs. Please refrain from interpreting this as a rant; I am merely curious whether this is typical MDA or some sort of extreme MDA. UPDATE: Concerning the example above, in my opinion Content shouldn't handle deleting itself as such. What if we change from local storage to Amazon S3? In that case we would have to reimplement this functionality, scattered over multiple places, instead of having one single interface for which we can provide a second implementation.
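
    Taking the update above at face value, here is a minimal sketch of the kind of refactoring it hints at: storage concerns move behind an interface, so switching from the local file system to Amazon S3 means providing one new implementation rather than hunting through the generated classes. The IFileStorage and LocalFileStorage names and the StoragePath property are illustrative assumptions, not part of the framework being described.

    ```csharp
    using System.IO;

    // The storage abstraction the domain class depends on.
    public interface IFileStorage
    {
        void Delete(string path);
    }

    // Local file system implementation; an S3-backed class would be a second implementation.
    public class LocalFileStorage : IFileStorage
    {
        public void Delete(string path)
        {
            File.Delete(path);
        }
    }

    public partial class Content
    {
        private readonly IFileStorage _storage;

        public Content(IFileStorage storage)
        {
            _storage = storage;
        }

        // Illustrative property standing in for however the entity knows its path.
        public string StoragePath { get; set; }

        // The generated method now delegates instead of calling File.Delete directly.
        public void DeleteFromFileSystem()
        {
            _storage.Delete(StoragePath);
        }
    }
    ```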

    Read the article

  • Animating DOM elements vs refreshing a single Canvas

    - by mgibsonbr
    A few years ago, when the HTML Canvas element was still kinda fresh, I wrote a small game in a rather "unusual" way: each game element had its own canvas, and frequently animated elements even had multiple canvases, one for each animation sprite. This way, translation was done by manipulating the DOM position of the canvases, while sprite animation consisted of toggling the visibility of the already drawn canvases. (z-indexes, of course, were the tricky part.) It worked like a charm: even in IE6 with excanvas it showed decent performance, and everything was rather consistent between browsers, including some smartphones. Now I'm thinking of writing a larger game engine in the same fashion, so I'm wondering whether it would be a good idea to do so in the current context (with all the advances in browsers and so on). I know I'm trading memory for time, so this needs to be customizable (even at runtime) for each machine the game will be running on. But I believe using separate canvases would also help to avoid the game "freezing" on CPU spikes, since the translation would still happen even if the redraws lag for a while. Besides, the browsers' rendering engines are already optimized in many ways, so I'm guessing this scheme would also reduce the load on the CPU (in contrast to doing everything in JavaScript, especially in the less optimized engines). It looks good in my head, but I'd like to hear the opinion of more experienced people before proceeding further. Is there any known drawback of doing this? I'm particularly inexperienced in dealing with the GPU, so I wonder whether this "trick" would nullify any benefit of using a single, big canvas. Or maybe on modern devices it's overkill (though I'm skeptical about the claims that canvas+js, especially WebGL, will ever be a good alternative to native code). Any thoughts?

    Read the article

  • deep expertise in one technology or not so deep understanding of many technologies

    - by district
    Hello everyone. I have started to feel a little bit confused recently about my career path as a software developer: about what I do, what I know, and whether I need it. I am 21 now and I have 3 years of experience. I've been dealing with Java/C++ projects, Servlet/JSP/JSF, desktop Qt, and also some mobile development (Symbian, Android). I work for quite a small company, around 20 developers with different projects. I'm also a student. The problem is that I'm not sure I'm taking the right road here. I start working with a new technology every few months. I don't have a deep understanding of any of these and I'm not sure this is what I need. I will probably not become an expert in any of them. The other path might be to start working for a big company which uses one set of technologies and become an expert. What's your opinion on this topic? What is more valuable?

    Read the article

  • Akka react vs receive

    - by Will I Am
    I am reading my way through the Akka tutorials, but I'd like to get my feet wet with a real-life scenario. I'd like to write both a connectionless UDP server (an echo/ping-pong service) and a TCP server (also an echo service, but one that keeps the connection open after it replies). My first question is: is this a good experimental use case for Akka, or am I better served by more common paradigms like IOCP? Would you do something like this with Akka in production? Although I understand conceptually the difference between react() and receive(), I struggle to choose one or the other for the two models. In the UDP model, there is no concept on the server of who the sender is once the pong is sent, so should I use receive()? In the TCP model, the connection is maintained on the server after the pong, so should I use react()? If someone could give me some guidance, and maybe an opinion on how you'd design these two use cases, it would go a long way. I have found a number of examples, but they didn't explain why they chose the paradigms they did.

    Read the article

  • Advantages and Disadvantages of the Waterfall Methodology

    In my personal opinion, the waterfall method is one of the worst methodologies to use when developing larger systems, because it leaves no room for mistakes. As the name implies, the waterfall methodology does not allow projects to go back upstream to recover from design errors or missing and/or limited requirements. In addition, hidden bugs are usually not found until the testing phase, which can prove very costly and time consuming for the developer and the client. According to NCycles.com, the waterfall methodology structures a project into separate stages with defined deliverables from each phase:
    - Define
    - Design
    - Code
    - Test
    - Implement
    - Document and Maintain
    The advantages NCycles.com finds in this methodology are:
    - Ease in analyzing potential changes
    - Ability to coordinate larger teams, even if geographically distributed
    - Can enable a precise dollar budget
    - Less total time required from Subject Matter Experts
    The disadvantages NCycles.com finds in this methodology are:
    - Lack of flexibility
    - Hard to predict all needs in advance
    - Intangible knowledge lost between hand-offs
    - Lack of team cohesion
    - Design flaws not discovered until the Testing phase
    References: NCycles.com (2002). Retrieved from http://www.ncycles.com/e_whi_Methodologies.htm on April 17, 2009

    Read the article

  • How Service Component Architecture (SCA) Can Be Incorporated Into Existing Enterprise Systems

    After viewing Rob High’s presentation “The SOA Component Model” hosted on InfoQ.com, I can foresee how Service Component Architecture (SCA) could be incorporated into an existing enterprise. According to IBM’s DeveloperWorks website, SCA is a set of specifications which outline a model for constructing applications/systems using a Service-Oriented Architecture (SOA). In addition, SCA builds on open standards such as Web services. In the future, I can easily see how some large IT shops could potentially divide development teams or work groups into Component/Data Object groups and Standard Development groups. The Component/Data Object group would work only on creating and maintaining components that are reused throughout the entire enterprise. The Standard Development group would work on new and existing projects that incorporate the use of various components to accomplish various business tasks. In my opinion, the incorporation of SCA into any IT department will initially slow down the number of new features developed, due to the time needed to create the new, loosely coupled components. However, once a company becomes more mature in its SCA process, the number of program features developed will greatly increase. I feel this is because the loosely coupled components needed to add new features will already be built and ready to incorporate into any new feature request. References: BEA Systems, Cape Clear Software, IBM, Interface21, IONA Technologies PLC, Oracle, Primeton Technologies Ltd, Progress Software, Red Hat Inc., Rogue Wave Software, SAP AG, Siebel Systems, Software AG, Sun Microsystems, Sybase, TIBCO Software Inc. (2006). Service Component Architecture. Retrieved 11 27, 2011, from DeveloperWorks: http://www.ibm.com/developerworks/library/specification/ws-sca/ High, R. (2007). The SOA Component Model. Retrieved 11 26, 2011, from InfoQ: http://www.infoq.com/presentations/rob-high-sca-sdo-soa-programming-model

    Read the article

  • Release/Change management - best approach

    - by Bob Rivers
    I asked this question a year ago on Stack Overflow and never got a good answer. Since Programmers seems to be a better place to ask it, I'll give it a try... What is the better way to work with release management? More specifically, what would be the best way to release packages? Assume that you have a relatively stable system, a good quality assurance (QA) process, etc. How do you prefer to release new versions? Let's assume that we are talking about a mid-to-large "centralized" web system (no clients), developed in-house. This system can be considered "vital" to the corporation's operations. I tend to prefer releasing packages at regular intervals, no greater than 1 to 3 months apart. During this period, I will include fixes and improvements in the package and deploy to the production environment only once. But I've seen some people who prefer to put small changes into production with greater frequency. Their claim is that by doing so, it is easier to identify bugs that made it through the QA process: in a package with 10 changes versus one with only 1, it is much easier to know what caused the problem in the package with just one change... What is your opinion?

    Read the article

  • Actually utilizing relational databases for entity systems

    - by Marc Müller
    Recently I was researching several entity systems and obviously I came across T=Machine's fantastic articles on the subject. In Part 5 of the series the author uses a relational schema to explain how an entity system is built and works. Since reading this, I have been wondering whether actually using a compact SQL library would be fast enough for real-time usage in video games. Performance seems to be the main issue with a full-blown SQL database for management of all entities and components. However, as mentioned in T=Machine's post, basically all access to data inside the SQL DB is done sequentially by each system over each component. Additionally, using a library like SQLite, one could easily improve performance by storing the entity data exclusively in RAM to increase access speeds. Disregarding possible performance issues, using a SQL database would, in my opinion, allow for a very intuitive implementation of entity systems and bring along certain other benefits, like easy de/serialization of game states and consistency checks such as the uniqueness of entity IDs. Edit for clarification: The main question was whether using a SQL database for the actual entity management (not just storing the game state on disk) in a real-time game would still yield a framerate appropriate for a game, or whether someone is aware of projects that demonstrate SQL in a video game.
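
    As a concrete illustration of the idea above (components as tables, systems as queries, data held entirely in RAM), here is a minimal sketch using SQLite's in-memory mode via the Microsoft.Data.Sqlite package. The schema, table names, and the single "movement system" pass are illustrative assumptions, not taken from the T=Machine articles.

    ```csharp
    using System;
    using Microsoft.Data.Sqlite;

    class EntitySystemSketch
    {
        static void Main()
        {
            // ":memory:" keeps the whole entity store in RAM, as discussed above.
            using (var connection = new SqliteConnection("Data Source=:memory:"))
            {
                connection.Open();

                // One table per component type; the entity itself is just an id.
                var create = connection.CreateCommand();
                create.CommandText =
                    "CREATE TABLE entity   (id INTEGER PRIMARY KEY);" +
                    "CREATE TABLE position (entity_id INTEGER PRIMARY KEY, x REAL, y REAL);" +
                    "CREATE TABLE velocity (entity_id INTEGER PRIMARY KEY, dx REAL, dy REAL);";
                create.ExecuteNonQuery();

                // Spawn one entity with position and velocity components.
                var spawn = connection.CreateCommand();
                spawn.CommandText =
                    "INSERT INTO entity   (id) VALUES (1);" +
                    "INSERT INTO position (entity_id, x, y)   VALUES (1, 0.0, 0.0);" +
                    "INSERT INTO velocity (entity_id, dx, dy) VALUES (1, 2.0, 1.0);";
                spawn.ExecuteNonQuery();

                // A "movement system" tick is a single set-based update over the join.
                var move = connection.CreateCommand();
                move.CommandText =
                    "UPDATE position SET " +
                    "  x = x + (SELECT dx FROM velocity WHERE velocity.entity_id = position.entity_id)," +
                    "  y = y + (SELECT dy FROM velocity WHERE velocity.entity_id = position.entity_id);";
                move.ExecuteNonQuery();

                // Read the result back, as a rendering system might.
                var read = connection.CreateCommand();
                read.CommandText = "SELECT entity_id, x, y FROM position";
                using (var reader = read.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine($"entity {reader.GetInt64(0)}: ({reader.GetDouble(1)}, {reader.GetDouble(2)})");
                    }
                }
            }
        }
    }
    ```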

    Read the article

  • A new mission statement for my school's algorithms class

    - by Eric Fode
    The teacher at Eastern Washington University who is now teaching the algorithms course is new to Eastern, and as a result the course has changed drastically, mostly in the right direction. That being said, I feel that the class could use a more specific and industry-oriented direction (since that is where most students will go, though suggestions for an academia-oriented class are also welcome). Having only worked in industry for 2 years, I would like the community's opinion (a wider, much more collectively experienced, and in the end plausibly more credible one) on the quality of the following as a statement of purpose for an algorithms class; and, if I am completely off target, your suggestion for the purpose of a required junior-level algorithms class that is standalone (so no other classes focusing specifically on algorithms are required). The statement is as follows: The purpose of the algorithms class is to do three things: Primarily, to teach how to learn, do basic analysis of, and implement a given algorithm found outside of the class. Secondly, to teach the student how to model a problem in their mind so that they can find an existing algorithm or have a direction to start the development of a new one. Third, to overview a variety of algorithms that exist and to deeply understand and analyze one algorithm in each of the basic algorithmic design strategies: Divide and Conquer, Reduce and Conquer, Transform and Conquer, Greedy, Brute Force, Iterative Improvement and Dynamic Programming. The question, in short, is: do you agree with this statement of the purpose of an algorithms course, so that it would be useful in the real world? If not, what would you suggest?

    Read the article

  • How to pick a great working team?

    - by Javierfdr
    I've just finished my master's and I'm starting to dig into the working world, i.e. learning how programming teams and technology companies work in the real world. I'm starting to shape the idea of my own service or product based on free software, and I will require a well-coupled, enthusiastic and fluid team to build the idea. My problem is that I'm not sure which would be the best skills to ask for in a programming team of 4-5 members. I have many friends and acquaintances with whom I've worked during my studies. Most of the ones I have in mind are very capable and smart people, with a good logic and programming base, although some of them have characteristics that I believe could influence the group negatively: lack of communication, fear of debating ideas, reluctance to give in when debating, lack of structured programming practices (testing, good commenting, up-front design and analysis). Some of them have these negative characteristics, but most of them have a lot of enthusiasm, good working skills (from an individual point of view), and the ability to see the whole picture. The question is: how do you pick the best team for a large-scale project with a lot of programming? Which of these negative traits do you think are just too damaging? Which can be softened with good leadership? Which good skills should be expected? And any other opinion about the social and programming skills of a programming team is welcome.

    Read the article

  • Microsoft Dev Days – Johannesburg 2010

    - by MarkPearl
    So I am halfway through Dev Days in Johannesburg. It has been quite an interesting day… Maybe it is me, but this year it hasn’t been as OMG as previous conferences. A few things stood out, though… 1) This is the first time I have had to queue in a line to use the gents’ toilets – yes, a true sign that we are at a typically male-dominated industry event in this country. The men’s toilets were jam-packed; the ladies, if there were any there, didn’t have a problem. 2) Bart De Smet’s presentations still rock – I am a fan of Bart’s, and once again his presentation was great. Something I am going to look into in more depth is Code Contracts, which I think is a new feature in .NET. 3) I have got to get into Silverlight more… I have known this for a long time and have dabbled in it for a while, but Silverlight, in my opinion, will become the main platform for “hosting” applications. So… 3 things so far; hopefully I get some OMG’s from the rest of the day…
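
    For readers who, like the author, have not looked at Code Contracts yet, here is a minimal sketch of the feature (System.Diagnostics.Contracts, introduced with .NET 4). The Account class and its rules are illustrative only, and static verification requires the separate Code Contracts tools rather than just the compiler.

    ```csharp
    using System.Diagnostics.Contracts;

    public class Account
    {
        private decimal _balance;

        public decimal Balance
        {
            get { return _balance; }
        }

        public void Withdraw(decimal amount)
        {
            // Preconditions: callers must pass a positive amount they can afford.
            Contract.Requires(amount > 0);
            Contract.Requires(amount <= Balance);

            // Postcondition: the balance drops by exactly the withdrawn amount.
            Contract.Ensures(Balance == Contract.OldValue(Balance) - amount);

            _balance -= amount;
        }

        [ContractInvariantMethod]
        private void ObjectInvariant()
        {
            // Class invariant checked after every public method.
            Contract.Invariant(_balance >= 0);
        }
    }
    ```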

    Read the article

  • Visual Studio 2010 Pro Power Tools Screencast

    - by Steve Michelotti
    Microsoft just released the Visual Studio 2010 Pro Power Tools extension and it is awesome. A summary of all the features can be found here and it is available in the Visual Studio Gallery here. There are a bunch of great features but, in my opinion, the best one is the replacement for the Add Reference dialog. It gives you sub-string search capabilities as well as the ability to add multiple references without having to continually re-open the dialog. For this feature alone, you should install the Pro Power Tools right now. There are a few blog posts that do a good job of describing all the features, but what I wanted to do here was post a quick screencast (7 minutes) that shows the features I think are really cool. I show most (but not all) of the features, focusing on the ones I think are the best. The features I cover are: Installation with the Extension Manager; the Add Reference dialog replacement; the Tab Well, including pinned tabs, pinned tabs in a second row, a fixed close button, colorized tabs, and a dirty indicator; Highlight Current Line; Triple-Click for full-line selection; Ctrl + Click for Go To Definition; and Colorized Parameter Help. Enjoy! (Right-click and Zoom to view in full screen)

    Read the article

  • Mobile App Notifications in the Enterprise Space: UX Considerations

    - by ultan o'broin
    Here is a really super website of UX patterns for Android: Android Patterns. I was particularly interested in the event-driven notification patterns (aka status bar notifications, to developers). Android - unlike iOS (i.e., the iPhone) - offers a superior centralized notifications system for users.   (Figure copyright Android Patterns)   Research in the enterprise applications space shows that on-the-go users prefer this approach, because: Users can manage their notification alerts centrally, across all media, apps, and device activity, and decide the order in which to deal with them, and when. Notifications, unlike messages in a dialog or information messages in the UI, do not block a task flow (and we need to keep task completion to under three minutes). See the Anti-Patterns slideshare presentation on this blocking point too. These notifications must never interrupt a task flow by launching an activity from the background; instead, the user can launch an activity from the notification. What users do need is the ability to filter this centralized approach and to personalize the experience: which notifications are added, what the reminder is, the ability to turn them off, and so on. A related point concerning notifications: when they are used to provide users with a record of actions, you can lighten up on the lengthy confirmation messages that pop up (toasts, in the Android world) when transactions or actions are sent for processing or into a workflow. Pretty much all the confirmation needs to say is that the action was successful, along with key data such as dollar amount, customer name, or whatever. I am a user of Android (Nexus S), BlackBerry (Curve), and iOS devices (iPhone 3GS and 4). In my opinion, the best notifications user experience for the enterprise user is offered by Android. BlackBerry is good, but not as polished and far clunkier than Android’s. What you get on the iPhone, out of the box, is useless in the enterprise. Technorati Tags: Android, iPhone, BlackBerry, messages, usability, user assistance, user experience, Oracle, patterns, notifications, alerts

    Read the article

  • A new clients come into my web agency. How to configure email and social accounts to work better? [on hold]

    - by Marco Panichi
    I have been creating websites for many years but still have not found the right way to organize all the email and social accounts of every client. I mean, every web agency follows dozens of customers. Each client needs at least Google Analytics, AdWords, a Facebook page, a Twitter profile, a YouTube channel, probably a listing on Google Places, and maybe a MailChimp (or similar) account. The web agency, in my opinion, must own these accounts, use them to deliver results to the customer and, of course, make them available to the customer for two reasons: the customer must be able to see how things are going, and the client must have the ability to change web agency without suffering. The web agency, however, has many problems in holding all of these accounts. For example, I like the idea of having a Gmail account for each client and using all of Google's products from that account. But it is not possible to create many Gmail accounts from the same IP address and with the same phone number. The web agency could invite the customer to create his own accounts, but: this is not necessarily a value for the customer (indeed...); the web agency would still manage them from the same IP address, running into problems; and if phone verification occurs, the web agency has to disturb the customer for verification. Do you have the same problem? How do you solve it?

    Read the article

  • International search: how to show different domains in Google+ Local?

    - by Baumr
    Background: A site has multiple ccTLDs: example.com for users in the US, example.co.uk for UK users, example.de for Germany, example.fr for France, etc. Searching for certain city keywords will return a list of Google+ Local (formerly Places) results: each links to the corresponding company website. Problem: When searching on www.google.de, the domain of the site intended for US users (example.com) appears instead of the corresponding ccTLD (example.de) aimed at German users. This applies across languages. In my opinion, and for the purposes of this business, it's not a good user experience: searchers would most likely prefer to book on a site localized for them (e.g. in their language and currency). Question: Is it possible to return different ccTLDs in these local search listings for users across the globe? Currently, Google+ Local seems to support adding only a single "Website" field. Solutions I have considered: Creating duplicate Google Places listings for each URL would be spammy (and not viable when there are hundreds of locations, each needing a listing in 8 languages). I don't see the hreflang annotation helping either, and GWMT geotargeting is already set.

    Read the article

  • Multi-Resolution Mobile Development

    - by user2186302
    I'm about to start development on my first game for mobile phones (I already have a completed Flash prototype, so it's just a matter of "porting" it to mobile and fixing up the code) and I plan on getting the game working on iPhones and most Android devices. I am using Haxe along with OpenFL and HaxeFlixel for development. My question is: what resolution should I design the game at initially, and/or what is the best way to develop a game for multiple resolutions? I have found multiple different methods; the best, in my opinion, is strategy 3 on this page: http://wiki.starling-framework.org/manual/multi-resolution_development. However, I have some questions about this. First, what would be the best base resolution to use? The guide suggests 240x320, which seems alright to me, although if I choose to use pixel graphics, as I most probably will given I'm using HaxeFlixel, I'm not sure whether they'll look too blocky on larger screens, or whether that is even a problem, since it might still look alright. (Honestly, I'm not sure about that, and I'd appreciate examples of games that use this method and look nice.) Finally, please feel free to share whatever methods you use and think are best. For example, HaxeFlixel has a scaling feature that scales the game to fit the exact screen size, but I'm afraid that would lead to blurry and improperly scaled graphics, since it would scale by non-integer factors. I'm not sure how noticeable a problem that would be, although from experience I'm pretty sure it won't look nice, so currently I don't think I'm going to go for that option. I would really appreciate any help on this subject. Thank you in advance.
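
    As a small illustration of the integer-scaling idea behind the worry about non-integer scaling above, the arithmetic below picks the largest whole-number multiple of a 240x320 base resolution that fits a given screen and letterboxes the rest. It is written in C# rather than Haxe purely as a sketch; the numbers and names are illustrative and not tied to HaxeFlixel's own scaling modes.

    ```csharp
    using System;

    class IntegerScaling
    {
        static void Main()
        {
            const int baseWidth = 240, baseHeight = 320;   // design (base) resolution
            int screenWidth = 1080, screenHeight = 1920;   // example device resolution

            // Largest integer zoom that still fits both dimensions (never below 1).
            int scale = Math.Max(1, Math.Min(screenWidth / baseWidth,
                                             screenHeight / baseHeight));

            int viewWidth = baseWidth * scale;
            int viewHeight = baseHeight * scale;

            // Center the scaled view; the leftover borders are letterboxed.
            int offsetX = (screenWidth - viewWidth) / 2;
            int offsetY = (screenHeight - viewHeight) / 2;

            Console.WriteLine($"scale x{scale}, view {viewWidth}x{viewHeight}, offset ({offsetX},{offsetY})");
        }
    }
    ```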

    Read the article
