Search Results

Search found 3797 results on 152 pages for 'will love'.


  • 6 Ways to Modernize Your Customer Experience

    - by Mike Stiles
    If customers have changed, if the way they research and shop has changed, if their expectations have changed, if their ability to act on dissatisfaction has changed, but your customer experience has NOT changed, what was once “good enough” may now be crippling. Well, the customer has changed, and why wouldn’t they? You’ve probably changed too in your role as consumer. There’s more info available, it’s easier to get, there’s more choice, you’re more mobile, you’re more connected, it’s easier to buy, and yes, it’s easier to switch brands if experiences don’t meet your now higher expectations. Thanks to technological advances, we as marketers can increasingly work borderline miracles. But if we’re still not adamantly adopting customer centricity, and if we aren’t making the customer experience paramount amongst business goals, the tech is wasted. A far more modern customer experience is called for. Here are 6 ways to get there:
    1. Modern Marketing: Marketing data is aggregated and targeted to the right customers, who are getting personal, relevant communications. In return, you’re getting insight that finally properly attributes revenue to your marketing efforts.
    2. Modern Selling: Demand is being driven across all channels with modern selling tools. Productivity is up thanks to coordinated communication and selling, and performance is ever optimized using powerful analytics.
    3. Modern CPQ: You’re cross-selling and upselling more effectively since reps and channel partners have been empowered with the ability to quickly, automatically generate 100% accurate, customer-friendly quotes complete with price controls and automated approvals.
    4. Modern Commerce: You’re leveraging data and delivering personalized, targeted digital experiences to everyone. You’re attracting more visitors, and you’re able to scale and keep up with the market and control the experience.
    5. Modern Service: You’re better serving your customers by making it easier for them to engage with your brand, plus you’re lowering your costs by increasing agent and tech support efficiencies.
    6. Modern Social: You’re getting faster, deeper, more accurate insights from social and turning content around faster, which then goes out to the right people at the right time in the right place. You’ve also gotten proactive in your service, and customers love that.
    For far too many brands, the buying journey of Need, Research, Select, Buy, Use, Recommend across the multiple connect points of Social, Mobile, Store, Call Center, Site, Ecommerce is a disconnected mess. Oracle’s approach to CX is to connect every interaction your customer has with your brand, avoiding the revenue losses lousy customer experiences bring. How important is the experience to customers? 94% are willing to pay more of their hard-earned money to have better ones, while a meager 1% say they get the good, consistent experiences they expect. Brands, your words aren’t as loud anymore, so your actions as they relate to customer experience are going to have to do the talking. @mikestiles @oraclesocial Photo: Julien Tromeur, freeimages.com

    Read the article

  • Ubuntu Server and setting up two nic cards

    - by kmalik
    I have Ubuntu Server on a computer with a wireless and a hardwired NIC. The wireless card needs to get the internet and pass it to the Ubuntu server, and the server should pass it along through the wired NIC to more computers. I am having issues getting the basic setup working, as I believe the route table is using the wrong NIC. The router is 192.168.1.0 and the server is set to 192.168.1.11 on the wireless card through DHCP. ETH0 (the wired NIC) is set up as the 10.10.10.0 network, and the server is 10.10.10.1. I am not a Linux or networking guru, but basically I am trying to have the internet come from a guest network (192.168.1.0, I believe) to give internet access to the Ubuntu server, and then the Ubuntu server will also A) have the wired NIC serve DHCP addresses to other computers, via a switch or a router that acts as a switch, on 10.10.10.0 addresses, and B) I would love it if it passed along internet access as well, if possible. But really, at this point my hope is to at least get the internet working on the server and DHCP handed out correctly. At the moment the specific issue I am having is getting Ubuntu Server to connect to the internet with both NICs up and running correctly. Any help would be appreciated! The route table is as follows:

        Destination     Gateway       Genmask          Flags   Metric   Iface
        0.0.0.0         10.10.10.1    0.0.0.0          UG      100      eth0
        10.10.10.0      0.0.0.0       255.255.255.0    U       0        eth0
        169.254.0.0     0.0.0.0       255.255.0.0      U       0        eth0
        192.168.1.0     0.0.0.0       255.255.255.0    U       0        eth1

    My /etc/network/interfaces is set up as follows:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 10.10.10.1
            netmask 255.255.255.0
            network 10.10.10.0
            broadcast 10.10.10.255
            gateway 10.10.10.1
            domain-name-servers 192.168.1.0

        auto eth1
        iface eth1 inet dhcp
            netmask 255.255.255.0
            gateway 192.168.1.0
            wpa-driver wext
            wpa-ssid "ssid_name"
            wpa-ap-scan 1
            wpa-proto wpa
            wpa-pairwise ccmp
            wpa-group ccmp
            wpa-key-mgmt wpa-psk
            wpa-psk "HASH"

    My dhcpd.conf (as the DHCP server also hands out the domain name servers) is as follows:

        ddns-update-style none;
        default-lease-time 600;
        max-lease-time 7200;
        authoritative;
        option domain-name "Kamron's Network";
        option subnet-mask 255.255.255.0;
        option broadcast-address 10.10.10.255;
        option routers 192.168.1.0;
        option domain-name-servers 192.168.1.0, 98.223.128.213;
        subnet 10.10.10.0 netmask 255.255.255.0 {
            range 10.10.10.10 10.10.10.99;
        }
        log-facility local7;

    Read the article

  • HDMI sound gone, can't figure out how to turn it back on

    - by Oli
    I have had an Acer Revo box as a media centre for a while. I recently installed Ubuntu Server (10.10) on it and polished it up with nodm (one of the simplest ways to launch an X session) and installed Boxee. It's been working fine for over a month. It's just running ALSA. I've had problems with PulseAudio/Boxee/HDMI before so I wanted to keep it simple. And that worked. It pushed both PCM and digital (AAC and various Dolby codecs) over HDMI perfectly. But I restarted it the other day after mucking around with some nfs configuration and now there isn't any sound. The hardware is an ION chipset: Nvidia 9400M graphics with Nvidia MCP79/7A audio. One thing I have noticed is there doesn't appear to be any sign of an IEC958 device. A traditional fix in the past for fresh installs has been to load alsamixer, find the IEC device and toggle its mute, but I can't. I'm certain this used to represent the HDMI output. It just doesn't seem to exist any more unless I run sudo alsa-utils restart while Boxee is running, when I see it in an error message:

        * Shutting down ALSA...                                        [ OK ]
        * Setting up ALSA...
        * warning: 'alsactl restore' failed with error message
          'alsactl: set_control:1388: Cannot write control
           '2:0:0:IEC958 Playback Default:0' : Operation not permitted'...
          ...done.

    When nodm (and thus Boxee) aren't running, I don't see this error but alsamixer still doesn't show the IEC channel. aplay -l gives:

        card 0: NVidia [HDA NVidia], device 0: ALC662 rev1 Analog [ALC662 rev1 Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
          Subdevices: 0/1
          Subdevice #0: subdevice #0

    Its section in lshw reads:

        *-multimedia
             description: Audio device
             product: MCP79 High Definition Audio
             vendor: nVidia Corporation
             physical id: 8
             bus info: pci@0000:00:08.0
             version: b1
             width: 32 bits
             clock: 66MHz
             capabilities: pm bus_master cap_list
             configuration: driver=HDA Intel latency=0 maxlatency=5 mingnt=2
             resources: irq:22 memory:fae78000-fae7bfff

    I was running on the stock PAE kernel but now it's running on 2.6.37.1. I upgraded to see if that fixed things; it didn't. I'm considering a reinstall but I hate doing that because a) there's a bit of custom configuration in getting X and Boxee to start on boot and b) I don't know what the problem is. If I reinstall this time, I'll end up doing that every time the sound breaks. I love Ubuntu but I don't want to install it once a month. Is there any way to forcibly reset all ALSA settings and restart from scratch (without doing a reinstall)? Any other tips? If you need more information, just ask.

    Read the article

  • Office 2010 Professional Plus (Top 10 reasons to upgrade)

    - by mbcrump
    Being a huge nerd, I decided that I would go ahead and upgrade to the latest and greatest Office: Office 2010 Professional Plus. The biggest concern that I had was losing all my mail settings from Outlook 2007. Thankfully, it upgraded gracefully and worked like a charm. So let’s start this top 10 list.
    1) You can upgrade without fear of losing all your stuff! As you can tell by the screenshot below, you can select what you want to do. I selected to remove all previous versions.
    2) Outlook conversations: Just like Gmail, you can now group emails by conversation. This is simply awesome and a must have.
    3) The ability to ignore conversations. If you are on an email thread that has nothing to do with you, simply “ignore” the conversation and all its emails go into the deleted folder.
    4) Quick Steps: do you send an email to the same team member or group constantly? With Quick Steps, it’s just one click away.
    5) Spell check in the subject line!
    6) Easier screenshots, built in: just click the button. No more Alt+PrintScreen for those that are not aware of the awesome SnagIt 10 that's out.
    7) Open in Protected View. When you open a document from an email attachment, it lets you know the file may be unsafe. You can click a button to enable editing. This is great for preventing macros.
    8) Excel has always had a variety of charts and graphs available to visually depict data and trends. With Excel 2010, though, Microsoft has added a new feature called Sparklines, which allows you to place a mini-graph or trend line in a single cell. Sparklines are a cool way to quickly and simply add a visual element without having to go through the effort of inserting a graph or chart that overwhelms the worksheet.
    9) Contact actions. If you hover over a name in the From or To fields of an email, you get a popup giving you several actions you can perform on the person, such as adding them to your Outlook contacts, scheduling a meeting, viewing their stored contact information if they are already in your contacts, sending an instant message or even starting a telephone call.
    10) Windows 7 Task Bar context menu – I love the jump list. I don’t know how much I would actually use it, but it just rocks.

    Read the article

  • Why is my laptop so sluggish? Or Damn You Facebook and Twitter! Or All Hail Chrome!

    - by John Conwell
    In the past three weeks, I've noticed that my laptop (dual core 2.1GHz, 2Gb RAM) has become amazingly sluggish.  I only use it for communications and data lookup workflows, so the slowness was tolerable.  But today I finally got fed up with the suckyness and decided to get to the root of the problem (I do have strong performance roots after all). It actually didn't take all that long to figure it out.  About a year ago I converted to Google Chrome (away from FireFox).  One of the great tools Chrome has is a "Task Manager" tool, that gives you Windows Task Manager like details for all the tabs open in the browser (Shift + Esc).  Since every tab runs in its own process, it's easy from Task Manager (either Windows or Chrome) to identify and kill a single performance-offending tab.  This is unlike IE, where you only get aggregate data about all open tabs.  Anyway, I digress.  Today my laptop sucked.  Windows Task Manager told me that I had two memory hogging Chrome tabs, but couldn't tell me which web page those tabs were showing.  Enter Chrome Task Manager, which tells you the page title, along with CPU, memory and network utilization of each tab.  Enter my amazement.  Turns out Facebook was using just shy of half a Gb of RAM.  Half a Gigabyte!  That's 512 Megabytes! 524,288 Kilobytes! 536,870,912 Bytes!  Or 4,294,967,296 Bits!  In other words, that's a frackin boat load of memory.  Now consider that Facebook is running on pretty much 96.3% (statistics based on absolutely nothing) of every household desktop, laptop, netbook, and mobile device in America, and that is pretty horrific! And I wasn't playing any Facebook games like FarmWars or MafiaVille.  I just had my normal, default home page up showing me who just had breakfast, or just got finished with their morning run. I'm sorry...let me say that again...HALF A GIG OF RAM!  That is just unforgivable. I can just see my mom calling me up:  Mom: "John...I think I need a new computer.  Mine is really slow these days" John: "What do you have running?" Mom: "Oh, just Facebook" John: "Ok, close Facebook and tell me how fast your computer feels" Mom: "Well...I don't know how fast it is.  All I do is use Facebook" John: "Ok Mom, I'll send you a new computer by Tuesday" Oh yea...and the other offending web page?  It was Twitter, using a quarter of a Gigabyte. God I love social networks!

    Read the article

  • Joomla Hide Menu Item, or: Using Rich Content as part of the navigation

    - by chiccodoro
    In my Joomla-based web site, I have a two-layer main menu. The page layout contains two sections, where the left one displays the content and the right one displays some other kind of content which at the same time serves as a menu. For example, if the user clicks on the "Products" - "SomeCategory" 2nd level menu item, the left section displays an image. The right section lists all products of that category. Each product is represented by an image and text. The content is scrollable. This section is implemented by means of a custom module (mod_custom) assigned to the menu. The content is rich text (HTML). Each product is entered manually by adding a picture and a text in the WYSIWYG editor, and by inserting a link for the picture and text. Now the issue: When the user clicks on a product, I want to display the corresponding product description article ("SomeProduct") to the left, accounting for the following requirements: the breadcrumb now displays "Products - SomeCategory - SomeProduct", and the main menu still displays the 2nd level for "Products", with "SomeCategory" still marked as selected. (I would love it if the right section which lists the products would remain in the exact same scroll state, but that's a completely different story.) If I link the product entry from the right hand side directly to the article "SomeProduct", then the article appears to the left, but the breadcrumb and menu are reset. So I wanted to create a hidden menu item "SomeProduct" beneath "SomeCategory", and to link the product entry to that menu item. This way, if I click on the product entry, the article appears to the left, the breadcrumb behaves correctly, and the menu state is preserved. However, it is not possible to configure the SomeProduct menu item as "hidden", therefore it appears in the main menu. I found some resources that suggest creating another menu, called "hidden", which does not use any modules, and creating the "SomeProduct" menu item in that menu. Unfortunately this did not work for me: if I link that menu item from the product entry, and click on that entry, then the article appears to the left, but the menu is reset, and the breadcrumb displays "SomeProduct" instead of "Products - SomeCategory - SomeProduct". Lucky me! I found an appropriate stackexchange site where I can pour out my heart to you guys. Surely you can help me :-)

    Read the article

  • Oracle JDeveloper 11gR2 Cookbook book review

    - by Chris Muir
    I recently received a free copy of Oracle JDeveloper 11gR2 Cookbook, published by Packt Publishing, for review. Readers of technical cookbooks will know this genre of text includes problems that developers will hit and the prescribed solutions, in this case for Oracle's Application Development Framework (ADF).  Books like this excel through thorough coverage, a logical progression of solutions throughout the book, and a readable narrative around the numerous steps and code. This book progresses well through ADF application assembly, ADF Business Components, the view layer, security, deployment and tuning.  Each recipe has a clear introduction and I especially enjoyed the "There's more" follow-up sections for some recipes that lead the reader onto related ideas and issues the reader really needs to be aware of. Also worthy of comment: having worked with ADF for over 5 years, there certainly were recipes and solutions I hadn't encountered before; this book gets bonus points for that. As a reviewer, what negatives can I give this text? The book has cast its net too wide by trying to cover "everything from design and construction, to deployment, testing, debugging and optimization."  ADF is such a large and sophisticated technology that this book, with 100 recipes, barely scrapes the surface.  Don't expect all your ADF problems to be solved here. In turn there is inconsistency in the level of problems and solutions.  I felt at the beginning the book was pitching itself at advanced problems to solve (that's great for me), but then it introduces topics like building a static View Object or train.  These topics, in my opinion, are fairly simple and are covered by the Oracle documentation just as well; they shouldn't have been included here.  In conclusion, ADF beginners will find this book worthwhile as it will open your eyes to the wider problems and solutions required for ADF, and experts will value it just for the fact they can point junior programmers at the book for certain problems and say "get on with it". Is there scope for more ADF tomes like this?  Yes!  I'd love to see a cookbook specializing in ADF Business Components (hint hint to budding authors).

    Read the article

  • A "First" at Oracle OpenWorld

    - by Kathryn Perry
    A guest post by Adam May, Director, Fusion CRM, Oracle Applications Development
    There are always firsts at OpenWorld. These firsts keep the conference fresh and are the reason people come back year after year. An important first this year is our Fusion CRM customers who are using the product and deriving real benefit from Fusion CRM. Everyone can learn from and interact with them -- including us!  We love talking to customers, especially those who are using our solutions in unexpected ways because they challenge us! At previous OpenWorlds, we presented our overall Fusion vision and our plans for Fusion CRM. Those presentations helped customers plan their strategies and map out their new release uptakes. Fast forward to March of this year when the first Fusion CRM customer went live. Since then we've watched the pace of go-lives accelerate every single month. Now we're at the threshold of another OpenWorld -- with over 45,000 attendees, 2,500 sessions and LOTS of other activities. To avoid having our customers curl into a ball with sensory overload, we designed a Focus On Document to outline the most important Fusion CRM activities. Here are some of the highlights:
    • Anthony Lye's "Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap" on Monday at 3:15 p.m.
    • The CRM Pavilion, open in Moscone West from Monday through Wednesday; features our strategic Fusion CRM partners and provides live demonstrations of their capabilities
    • General Session: "Oracle Fusion CRM--Improving Sales Effectiveness, Efficiency, and Ease of Use" on Tuesday at 11:45 a.m.; features Anthony Lye and Deloitte
    • "Meet the Fusion CRM Experts" on Tuesday at 5:00 p.m.; this session gives customers the opportunity to interact one-on-one with Fusion experts divided into eight categories of expertise
    • CRM Social Reception on Tuesday from 6-8 p.m.; there's no better way to spend the early evening than discussing Fusion CRM with Oracle experts and strategic partners over appetizers and drinks
    • Wednesday night is Oracle's Customer Appreciation event; enjoy Pearl Jam, Kings of Leon, etc. beginning at 7:30 p.m. at Treasure Island
    Be sure to drink plenty of water before sleeping Wednesday night and don't stay out too late because we have lots of great content on Thursday; at the top of the list is "Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement" at 11:15 a.m. We hope you have a fantastic experience at OpenWorld 2012! And here's a little video treat to whet your appetite: http://www.youtube.com/user/FusionAppsAtOracle

    Read the article

  • Gamification in the enterprise updates, September edition

    - by erikanollwebb
    Things have been a little busy here at GamifyOracle.  Last week, I attended a small conference in San Diego on Enterprise Gamification.  Mario Herger of SAP, Matt Landes of Google and I were on a panel discussion about how to introduce and advocate gamification in your organization.  I gave a talk as well as a workshop on gamification.  The workshop was a new concept, to take our Design Jam from Applications User Experience and try it with people outside of user experience.  I have to say, the whole thing was a great success, in great part because I had some expert help from Teena Singh from Apps UX.  We took a flow from expense reporting and created a scenario about sales reps who are on the road a lot and how we needed them to get their expense reports filed by the end of the fiscal year.  We divided the attendees into groups and gave them a little over two hours to work out how they might use game mechanics to gamify the flows.   We even took the opportunity to re-use the app our fab dev team in our Mexico Development Center put together to gamify the event including badges, points, prizes and a leaderboard.  Since I am a firm believer that you can't gamify everything (or at least, not everything well), I focused my talk prior to the workshop on when it works, and when it might not, including pitfalls to gamifying badly.  I was impressed that the teams all considered what might go wrong with gamifying expenses and built into their designs some protections against that.  I can't wait to take this concept on the road again, it really was a fun day. Now that we have gotten through that set of events, we're wildly working on our next project for next week.  I'm doing a focus group at Oracle OpenWorld on Gamification in the Enterprise.  To do that, Andrea Cantu and I are trying to kill as many trees as possible while we work out some gamification concepts to present (see proof below!).  It should be a great event and I'm hoping we learn a lot about what our customers think about the use of gamification in their companies and in the products they use. So that's the news so far from GamifyOracle land.  I'll try to get more out about those events and more after next week. And if you will be at OOW, ping me and we can discuss in person!  I'd love to know what everyone is thinking in the area.

    Read the article

  • Web Development Goes Pre-Visual InterDev

    - by Ken Cox [MVP]
    As a longtime and hardcore ASP.NET webforms developer, I’m finding the new client-side development world a bit of a grind.  I love learning new technologies, but I can’t help feeling we’ve regressed and lost our old RAD advantage as we move heavy lifting to the client. For my latest project, I’m using Telerik’s KendoUI in Visual Studio 2012. To say I feel clumsy writing this much JavaScript is an understatement. It seems like the only safe way to ‘write’ this code is by copying a working snippet from someone else and pasting it into my HTML page.  For me, JavaScript has largely been for small UI tasks like client-side validation and a bit of AJAX – and often emitted by a server-side control. I find myself today lost in nests of curly braces that Ctrl+K, Ctrl+D doesn’t seem to understand that well either. IntelliSense, my old syntax saviour, doesn’t seem to have kept up with this cobweb of code either. Code completion? Not seeing it. As I fumbled about this evening, I thought about how web development rocketed forward when Microsoft introduced Visual InterDev. Its Design-Time Controls (DTCs) changed the way we created sites. All the iterations of Visual Studio have enhanced that server-side experience where you let a tool write the bulk of the code and manually finesse it from there. What happened? Why am I typing  properties and values (especially default values!) into VS 2012 to get a client-side grid on a page? Where are the drag and drop objects that traditionally provided 70 percent of the mark-up and configuration?  Did we forget how to write Property Pages where you enter a value and the correct syntax appears magically in the source code? To me, the tooling was looking the other way as the scene shifted from server-side code to nimble client-side script. It’ll have to catch up. Although JavaScript is the lingua franca of web browsers, the language is unwieldy, tough to maintain, and messy to debug. If a .NET JIT compiler can turn our VB, F#, and C# source code into an Intermediate Language that executes on a computer, I don’t see why there can’t be a client-side compiler that turns a .NET language into JavaScript that browsers can consume.

    Read the article

  • Wired and Wireless Network Issues with PPPoE

    - by user9054
    Hi Friends, I have got this issue with Ubuntu 10.10. I had been with Ubuntu 8.04 and then decided to try out Ubuntu 10.10. I booted with a LiveCD and was able to configure the wireless network painlessly using the LiveCD, so happily I installed Ubuntu 10.10. As soon as Ubuntu came up it detected the wireless network and I was able to assign a static IP to eth1 (I don't use the DHCP option on my ADSL router), enter a WPA key and use pppoeconf to configure the dialer. The net was on and I was able to surf the net. All hunky dory so far. However on the next boot the fun started. It did not detect the wireless network. I could not see the network manager icon in the systray. I used ifconfig and saw that the entry for eth1 was missing. I used ifup eth1 and it said that eth1 was already up. Then I installed wifi-radar. Wifi-Radar detected the wireless network. I configured wifi-radar for the detected wireless network, set the WPA driver as wext and used the manual IP settings. However on clicking connect, wifi-radar started looking for a DHCP IP; needless to say it failed. For the love of god I cannot understand why wifi-radar is using DHCP when I have specified manual settings. Next I decided to use the wired network to surf the net looking for a solution. So I plugged in the network cable from my modem, it detected the plugged in connection, I configured eth0, used pppoeconf and connected to the net. Then I foolishly decided to reboot my PC. And wonder of wonders, the same problem appeared. I cannot see eth0 in my ifconfig anymore. I used pon to start the dsl-provider connection and it said something about a network error. Now my ifconfig shows only lo; both eth0 and eth1 have disappeared. Can anybody help me with this? Is it a problem with ipv6, and if so, how do you disable ipv6 on Ubuntu 10.10? OR is this a known issue with Ubuntu 10.10? PS: 1) I tried Linux Mint 10 and had the same issue; on rebooting, the wireless network was not getting detected. 2) I have made myself the administrator so that there is no issue of rights or anything. Any help is appreciated.

    Read the article

  • Why should I not do a masters degree

    - by aurel
    I left university in July 2010, where I studied web design (as we all know you learn more by yourself, but that’s not the issue at the moment). Since then I have not managed to find a job (apart from one month of work experience), and from the way things are going, and taking into account the fact that all my university friends are in the same situation, I don’t think that I am going to find a job (within the industry) soon. Now, as we all do, even though I don’t have a job, I am still working on personal projects and trying to keep up to date (I don’t need a job or uni to do this) – but I am thinking, because there is no work available, would it be worth going back to uni for a master’s degree? I know I don’t need it and I know that it is unlikely that I will learn anything important, as I believe in self-learning, and in most cases it is a lot more effective (but I have to say I don’t mind going back to school). The only reason I am thinking of doing the master’s is (and this is where I need your help): if it takes me a year to get a job, then at the interview, would the employer think “what the hell did this guy do since he left university”? Now, if I go to university, that would solve this problem. Or am I making up a problem that does not exist? Plus, I know that employers need examples of sites that I have been working on, and at the moment I only have 3 (as when working on personal projects, where there is no time limit, I tend to drag things out in order to get them perfect, and they never get perfect) – so by going back to uni, this problem may be solved. I say all this because I have read a lot about the fact that you don’t need a master’s degree to work in the web design market (and I totally agree), but considering my concern, the question is: should I do a master’s course to avoid just spending hours in my room working and learning on my own (given that it would be hard to convince employers that I was really learning in my room)? Maybe it’s because I’m still young (age 22, not that old anyway :)), but I don’t have the “dream” of being rich, so if I were to tell the truth I don’t really care about the fact that I don’t have a job (at the moment), because regardless, I am working on what I love every day. But I know that in the future, when I do need a job, I may find it harder to get one if I neglect this now. Every time I ask a question that I’m not sure about, I keep going on and on, but I really hope you get what I am trying to get across. By the way, the course that I am looking at for a master’s says that it would teach me how to do these: e-commerce, e-government, e-science, e-learning. I don’t know any of them, apart from e-commerce. Thanks

    Read the article

  • MS in Computer Science after BE in electronics

    - by Abhinav
    I am doing the 3rd year of my Bachelors in Electronics and Electrical Communication, but from the first year I have been interested in Computer Science. At that time it was just my hobby, but in second year, when I joined robotics, my love for computer science grew. My team and I came in the top three in 2 national competitions (technical fests of different IITs) where we used image processing, hardware interfacing etc. But then I realised that Computer Science is not just about coding. I took many lectures from online free schools like Udacity and Coursera in subjects related to Artificial Intelligence, Building a Search Engine, Design and Analysis of Algorithms, Programming a Robotic Car, Programming Languages, Machine Learning, Software Engineering as a Service, WebApps Engineering, Compilers, Applied Cryptography etc. I also did some courses in Core and Advanced Java in my second year at a training institute, and I will be taking courses in Statistics, Databases and Discrete Mathematics from 25th June. Now I realize how vast the field of Computer Science is, and how efficient you become at deciding on algorithms and classifying problems into different subfields which have been thoroughly researched, so you don't always resort to brute force or naive programming. Now this field has become a kind of passion for me. Added to that, I am also doing a 6-month internship in the software field at Texas Instruments, where I am working on automation and algorithms. I also have some 5-6 good college-level projects in software and robotics. I really want to pursue an MS in some field of Computer Science. I am giving the GRE in October/November. So far I have a good CGPA of around 9.4/10, and 1 year of my college is still left. Do I have any chance that some good university in the US will consider me for an MS in a field related to Computer Science or robotics? Also, can you suggest some things that I can do during this 1 year to increase my chances for an MS, or should I apply for EECS (Electrical Engineering and Computer Science) and then shift more towards Computer Science as my major option? My main aim is to do a PhD after an MS in CS if I am able to do that somehow. I know that I will have to put in much more effort than CS undergraduates to understand things during the MS, but I will do that with my full dedication; also, when I communicate with the CS students at my college, or during my internship period, I didn't feel that I was missing very much of the stuff that they know, and I was very comfortable working with the software employees during my internship.

    Read the article

  • Size doesn't matter

    - by ssoolsma
    Whenever I start a new project I *always* break up my code into different projects, also known as an n-tier solution. The scale of the project doesn't matter, but make sure that each project is responsible for itself. I make sure that I:
    1. At least thought about how the project should work, on the toilet or in a project team meeting.
    2. Have a solution directory and create my projects within it. I like to name my projects (and their folders) by their namespaces. For instance, when I'm creating a piece of (web) software called ChuckNorris, I always include the software name in my projects.
    3. Start off with designing the DataAccess project. I name it ChuckNorris.DataAccess, which lets me easily identify the project in case the project scales a lot.
    4. Build the classes which represent the database structure. Don't stop working on a class until it's finished for now. Also, don't overdo the methods. Build stuff only when it's needed, and don't think "Hm, that would be cool to have", because most of the time you end up with unused code, and we don't want that.
    5. Build a unit test project, and make sure you create its folder inside the project that it's testing. So, create the ChuckNorris.DataAccess.UnitTest project inside the folder of the DataAccess project. I would suggest using the NUnit test framework.
    6. In case you thought, "Hm, I'll skip the unit tests": Don't! Just build them - it will save you a lot of time later on.
    7. Now, read 5 again. Build that bloody unit test. Don't skip it. (I can't emphasize this enough.)
    8. Now, every class in the DataAccess project is responsible for itself. They don't rely on each other. This is what we use the BusinessLogic project for. Start creating the ChuckNorris.BusinessLogic project (not inside the data-access project of course, but within the ChuckNorris folder). Combine stuff from data-access. This usually involves a lot of copying the data-access classes and feels silly at first (we'll get to that later on).
    9. Now you come to the point of creating a service project. You might not always see why to use it, but see it as a way to expose your business logic to any application (including your own). Sometimes I use it as a so-called "Factory". Every call goes through this factory, so that's the only thing I'm exposing to any program, and I make sure that those methods are the only ones that I allow you to invoke.
    10. Build any UI (website, phone app, forms application, Silverlight, WPF or whatever) and reference your service project from it.
    11. Fall in love (cough) with this approach.
    It's possible that this doesn't seem to make much sense yet, and is very incomplete. Well, that last part is correct. The next post will go into the details of setting up your Data-Access project and using the Entity Framework.
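
    As a rough C# sketch of the layering described above (the CustomerRepository, CustomerLogic and ServiceFactory names are hypothetical illustrations, not part of the original post), the projects might reference each other like this:

        // ChuckNorris.DataAccess - talks to the data store only; no business rules here.
        namespace ChuckNorris.DataAccess
        {
            public class CustomerRepository
            {
                public string GetCustomerName(int id)
                {
                    // Placeholder for a real database lookup.
                    return "customer-" + id;
                }
            }
        }

        // ChuckNorris.BusinessLogic - combines and orchestrates data-access classes.
        namespace ChuckNorris.BusinessLogic
        {
            using ChuckNorris.DataAccess;

            public class CustomerLogic
            {
                private readonly CustomerRepository repository = new CustomerRepository();

                public string DescribeCustomer(int id)
                {
                    return "Customer: " + repository.GetCustomerName(id);
                }
            }
        }

        // ChuckNorris.Services - the "factory" layer; the only thing exposed to any UI.
        namespace ChuckNorris.Services
        {
            using ChuckNorris.BusinessLogic;

            public static class ServiceFactory
            {
                public static string DescribeCustomer(int id)
                {
                    return new CustomerLogic().DescribeCustomer(id);
                }
            }
        }

    A UI project (website, phone app, WPF and so on) would then reference only ChuckNorris.Services and call ServiceFactory.DescribeCustomer.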

    Read the article

  • Reporting on common code smells : A POC

    - by Dave Ballantyne
    Over the past few blog entries, I’ve been looking at parsing TSQL scripts in a variety of ways for a variety of tasks.  In my last entry, ‘How to prevent ‘Select *’ : The elegant way’, I looked at parsing SQL to report upon uses of SELECT *.  The obvious question leading on from this is, “Great, what about other code smells?”  Well, using the language service parser to do that was turning out to be a bit of a hard job; sure, I was getting tokens, but no real context.  I wasn't even being told when an end of statement had been reached. One of the other parsing options available from Microsoft is exposed in the assembly ‘Microsoft.SqlServer.TransactSql.ScriptDom’, which is, I believe, installed with the client development tools for SQL Server.  It is much more feature-rich than the original parser I had used and breaks a TSQL script into intuitive classes for analysis. So, what sort of smells can I now find using it?  Well, for an opening gambit, quite a nice little list:
    • Use of NOLOCK
    • Set of READ UNCOMMITTED
    • Use of SELECT *
    • Insert without column references
    • Explicit datatype conversion on Sargs
    • Cross server selects
    • Non use of two-part naming convention
    • Table and Query hint usage
    • Changes in set options
    • Use of single line comments
    • Use of ordinal column positions in ORDER BY clause
    Now, let's not argue the point that “it depends” whether some of these are smells, but as an academic exercise it is quite interesting.  The code is available from this link: https://www.dropbox.com/s/rfk32sou4fzl2cw/TSQLDomTest.zip  All the usual disclaimers apply to this code; I cannot be held responsible for anything ranging from mild annoyance through to universe destruction due to the use of this code or examples. The zip file contains a PowerShell script and my test cases.  The assembly used requires .Net 4 to run, which means that you will need PowerShell 3 (though I'm running through PowerGUI and all works OK).  The code searches for all .sql files in the folder hierarchy of the working path (you can override this if you want by simply changing the $Folder variable) and processes each in turn for the smells.  Feedback is not great at the moment; all it does is output to an XML file (Smells.xml) the offset position and a description of the smell found. Right now, I am interested in your feedback.  What do you think?  Is this (or should it be) more than an academic exercise?  Can tooling such as this be used as some form of code quality measure?  Does it work?  Do you have a case listed above which is not being reported?  Do you have a case that you would love to be reported?  Let me know, please: mailto: [email protected]. Thanks
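
    As a rough illustration of the approach (the actual download is a PowerShell script; the class and method names below are my own assumptions, not the author's code), a C# visitor over the ScriptDom parse tree that reports one of the smells above, use of SELECT *, might look like this:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using Microsoft.SqlServer.TransactSql.ScriptDom;

        // Visits every SELECT * expression found in the parsed script.
        class SelectStarVisitor : TSqlFragmentVisitor
        {
            public override void Visit(SelectStarExpression node)
            {
                Console.WriteLine("SELECT * found at line {0}, column {1}",
                                  node.StartLine, node.StartColumn);
            }
        }

        class SmellFinder
        {
            static void Main(string[] args)
            {
                var parser = new TSql100Parser(true);   // true = QUOTED_IDENTIFIER on
                IList<ParseError> errors;

                using (var reader = new StreamReader(args[0]))   // path to a .sql file
                {
                    TSqlFragment fragment = parser.Parse(reader, out errors);
                    if (errors.Count > 0)
                    {
                        foreach (var e in errors)
                            Console.WriteLine("Parse error: {0} (line {1})", e.Message, e.Line);
                        return;
                    }
                    // Walk the whole tree; the visitor reports each offending node.
                    fragment.Accept(new SelectStarVisitor());
                }
            }
        }

    Each of the other smells can be reported the same way by overriding Visit for the relevant fragment type.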

    Read the article

  • What Counts for a DBA: Passion

    - by drsql
    One of my first questions, when interviewing for a DBA/Programmer position, is always: “Why do you want this job?” The answers I receive range from cheesy hyperbole (“I want to enhance your services with my vast knowledge”) to deadpan realism (“I have N kids who all have a hole in the front of their face where food goes”). Both answers are fine in their own way, at least displaying some self-confidence, humour and honesty, but once in a while, I'll hear the answer that is music to my ears... “I LOVE DATABASES!” Whenever I hear it, my nerves tingle in hopeful anticipation; have I found someone for whom working with databases isn't just a job, but a passion? Inevitably, I'm often disappointed. What initially seemed like passion turns out to be rather shallow enthusiasm; the person is enthusiastic about working with databases in the same way he or she might be about eating a bag of Cajun spiced kettle chips; enjoyable, but not something to think about too deeply or take too seriously. Enthusiasm comes, and enthusiasm goes. I've seen countless technical forum users burst onto the scene in a blaze of frantic question-answering, only to fade away within days, never to be heard from again. Passion, however, is more of a longstanding commitment. The biographies of the great technologists and authors of the recent past are full of the sort of passion and engrossment that lead a person to write a novel non-stop for a fortnight with no sleep and only dog food to eat (Philip K. Dick), or refuse to leave the works of the first tunnel under the Thames, even though it was flooded (Brunel). In a similar (though more modest) way, my passion for working with databases has led me to acts that might cause someone for whom it was "just a job" to roll their eyes in disbelief. Most evenings you're more likely to find me reading a database book than watching TV. I've spent hundreds of hours of my spare time writing blogs and articles (some of which are only read by tens of people); I've spent hundreds of dollars travelling to conferences, paying my own flight and hotel expenses, so that I can share a little of what I know, and mix with some like-minded people. And I know I'm far from alone in this, in the SQL Server community. Passion isn't everything, of course, and it isn't always accompanied by any great skill, but in almost every case, that skill can be cultivated over time. If you are doing what you are passionate about, work turns into more than just a way to feed your kids; it becomes your hobby, entertainment, and preoccupation. And it is this passion that gives a DBA the obsessive stubbornness, the refusal to be beaten by even the most difficult problem, which is often so crucial. A final word of warning though: passion without limits can turn weird. Never let it get in the way of your wife, kids, bills, or personal hygiene.

    Read the article

  • What to use C++ for?

    - by futlib
    I really love C++. However, I'm struggling to find good uses for it lately. It is still the language to use if you're building huge systems with huge performance requirements. Like backend/infrastructure code at Google and Facebook, or high-end games. But I don't get to do stuff like that. It's also a good choice for code that runs close to the hardware. I'd like to do more low-level stuff, but it isn't part of my job, and I can't think of useful private projects that would involve that. Traditionally, C++ was also a good choice for rich client applications, but those are mostly written in C# and Obj-C lately - and aren't really that important anymore, with everything being a web app. Or a mobile app, which are mostly written in Obj-C and Java. And of course, web-based desktop and mobile apps are quite prominent, too. At my job, I work mostly on web applications, using Java, JavaScript and Groovy. Java is a good/popular choice for non-Google-scale backends, Groovy (or Python, or Ruby or Node.js) is pretty good for the server-side of web apps and JavaScript is the only real choice for the client-side. Even the little games I'm writing in my spare time are lately mostly written in JavaScript, so they can run in the browser. So what would you suggest I could use C++ for? I'm aware that this question is very similar. However, I don't want to learn C++, I was a professional C++ programmer for years. I want to keep doing it and find good new use cases for it. I know that I can use C++ for web apps/games. I could even compile C++ to JavaScript with Emscripten. However, it doesn't seem like a good idea. I'm looking for something C++ is really good at to stay competent in the language. If your answer is: Just give up and forget C++, you'll probably never need it again, so be it.

    Read the article

  • Oracle Customer Success Forum - Batesville - Oracle Sales Cloud - June 24th, 5pm CET

    - by Richard Lefebvre
    Batesville uses Oracle Sales Cloud to create a common platform and standardize processes for business transformation across field sales and telesales. Using real-time KPI dashboards, they are measuring their business success with consistency across their sales reps.We are pleased to invite you to a discussion with Batesville on industry trends, why sales automation is important, reasons for choosing Oracle Sales Cloud, and the vendor evaluation process. Please click on the register button to confirm your attendance by 5:00 p.m. Pacific Time on June 23, 2014.Speakers: Diane Kinker, Director CRM Program Chris Haven, Senior Director Product Management, Oracle (Moderator) Organization Profile:Batesville (www.Batesville.com), a wholly owned subsidiary of Hillenbrand, Inc. (NYSE:HI), is the leader in the North American death care industry. For more than 125 years, Batesville has been dedicated to helping families honor the lives of those they love®. Batesville’s innovation has changed the face of funeral service, from advancements in manufacturing and quality to patented features and memorialization offerings, technology and web-based solutions, and profit-enhancing merchandising systems and room displays. Our history of manufacturing excellence, product innovation, superior customer service and reliable delivery has helped Batesville become – and remain – a market leader. Event Description:In this informal reference call, you will have the opportunity to hear Batesville discuss industry trends, why sales automation is important, the decision making process for choosing Oracle Sales Cloud, and the vendor evaluation process. The call will open with a brief overview, followed by discussion, and an open question and answer session. Please allow one hour for the call.Why Oracle:Batesville looked to transform its sales automation processes. Oracle Sales Cloud met these needs and Batesville’s requirements for: Standardized end-to-end Sales Processes including Sales Performance Management (territory management, quota management and incentive compensation) Mobile capabilities with integration to Microsoft Outlook and Smartphones Creation of the WIG Dashboard (Wildly Important Goal) using reporting and analytics Click the Register Now button to confirm your attendance for this informative event. Registration will close at 5:00 p.m. Pacific Time on June 23, 2014.After you register your information will be forwarded through an Approval Process. Once your registration request has been validated against the invitation database, you will receive an email confirmation with your registration details as long as there is availability. Please be advised that Batesville will revise the registrants list and may dismiss registrations as they see fit. Register Now!

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering 2 main approaches here:
    • an async approach, basically based on signals, or just an async approach as it is called by many papers and languages (like the new C# 5.0, for example), with a "companion thread" that manages the policy of your pipeline
    • a concurrent, multi-threading approach
    I will just say that I'm thinking about the hardware here and the worst case scenario, and I have tested these 2 paradigms myself. The async paradigm is such a clear winner that I don't get why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources. I have tested multi-threaded programs and an async program on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads like 3-4-5 can be a problem, and the application is unresponsive and just slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either; my application just waits for the result and doesn't hang, it's responsive and it scales much better. I have also discovered that a context switch in the threading world isn't that cheap in a real world scenario; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed. On modern CPUs the situation isn't really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that the newer CPUs have more than 2 cores, is no bargain for me. From what I have experienced, the concurrent approach is good in theory but not that good in practice; with the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it also introduces a lot of issues ranging from the use of my data structures to joining multiple threads. Also, both paradigms offer no guarantee about when the task or the job will be done at a certain point in time, making them really similar from a functional point of view. Given the x86 memory model, why do the majority of people suggest using concurrency with C++ rather than just an async approach? And why not consider the worst case scenario of a computer where the context switch is probably more expensive than the computation itself?
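
    Since the question mentions C# 5.0, here is a minimal sketch (my own illustration, not code from the post) contrasting the two approaches: async/await releases the waiting thread, while the threaded version keeps one blocked thread per unit of work.

        using System;
        using System.Diagnostics;
        using System.Threading;
        using System.Threading.Tasks;

        class AsyncVsThreads
        {
            // Async approach: while the simulated I/O is pending, no thread is blocked;
            // the continuation runs when the awaited task completes.
            static async Task<int> WorkAsync(int id)
            {
                await Task.Delay(500);      // simulated I/O wait, the thread is released
                return id * 2;              // cheap computation after the wait
            }

            // Threaded approach: each unit of work gets its own thread, which sits
            // blocked in Sleep and still costs a stack plus context switches.
            static int WorkOnThread(int id)
            {
                Thread.Sleep(500);          // simulated I/O wait, the thread stays blocked
                return id * 2;
            }

            static void Main()
            {
                var sw = Stopwatch.StartNew();

                // Async: start ten operations, then wait for them all.
                var tasks = new Task<int>[10];
                for (int i = 0; i < 10; i++) tasks[i] = WorkAsync(i);
                Task.WaitAll(tasks);
                Console.WriteLine("async done in {0} ms", sw.ElapsedMilliseconds);

                // Threads: ten dedicated threads doing the same waiting.
                sw.Restart();
                var threads = new Thread[10];
                for (int i = 0; i < 10; i++)
                {
                    int id = i;                          // capture the loop variable
                    threads[i] = new Thread(() => WorkOnThread(id));
                    threads[i].Start();
                }
                foreach (var t in threads) t.Join();
                Console.WriteLine("threads done in {0} ms", sw.ElapsedMilliseconds);
            }
        }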

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regards to modularity, separation of concerns and re-usability. I'm working on an application (ASP.Net, C#) that has distinctly generic chunks of functionality that I'd love to be able to lift out, all layers, into re-usable components. This means the module handles the database schema, data access, API, everything, so that the next time I want to use it I can just register the module and hook into it. Developing modules of re-usable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store. My question is: what do you think is the best way to achieve this, firstly in terms of design architecture, and secondly in terms of solution structure? What patterns/frameworks do you know that exist and support this kind of thing? My thoughts so far (I mostly use Entity Framework and SQL Server DB Projects):
    • I thought about a 'black box' approach to modules of functionality. I could use a code-first approach in EF4, and use the ObjectContext to create a database when the module is initialized. However this means that all of the entities that my module encapsulates would be disconnected from the rest of the application, because they belong to an abstracted ObjectContext. Further, creating appropriate indexes and references between domain entities and the module's entities would be practically impossible.
    • I've thought of adopting Enterprise Library and creating my own Application Blocks. I'm not sure how this would play nicely with Entity Framework (if at all), though. I like the idea of building on proven patterns and practices to encapsulate established, reusable functionality.
    • I thought of abandoning Entity Framework for the module, and just creating a separate DB schema for the module with its own set of stored procedures and ADO.Net, then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for development outside of the module, I would want to use Entity Framework, and I would have to use the module separately, disconnected from the domain ObjectContext.
    Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?
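
    For the 'black box' option above, a minimal sketch of what a self-initializing module could look like is shown below. This is my own illustration using the later EF code-first DbContext API rather than the EF4 ObjectContext discussed in the question, and the AuditModule/AuditContext names are hypothetical.

        using System;
        using System.Data.Entity;   // Entity Framework code-first API

        namespace ReusableModule.Auditing
        {
            public class AuditEntry
            {
                public int Id { get; set; }
                public string Action { get; set; }
                public DateTime OccurredAt { get; set; }
            }

            // The module's private context: its entities stay disconnected from the
            // host application's domain context, which is exactly the trade-off
            // discussed above.
            public class AuditContext : DbContext
            {
                public AuditContext(string connectionString) : base(connectionString) { }
                public DbSet<AuditEntry> Entries { get; set; }
            }

            public static class AuditModule
            {
                // Called once at application start-up ("registering" the module);
                // creates the module's database, with its tables, if it does not exist.
                public static void Register(string connectionString)
                {
                    Database.SetInitializer(new CreateDatabaseIfNotExists<AuditContext>());
                    using (var ctx = new AuditContext(connectionString))
                    {
                        ctx.Database.Initialize(force: false);
                    }
                }

                // The module's public API: callers never touch its schema directly.
                public static void Record(string connectionString, string action)
                {
                    using (var ctx = new AuditContext(connectionString))
                    {
                        ctx.Entries.Add(new AuditEntry { Action = action, OccurredAt = DateTime.UtcNow });
                        ctx.SaveChanges();
                    }
                }
            }
        }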

    Read the article

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

    Read the article

  • Traditional POS is Dead

    - by David Dorf
    Traditional POS is dead -- I've heard that one before. Here's an excerpt from Joe Skorupa's blog over at RIS where he relayed ten trends that were presented at NRF. 7. Mobile POS signals death of traditional POS. Shoppers don't love self-checkout, but they prefer it to long queues or dealing with associates. Fixed POS is expensive and bulky. Mobile POS frees floor space for other purposes and converts associates from being cashiers to being sales assistants that provide new levels of customer service and incremental basket sales. In addition to unplugging the POS, new alternatives are starting to take hold - thin client, POS as a service, and replacing POS software with e-commerce platforms. I'll grant that in some situations for some retailers there might be an opportunity to ditch the traditional POS, but for the majority of retailers that's just not practical. Take it from a guy that had to wake up at 3am after every Thanksgiving to monitor POS systems across the US on Black Friday. If a retailer's website goes down on Black Friday, they will take a significant hit. If a retailer's chain-wide POS system goes down on Black Friday, that retailer will cease to exist. Mobile POS works great for Apple because the majority of purchases are one or two big-ticket items that don't involve cash. There's still a traditional POS in every store to fall back on (it's just hidden). Try this at home: Choose your favorite e-commerce site and add an item to the cart while timing how long it takes. Now multiply that by 15 to represent the 15 items you might buy at a store like Target. The user interface isn't optimized for bulk purchases, and that's how it should be. The webstore and POS are designed for different purposes. Self-checkout is a great addition to POS and so is mobile checkout. But they add capabilities to POS, not replace it. Centralized architectures, even those based in the cloud, are quite viable as long as there's resiliency in the registers. You cannot assume perfect access to the network, so a POS must always be able to sell regardless of connectivity. Clearly the different selling channels should be sharing common functionality. Things like calculating tax, accepting coupons, and processing electronic payments can be shared, usually through a service-oriented architecture. This lowers costs and provides greater consistency, both of which help retailers. On paper these technologies look really good and we should continue to push boundaries, but I'm not ready to call the patient dead just yet.

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user’s perception of the “ideal” time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren’t noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn’t just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I’d now have had all sorts of animations, whizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure. Never mind the weight, the response time’s great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I’m a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it’s no longer the virtue it once was, and other factors such as user experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn’t go faster than my brain.
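    One small way to keep that 200 ms figure honest is to treat it as an explicit budget during development. The sketch below is illustrative only - the threshold, names, and the simulated slow screen are mine, not from any particular framework - but it shows the idea: a decorator flags any operation that blows the budget, so the slowness is noticed by the developer before a user ever feels it.

    import functools
    import time

    RESPONSE_BUDGET_SECONDS = 0.200  # the "feels instant" threshold discussed above

    def within_budget(func):
        """Warn whenever the wrapped operation exceeds the response-time budget."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed > RESPONSE_BUDGET_SECONDS:
                print(f"{func.__name__} took {elapsed * 1000:.0f} ms "
                      f"(budget {RESPONSE_BUDGET_SECONDS * 1000:.0f} ms)")
            return result
        return wrapper

    @within_budget
    def load_customer_screen():
        time.sleep(0.35)  # stand-in for a slow query or screen render

    load_customer_screen()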

    Read the article

  • The Softer Side of Customer Experience

    - by Christina McKeon
    It’s election season in the U.S., and you know what that means. It means I stop by the recycling bin in my garage before entering the house with the contents of my mailbox. A couple of weeks ago, I was doing my usual direct mail purge when I came across a piece from The Container Store®. This piece would have gone straight to the recycling bin, but the title stopped me: Learn what WE STAND FOR! Under full disclaimer, I’m probably a “frequent flier” at The Container Store. One can never be too organized! Now, back to the direct mail piece. I opened it to discover that The Container Store has taken their customer experience beyond “a shopping experience that makes you smile” to giving customers more insight and transparency into how they feel about their employees, the vendors they partner with, and the communities they live in. The direct mail piece included several employees showcasing a skill, hobby or talent with their photo and a personal note that used one word to describe what these employees believe The Container Store stands for. I do not recall the last time I read through an entire piece of direct mail. But this time, I pored over all the comments and photos.  Summer, a salesperson, believes that one word is PASSION. Thomas in distribution center inventory systems chooses the word ACTION. The list goes on to include MATCHLESS, FUN, FAMILY, LOVE, and EMPOWERMENT. The Container Store is running a contest asking you to tell them what nonprofit organization you stand for. Anyone can submit their favorite nonprofit to win cash, products and services from The Container Store. Don’t forget about the softer side of customer experience. With many organizations working feverishly to transform their business into being more customer-centric, it’s easy to get caught up in processes and technology. Focusing on people and social responsibility often falls behind and becomes a lower priority. Keeping people and social responsibility at the forefront is crucial. Your customers will use your processes and technology, but they will see or hear your people and feel their passion. The latter is what they will remember most about your brand. I’m sure there are many other great examples of the softer side of customer experience. Please share your examples in the comments section.

    Read the article

  • Data Model Dissonance

    - by Tony Davis
    So often at the start of the development of database applications, there is a premature rush to the keyboard. Unless, before we get there, we’ve mapped out and agreed the three data models, the Conceptual, the Logical and the Physical, the inevitable refactoring will dog development work. It pays to get the data models sorted out up-front, however ‘agile’ you profess to be. The hardest model to get right, the most misunderstood, and the one most neglected by the various modeling tools, is the conceptual data model, and yet it is critical to all that follows. The conceptual model distils what the business understands about itself, and the way it operates. It represents the business rules that govern the required data, its constraints and its properties. The conceptual model uses the terminology of the business and defines the most important entities and their inter-relationships. Don’t assume that the organization’s understanding of these business rules is consistent or accurate. Too often, one department has a subtly different understanding from another of what an entity means and what it stores. If our conceptual data model fails to resolve such inconsistencies, it will reduce data quality. If we don’t collect and measure the raw data in a consistent way across the whole business, how can we hope to perform meaningful aggregation? The conceptual data model has more to do with business than technology, and as such, developers often regard it as a worthy but rather arcane ceremony like saluting the flag or only eating fish on Friday. However, the consequences of getting it wrong have a direct and painful impact on many aspects of the project. If you adopt a silo-based (a.k.a. domain-driven) approach to development, you are still likely to suffer by starting with an incomplete knowledge of the domain. Even when you have surmounted these problems so that the data entities accurately reflect the business domain that the application represents, there are likely to be dire consequences from abandoning the goal of a shared, enterprise-wide understanding of the business. In reading this, you may recall experiences of the consequence of getting the conceptual data model wrong. I believe that Phil Factor, for example, witnessed the abandonment of a multi-million dollar banking project due to an inadequate conceptual analysis of how the bank defined a ‘customer’. We’d love to hear of any examples you know of development projects poleaxed by errors in the conceptual data model. Cheers, Tony
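    By way of illustration only - the entities, definitions, and attributes below are invented, not taken from any real project - the following sketch shows how two departments’ subtly different understandings of a ‘customer’ can be surfaced and reconciled while you are still at the conceptual-model stage, before any physical schema exists:

    # Each department's working definition of the entity, captured as plain data.
    sales_view = {
        "Customer": {
            "means": "anyone who has placed at least one order",
            "attributes": {"customer_id", "name", "email"},
        },
    }
    billing_view = {
        "Customer": {
            "means": "any party with an open or settled account",
            "attributes": {"account_id", "legal_name", "billing_address"},
        },
    }

    def report_dissonance(view_a: dict, view_b: dict) -> None:
        """Flag entities that the two departments define or describe differently."""
        for entity in view_a.keys() & view_b.keys():
            a, b = view_a[entity], view_b[entity]
            if a["means"] != b["means"]:
                print(f"{entity}: definitions disagree -> '{a['means']}' vs '{b['means']}'")
            unshared = a["attributes"] ^ b["attributes"]
            if unshared:
                print(f"{entity}: attributes not shared by both views: {sorted(unshared)}")

    report_dissonance(sales_view, billing_view)

    Resolving those disagreements is exactly the work the conceptual model exists to do; only once the two views converge is it safe to move on to the logical and physical models.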

    Read the article
