Search Results

Search found 20021 results on 801 pages for 'software engineering learner'.


  • Kleo Bare Metal Backups review

    Linux User and Developer: "Kleo Bare Metal Backups is a freely distributed product from Carroll-Net, Inc (http://carroll.net), a company that has been in the business of protecting and retrieving data for over 15 years. This experience shows in the design of the software."

    Read the article

  • Wireless on Inspiron 1501 not working on Ubuntu 12.04 LTS

    - by Jeek C
    As titled, I have Ubuntu 12.04 LTS on an Inspiron 1501, and the wireless has refused to work so far. Below is what I have tried:

    - Activating the Broadcom driver via System Settings > Additional Drivers
    - Using the commands below:
      sudo apt-get remove bcmwl-kernel-source
      sudo apt-get install firmware-b43-installer
    - Installing the cutter software

    Sadly, none of these got the wireless working. Is there anything else I can try?

    Read the article

  • What's the best book for coding conventions?

    - by Joschua
    What's the best book about coding conventions (and perhaps design patterns) that you highly recommend, ideally with code samples in Python, C++ or Java? It would be good if the book (or another one) also covered project management and agile software development where appropriate (for example, how projects fail through spaghetti code). I will accept the answer with the book(s) that look the most interesting (maximum two books per answer, please), because the reading might take a while :)

    Read the article

  • Why do some large companies use a different domain for sending emails?

    - by Andrei Rinea
    I've received notifications and newsletters from Microsoft and Facebook in the past and noticed that neither email came from an address such as [email protected] or [email protected]. Not even [email protected]; both had different domains, such as [email protected] and [email protected]. Why is this? Is there any particular advantage in doing so? Other than not polluting the employees' email software, I can't see one.

    Read the article

  • Clustering for Mere Mortals (Pt3)

    - by Geoff N. Hiten
    The Controller

    Now we get to the meat of the matter. If you want a virtual cluster, the first thing you have to do is create your own portable domain. Start with a plain vanilla install of Windows 2003 R2 Standard on a semi-default VM (1 GB RAM, 2 cores, 2 NICs, 128 GB dynamically expanding VHD file). I chose this because it had the smallest disk and memory footprint of any currently supported Microsoft Server product. I created the VM with a single dynamically expanding VHD, one fixed 16 GB VHD, and two NICs. One NIC is connected to the outside world and the other is part of an internal-only network. The first NIC is set up as a DHCP client. We will get to the other one later.

    I actually tried this with Windows 2008 R2, but it failed miserably. Not sure whether it was 2008 R2 or the fact that I tried to use cloned VMs in the cluster. Clustering is one place where NewSID would really come in handy. Too bad Microsoft bought and buried it.

    Load and patch the OS (hence the need for the outside connection). This is a good time to go get dinner. Maybe a movie too. There are close to a hundred patches that need to be downloaded and applied. Avoiding that mess was why I put so much time into trying to get the 2008 R2 version working. Maybe next time. Don't forget to add the extensions for VMLite (or whatever virtualization product you prefer).

    Set a fixed IP address on the internal-only NIC. Do not give it a gateway. Put the same IP address for the NIC and for the DNS server. This IP should be in a range that is never available on your public network; you will need all the addresses in the range available. See the previous post for the exact settings I used. I chose 10.97.230.1 as the server. The rest of the 10.97.230 range is what I will use later. For the curious, those numbers are based on elements of my home address. Not truly random, but good enough for this project. Do not bridge the network connections. I never allowed the cluster nodes direct access to any public network.

    Format the fixed VHD and leave it alone for now.

    Promote the VM to a Domain Controller. If you have never done this, don't worry. The only meaningful decision is what to call the new domain. I prefer a bogus name that does not correspond to a real Top-Level Domain (TLD): .com, .biz, .net, and .org are all TLDs that we know and love. I chose .test as the TLD since it is descriptive AND it does not exist in the real world. The domain is called MicroAD. This gives me MicroAD.Test as my domain. During the promotion process, you will be prompted to install DNS as part of the domain creation process. You want to accept this option. The installer will automatically assign this DNS server as the authoritative owner of the MicroAD.test DNS domain (not to be confused with the MicroAD.test Active Directory domain). For the rest of the DCPROMO process, just accept the defaults.

    Now let's make our IP address management easy. Add the DHCP role to the server. Add the server (10.97.230.1 in this case) as the default gateway to assign to DHCP clients. Here is where you have to be VERY careful and bind it ONLY to the internal NIC. Trust me, your network admin will NOT like an extra DHCP server "helping" out on her network. Go ahead and create a range of 10-20 IP addresses in your scope. You might find other uses for a pocket domain controller <cough> Mirroring </cough> than just for building a cluster. And clustering in SQL 2008 and Windows 2008 R2 fully supports DHCP addresses.

    Now we have three of the five key roles ready. Two more to go.

    Next comes file sharing. Since your cluster node VMs will not have access to any outside network, you have to have some way to get files into these VMs. I simply go to the root of C: and create a "Shared" folder. I then share it out and grant full control to "Everyone" on both the share and the underlying NTFS folder. This will be immensely useful for service packs, demo databases, and any other software that isn't packaged as an ISO that we can mount to the VM.

    Finally we need to create a block-level multi-connect storage device. The kind folks at StarWind Software (http://www.starwindsoftware.com/) graciously gave me a non-expiring demo license for expressly this purpose. Their iSCSI SAN software lets you create an iSCSI target from nearly any storage medium. Refreshingly, their product does exactly what they say it does. Thanks.

    Remember that 16 GB VHD file? That is where we are going to carve out our LUNs. I created an iSCSI folder off the root, just so I can keep everything organized. I then carved 5 ea. 2 GB iSCSI targets from that folder. I chose a fixed VHD for performance. I tried this earlier with a dynamically expanding VHD, but too many layers of abstraction and sparseness combined to make it unusable even for a demo. Stick with a fixed VHD so there is a one-to-one mapping between abstract and physical storage. If you read the previous post, you know what I named these iSCSI LUNs and why. Yes, I do have some space left over. Always leave yourself room for future growth or options.

    This gets us up to where we can actually build the nodes and install SQL. As with most clusters, the real work happens long before the individual nodes get installed and configured. At least it does if you want the cluster to be a true high-availability platform.

    Read the article

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System

    - by pinaldave
    I have recently been exploring the Elasticity and Scalability attributes of databases. You can see that in my earlier blog posts about NuoDB, where I wanted to look at Elasticity and Scalability concepts. The concepts are very interesting, and intriguing as well. I have discussed these concepts with my friend Joyti M and together we have come up with this interesting read. The goal of this article is to answer the following simple questions:

    - What is Elasticity?
    - What is Scalability?
    - How do ACID properties vary from NoSQL concepts?
    - What are the prevailing problems in current database system architectures?
    - Why is NuoDB an innovative and welcome change in the database paradigm?

    Elasticity

    This word in its original form is used in many different ways, and honestly it does do a decent job of holding things together over the years as a person grows and contracts. Within the tech world, and specifically related to software systems (databases, application servers), it has come to mean a few things: allow stretching of resources without reaching the breaking point (on demand). What are resources in this context? Resources are the usual suspects - RAM/CPU/IO/Bandwidth in the form of a container (a process or a bunch of processes combined as modules). When it is about increasing resources, the simplest idea which comes to mind is the addition of another container. Another container means adding a brand new physical node. When it is about adding a new node, there are two questions which come to mind:

    1) Can we add another node to our software system?
    2) If yes, does adding a new node cause downtime for the system?

    Let us assume we have added a new node; let us see what the new needs of the system are when a new node is added:

    - Balancing incoming requests to multiple nodes
    - Synchronization of a shared state across multiple nodes
    - Identification of "downstate" and resolution action to bring it to "upstate"

    Well, adding a new node has its advantages as well. Here are a few of the positive points:

    - Throughput can increase nearly horizontally across the nodes throughout the system
    - Response times of the application will improve as in-between layer interactions improve

    Now, let us put the above concepts in the perspective of a database. When we mention the term "running out of resources" or "application is bound to resources", the resources can be CPU, Memory or Bandwidth. The regular approach to "gain scalability" in the database is to look around for bottlenecks and increase the bottlenecked resource. When we have memory as a bottleneck, we look at the data buffers, locks, query plans or indexes. After a point even this is not enough, as there needs to be an efficient way of managing such a large workload on a "single machine" across memory- and CPU-bound (right kind of scheduling) workloads. We next move on to either read/write separation of the workload or functionality-based sharding so that we still have control of the individual parts. But this requires lots of planning and change in client systems in terms of knowing where to go/update/read, and for reporting applications to "aggregate the data" in an intelligent way. What we ideally need is an intelligent layer which allows us to do these things without us getting into managing, monitoring and distributing the workload.

    Scalability

    In the context of databases/applications, scalability means three main things:

    - Ability to handle normal loads without pressure, e.g. X users at the Y utilization of resources (CPU, Memory, Bandwidth) on the Z kind of hardware (4 processor, 32 GB machine with 15000 RPM SATA drives and 1 GHz network switch) with T throughput
    - Ability to scale up to an expected peak load, which is greater than normal load, with acceptable response times
    - Ability to provide acceptable response times across the system, e.g. response time in S milliseconds (or agreed-upon unit of measure) 90% of the time

    The Issue - Need of Scale

    In normal cases one can plan load testing to test out normal, peak, and stress scenarios to ensure specific hardware meets the needs. With help from hardware and software partners and best practices, bottlenecks can be identified and the requisite resources added to the system. Unfortunately this vertical scale is expensive and difficult to achieve, and most operational people need the ability to scale horizontally. This helps in getting better throughput, as there are physical limits to adding resources (Memory, CPU, Bandwidth and Storage) indefinitely. Today we have different options to achieve scalability:

    Read & Write Separation

    The idea here is to do actual writes to one store and configure slaves receiving the latest data with acceptable delays. Slaves can be used for balancing out reads. We can also explore functional separation or sharding as well: we can separate data operations by a specific identifier (e.g. region, year, month) and consolidate it for reporting purposes. For functional separation the major disadvantage is when the schema changes or the workload pattern changes. As the requirement grows, one still needs to deal with the need for scale in manual ways by providing an abstraction in the middle-tier code (a minimal sketch of such a read/write router appears at the end of this article).

    Using NoSQL solutions

    The idea is to flatten out the structures in general to keep all values which are retrieved together at the same store, and to provide a flexible schema. The issue with these stores is that they mostly compromise on consistency (no ACID guarantees) and one has to use a non-SQL dialect to work with the store. The other major issue is education with NoSQL solutions. Would one really want to make these compromises on the ability to connect and retrieve in a simple SQL manner and learn other skill sets? Or, for that matter, give up the ACID guarantee and start dealing with consistency issues?

    Hybrid Deployment - Mac, Linux, Cloud, and Windows

    One of the challenges today that we see across on-premise vs cloud infrastructure is a difference in abilities. Take for example SQL Azure - it is wonderful in its concepts of throttling (as it is a shared deployment) of resources and its ability to scale using federation. However, the same abilities are not available on premise. This is not a mistake, mind you - but a compromise of the sweet spot of workloads, customer requirements and operational SLAs which can be supported by the team. In today's world it is imperative that databases are available across operating systems - which are a commodity and used by developers of all hues.

    An Ideal Database Ability List

    - A system which allows a linear scale of the system (increase in throughput with reasonable response time) with the addition of resources
    - A system which does not compromise on the ACID guarantees and does not require developers to learn new paradigms
    - A system which does not force-fit a new way of interacting with the database by requiring a non-SQL dialect
    - A system which does not force-fit its mechanisms for providing availability across its various modules.
Well NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
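    As promised above, here is a minimal, hypothetical sketch of the kind of middle-tier read/write router described under Read & Write Separation. The class name and endpoint strings are invented for illustration; this is not NuoDB's API:

        // Hypothetical middle-tier router: writes go to the primary,
        // reads are balanced round-robin across replicas (which may lag).
        #include <atomic>
        #include <cstddef>
        #include <string>
        #include <vector>

        class ConnectionRouter {
            std::string primary_;
            std::vector<std::string> replicas_;
            std::atomic<std::size_t> next_{0};
        public:
            ConnectionRouter(std::string primary, std::vector<std::string> replicas)
                : primary_(std::move(primary)), replicas_(std::move(replicas)) {}

            // All writes must target the single authoritative store.
            const std::string& endpointForWrite() const { return primary_; }

            // Reads rotate across replicas; fall back to the primary if none exist.
            const std::string& endpointForRead() {
                if (replicas_.empty()) return primary_;
                return replicas_[next_++ % replicas_.size()];
            }
        };

        // Usage (illustrative endpoints):
        //   ConnectionRouter r("db-primary:5432", {"db-replica1:5432", "db-replica2:5432"});

    A real router would also need health checks and handling of replication lag; the point is only that, without an intelligent layer in the database itself, this abstraction ends up hand-written in the middle tier.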

    Read the article

  • Oracle Utilities Customer Care And Billing Supported Platforms

    - by Anthony Shorten
    An updated list of the supported platforms (for all tiers) for Oracle Utilities Customer Care And Billing V2.1.x, V2.2.x and V2.3.x is now available from My Oracle Support KB Id: 1123876.1. Please refer to this document and article for any clarification on specific platforms and related software supported for the above versions of Oracle Utilities Customer Care And Billing.

    Read the article

  • A Good Developer is So Hard to Find

    - by James Michael Hare
    Let me start out by saying I want to damn the writers of the Toughest Developer Puzzle Ever - 2. It is eating every last shred of my free time! But as I've been churning through each puzzle and marvelling at the brain teasers and trivia within, I began to think about interviewing developers and why it seems to be so hard to find good ones. The problem is, it seems like no matter how hard we try to find the perfect way to separate the chaff from the wheat, inevitably someone will get hired who falls far short of expectations, or someone who could have been an excellent team member will get passed over for missing a piece of trivia or a tricky brain teaser.

    In shops that are primarily software-producing businesses or other heavily IT-oriented businesses (Microsoft, Amazon, etc.) there often exists a much tighter bond between HR and the hiring development staff, because development is their life-blood. Unfortunately, many of us work in places where IT is viewed as a cost or just a means to an end. In these shops, too often, HR and development staff may work against each other due to differences in opinion as to what a good developer is or what one is worth. It seems that if you ask two different people what makes a good developer, often you will get three different opinions.

    With the exception of those shops that are purely development-centric (you guys have it much easier!), most other shops have management who have very little knowledge about the development process. Their view can often be that development is simply a skill that one learns, and that once it is acquired, that developer can produce widgets as good as the next, like workers on an assembly-line floor. On the other side, you have many developers who feel that software development is an art unto itself, and that a good coder is one who can create the purest design, or knows the most obscure of keywords, or can write the shortest-possible obfuscated piece of code. So is it a skill? An art? Or something entirely in between?

    Saying that software is merely a skill and one just needs to learn the syntax and tools would be akin to saying anyone who knows English and can use Word can write a 300-page book that is accurate, meaningful, and stays true to the point. This just isn't so. It takes more than mere skill to take words and form a sentence, join those sentences into paragraphs, and those paragraphs into a document. I've interviewed candidates who could answer obscure syntax and keyword questions and, once they were hired, could not code effectively at all. So development must be more than a skill.

    But on the other end, we have art. Is development an art? Is our end result to produce art? I can marvel at a piece of code - see it as concise and beautiful - and yet that code must perform some stated function with accuracy and efficiency and maintainability. None of these three things have anything to do with art, per se. Art is beauty for its own sake and is a wonderful thing. But if you apply that same thought to development, it just doesn't hold. I've had developers tell me that all that matters is the end result and that how you code it is entirely part of the art, and I couldn't disagree more. Yes, the end result, the accuracy, is the prime criterion to be met. But if code is not maintainable and efficient, it would be just as useless as a beautiful car that breaks down once a week or that gets 2 miles to the gallon. Yes, it may work in that it moves you from point A to point B and is pretty as hell, but if it can't be maintained or is not efficient, it's not a good solution. So development must be something less than art.

    In the end, I feel that development is a matter of craftsmanship. We use our tools and we use our skills and set about to construct something that satisfies a purpose and yet is also elegant and efficient. There is skill involved, and there is an art, but really it boils down to being able to craft code. Crafting code is far more than writing code. Anyone can write code if they know the syntax, but so few people can actually craft code that solves a purpose and craft it well. So this is what I want to find: I want to find code craftsmen! But how?

    I used to ask coding-trivia questions a long time ago, and many people still fall back on this. The thought is that if you ask the candidate some piece of coding trivia and they know the answer, it must follow that they can craft good code. For example:

    What C++ keyword can be applied to a class/struct field to allow it to be changed even from a const instance of that class/struct? (answer: mutable - see the short example at the end of this post)

    So what do we prove if a candidate can answer this? Only that they know what mutable means. One would hope that this would imply that they'd know how to use it, and more importantly when and if it should ever be used! But it rarely does! The problem with trivia questions is that you will either:

    - Approve a really good developer who knows what some obscure keyword is (good)
    - Reject a really good developer who never needed to use that keyword or is too inexperienced to know how to use it (bad)
    - Approve a really bad developer who googled "C++ Interview Questions" and studied like hell but can't craft (very bad)

    Many HR departments love these kinds of tests because they are short and easy to defend if a legal issue arises over hiring decisions. After all, it's easy to say a person wasn't hired because they scored 30 out of 100 on some trivia test. But unfortunately, you've eliminated a large part of your potential developer pool and possibly hired a few duds. There are times I've hired candidates who knew every trivia question I could throw at them and couldn't craft. And then there are times I've interviewed candidates who failed all my trivia but whom I took a chance on, and they were my best finds ever.

    So if not trivia, then what? Brain teasers? The thought is that these types of questions measure the thinking power of a candidate. The problem is, once again, you will either:

    - Approve a good candidate who has never heard the problem and can solve it (good)
    - Reject a good candidate who just happens not to see the "catch" because they're nervous, or because it may be really obscure (bad)
    - Approve a candidate who has studied enough interview brain teasers (once again, you can google 'em) to recognize the "catch" or who knows the answer already (bad)

    Once again, you're eliminating good candidates and possibly accepting bad candidates. In these cases, I think testing someone with brain teasers only tests their ability to answer brain teasers, not their ability to craft code.

    So how do we measure someone's ability to craft code? Here's a novel idea: have them code! Give them a computer and a compiler, or a whiteboard and a pen, or paper and pencil, and have them construct a piece of code. It just makes sense that if we're going to hire someone to code, we should actually watch them code.
    When they're done, we can judge them on several criteria:

    - Correctness - does the candidate's solution accurately solve the problem proposed?
    - Accuracy - is the candidate's solution reasonably syntactically correct?
    - Efficiency - did the candidate write or use the more efficient data structures or algorithms for the job?
    - Maintainability - was the candidate's code free of obfuscation and clever tricks that diminish readability?
    - Persona - are they eager and willing or aloof and egotistical? Will they work well within your team?

    It may sound simple, or it may sound crazy, but when I'm looking to hire a developer, I want to see them actually develop well-crafted code.
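    As promised, here is a minimal sketch of the mutable answer from the trivia question above; the class and its fields are invented purely for illustration:

        #include <string>

        class Account {
            std::string id_;
            mutable int lookupCount_ = 0;  // mutable: writable even through a const Account
        public:
            explicit Account(std::string id) : id_(std::move(id)) {}

            const std::string& id() const {
                ++lookupCount_;  // legal only because lookupCount_ is declared mutable
                return id_;
            }
            int lookups() const { return lookupCount_; }
        };

        // Usage: const Account a("ACME-42"); a.id(); a.id();  // a.lookups() == 2

    The const member function id() can still update lookupCount_ because the field is mutable; everything else about the object stays read-only, which is exactly the "when and if it should ever be used" nuance the trivia question fails to test.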

    Read the article

  • Mesa library vs Hardware accelerated OpenGL for my executable - it's just a linking problem?

    - by user827992
    Suppose I have a program that targets a specific OpenGL version, let's say 3.0. I want to produce one executable that supports software rendering with Mesa and another executable that supports a hardware-accelerated context. Can I use the same source code for both without expecting any issues? In other words, are the interfaces of these libraries the same for my linking purposes?
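    One hedged observation rather than an authoritative answer: both builds typically link against the same libGL interface, and you can ask the implementation at runtime which renderer you actually received. A small illustrative check, assuming a GL context has already been created by whatever windowing library you use:

        #include <GL/gl.h>
        #include <cstdio>

        // Call only after a valid OpenGL context exists (GLX/SDL/GLFW/etc.).
        void printRendererInfo() {
            std::printf("Vendor:   %s\n", reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
            std::printf("Renderer: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
            std::printf("Version:  %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));
        }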

    Read the article

  • Geographically limited / gradual release process

    - by daniel.sedlacek
    I am looking for more information on a gradual release process - that is, when you release a new version of software only to a certain set of end users, mostly geographically limited (or limited by the reach of a particular server). Google seems to be blind to this term, which indicates that's not what it's called. What's the name, then? EDIT: An example of what I mean is when Facebook rolled out the new image galleries: they were first visible to certain users only, then to the whole US, and then to the rest of the world.
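    For what it's worth, this pattern is commonly discussed under names like staged rollout, phased rollout, or canary release. Mechanically it can be as simple as a per-user gate like the following hypothetical sketch (all names, regions, and percentages are invented for illustration):

        #include <cstdint>
        #include <functional>
        #include <string>

        // Deterministic per-user gate: the same user always gets the same answer,
        // so the rollout is widened simply by raising rolloutPercent over time.
        bool featureEnabled(const std::string& country, uint64_t userId, int rolloutPercent) {
            if (country == "US") return true;  // region already fully rolled out
            return static_cast<int>(std::hash<uint64_t>{}(userId) % 100) < rolloutPercent;
        }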

    Read the article

  • Microsoft Fights Back Against Zeus Malware Ring

    According to a press release from Microsoft, the software giant, along with its partners, solicited the help of the U.S. Marshals on March 23 to seize Zeus command-and-control servers responsible for delivering malware updates, issuing commands, and stealing data in Lombard, Illinois, and Scranton, Pennsylvania. The active servers were seized on the premises of the two hosting companies before their owners could attempt to destroy the evidence. Microsoft was allowed to take over 800 domains used by the Zeus servers, and two IP addresses used to advance the operation were also dismantled. Microso...

    Read the article

  • Adobe Reader crashes immediately after starting

    - by Tanveer Hossain
    I'm running an Ubuntu 12.04 32-bit system. I installed Adobe Reader through the Software Center, but when I click on the icon to start acroread, it crashes immediately while showing the splash window. I've also tried running the command "acroread" in a terminal, but no gain. It doesn't even show any error message. It should be noted that, to solve the problem, I've installed the lsb module and ia32-libs. But my problem is not solved.

    Read the article

  • We have completed our 100th recording!

    - by van
    Well we did it.  We made our 100th recording.  It also had a record breaking attendance of over 100 attendees. So check it out, our 100th recording on Software Craftsmanship with Robert Martin. Thanks for everyone's help and support over the last few years. Zachariah Young http://virtualaltnet.com

    Read the article

  • ODI 12c - Installing ODI Studio

    - by David Allan
    Today the 12c release of the Oracle Data Integrator was made GA on OTN. Once you have downloaded and are running the installer, if you want to install the ODI Studio, ensure you select 'Enterprise Installation' as this is where the ODI Studio for 12.1.2 can be installed from. If you choose 'Standalone Installation' you will be hunting for the ODI studio software. So ensure you pick Enterprise Installation to get the ODI design studio. Once that's done you are ready to go!

    Read the article

  • Partner Webcast – More out of ODA with DB Options - 19 July 2012

    - by Thanos
    The Simple, Reliable, Affordable Path to High-Availability Databases

    Critical business data needs to be available 24/7 for users and customers, but it can be a struggle to find the time and resources to build a highly available database system that's reliable and affordable. That's why Oracle created the new Oracle Database Appliance - a complete package of software, server, storage, and networking. The Oracle Database Appliance integrates the world's most popular database - Oracle Database 11g - with system software, servers, storage and networking in a single box. Business gets the benefit of a reliable, secure and highly available database to support applications and maintain continuity - as well as groundbreaking ease of use. But that is not all: with support for all Oracle Database Options, the Oracle Database Appliance can be the ideal solution for many use cases. The benefits?

    - Unmatched performance, reliability & security for your data that's there when you need it - which is all the time.
    - Fast installation, simple deployment, easy management. Out of the box.
    - Significant cost savings & reduced risk and complexity compared to integrating all the elements yourself.
    - Ongoing lower total cost of ownership with multiple automated support, detection & correction functions that also save you time.

    Discover the Oracle Database Appliance value proposition and learn how to position and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency.

    Agenda:

    - Oracle Database & Engineered Systems Innovation
    - What's the Oracle Database Appliance?
    - Oracle Database Appliance Value Proposition
    - Oracle Database Appliance with Database Options
    - Oracle Database Appliance Partners Business

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Visit our ISV Migration Center blog regularly, or follow us @oracleimc to learn more about Oracle Technologies as well as upcoming partner webcasts and events.

    Read the article

  • Does using GCC specific builtins qualify as incorporation within a project?

    - by DavidJFelix
    I understand that linking to a program licensed under the GPL requires that you release the source of your program under the GPL as well, while the LGPL does not require this. The terminology of the (L)GPL is very clear about this: #include "gpl_program.h" means you'd have to license under the GPL, because you're linking to GPL-licensed code, while #include "lgpl_program.h" means you're free to license however you want, since the LGPL does not prohibit linking from differently licensed code. Now, my question is about what isn't clear:

    [begin question] GCC is GPL licensed, and compiling with GCC does not constitute "integration" into your program, as the GPL puts it. Does using builtin functions (which are specific to GCC) constitute "incorporation", even though you haven't explicitly linked to this GPL-licensed code? My intuition tells me that this isn't the intention, but legality isn't always intuitive. I'm not actually worried, but I'm curious whether this could be considered the case. [end question]

    [begin aside] The reason for my equivocation is that GCC builtins like __builtin_clzl() or __builtin_expect() are technically an API and could be implemented in another way. For example, many builtins were replicated by LLVM, and the argument could be made that the API is not implementation-specific to GCC. However, many builtins have no parallel; when compiled they will link GPL-licensed code in GCC and will not compile on other compilers. If you make the argument here that the API could be replicated by another compiler, couldn't you make that identical claim about any program you link to, so long as you don't distribute that source? I understand that I'm being a legal snake about this, but it strikes me as odd that the GPL isn't more specific. I don't see this as a reasonable ploy for proprietary software creators to bypass the GPL, as they'd have to bundle the GPL software to make it work, removing their plausible deniability. However, isn't it possible that, if builtins don't constitute linking, then open source proponents who oppose the GPL could simply write a BSD/MIT/Apache/Apple-licensed product that links to a GPL'd program and claim that they intend to write a non-GPL interface identical to the GPL one, preserving their BSD license until it's actually compiled? [end aside]

    Sorry for the aside; I didn't think many people would follow why I care about this if I'm not facing any legal trouble or implications. Don't worry too much about the hypotheticals there, I'm just extrapolating what either answer to my actual question could imply.
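    To make the aside's API argument concrete, here is a hedged sketch of the kind of shim it describes: the builtin is treated as an interface that another compiler (or plain C++) can satisfy. This only illustrates the portability point; it is not a legal opinion:

        #include <cstddef>

        // count_leading_zeros(): uses the GCC/Clang builtin when available,
        // otherwise a plain fallback loop - showing the builtin behaves like
        // an API with more than one possible implementation.
        inline int count_leading_zeros(unsigned long x) {
            const int bits = static_cast<int>(sizeof(unsigned long) * 8);
            if (x == 0) return bits;  // __builtin_clzl(0) is undefined, so guard it
        #if defined(__GNUC__)
            return __builtin_clzl(x);  // typically compiles down to a single instruction
        #else
            int n = 0;
            for (unsigned long mask = 1UL << (bits - 1); (x & mask) == 0; mask >>= 1) ++n;
            return n;
        #endif
        }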

    Read the article

  • Atlanta Laptop Repair

    Is your laptop running slow or crashing? To the average person laptop problems can be confusing. You do not know whether it's hardware or a software issue that is causing the problem. An experienced ... [Author: Steven Yaniz - Computers and Internet - March 30, 2010]

    Read the article

  • Yellow Dog Enterprise Linux for GPU computing

    The H Open: "The Japanese Fixstars Corporation, which specialises in software for the Cell processors, has announced the release of Yellow Dog Enterprise Linux (YDEL) 6.2 for CUDA, the first enterprise Linux OS optimised for GPU computing."

    Read the article

  • Good resource for business development techniques

    - by Morons
    I work for an IT consulting firm… As I progress in my career I (like most who work for IT firms) am spending more and more time participating in business development, usually as a technical expert. Can anyone recommend a good resource (or book) on business development, preferably targeting technology businesses? (I am NOT looking for "how to get leads"; I'm looking for "how to conduct a solid sales pitch / demo software" type stuff.)

    Read the article

  • Can Microsoft Build Appliances?

    - by andrewbrust
    Billy Hollis, my Visual Studio Live! colleague and fellow Microsoft Regional Director, said recently (and I am paraphrasing) that the computing world, especially on the consumer side, has shifted from building hardware and software that makes things possible to do, to building products and technologies that make things easy to do. Billy crystallized things perfectly, as he often does.

    In this new world of "easy to do," Apple has done very well and Microsoft has struggled. In the old world, customers wanted a Swiss Army Knife, with the most gimmicks and gadgets possible. In the new world, people want elegant cutlery. They may want cake cutters and utility knives too, but they don't want one device that works for all three tasks. People don't want tools, they want utensils. People don't want machines. They want appliances.

    Microsoft Appliances: They Do Exist

    Microsoft has built a few appliance-like devices. I would say Xbox 360 is an appliance. It's versatile, mind you, but it's the kind of thing you plug in, turn on and use, as opposed to set up, tune, and open up to upgrade the internals. Windows Phone 7 is an appliance too. It's a true smartphone, unlike Windows Mobile, which was a handheld computer with a radio stack. Zune is an appliance too, and a nice one. It hasn't attained much traction in the market, but that's probably because the seminal consumer computing appliance - the iPod - got there so much more quickly.

    In the embedded world, Mediaroom, Microsoft's set-top product for the cable industry (used by AT&T U-Verse and others), is an appliance. So is Microsoft's Sync technology, used in Ford automobiles. Even on the enterprise side, Microsoft has an appliance: SQL Server Parallel Data Warehouse Edition (PDW) combines Microsoft software with select OEMs' server, networking and storage hardware. You buy the appliance units from the OEMs, plug them in, connect them and go.

    I would even say that Bing is an appliance. Not in the hardware sense, mind you. But from the software perspective, it's a single-purpose product that you visit or run, use and then move on. You don't have to install it (except the iOS and Android native apps, where it's pretty straightforward), you don't have to customize it, you don't have to program it. Basically, you just use it.

    Microsoft Appliances that Should Exist

    But Microsoft builds a bunch of things that are not appliances. Media Center is not an appliance, and it most certainly should be. Instead, it's an app that runs on Windows 7. It runs full-screen, and you can use this configuration to conceal the fact that Windows is under it, but eventually something will cause you to abandon that masquerade (like Patch Tuesday).

    The next version of Windows Home Server won't, in my opinion, be an appliance either. Now that the Drive Extender technology is gone, and users can't just add and remove drives into and from a single storage pool, the product is much more like an IT server and less like an appliance-premised one. Much has been written about this decision by Microsoft. I'll just sum it up in one word: pity.

    Microsoft doesn't have anything remotely appliance-like in the tablet category, either. Until it does, it likely won't have much market share in that space either. And of course, the bulk of Microsoft's product catalog on the business side is geared to enterprise machines and not personal appliances.

    Appliance DNA: They Gotta Have It

    The consumerization of IT is real, because businesspeople are consumers too. They appreciate the fit and finish of appliances at home, and they increasingly feel entitled to have it at work too. Secure and reliable push email in a smartphone is necessary, but it isn't enough. People want great apps and a pleasurable user experience too. The full Microsoft Office product is needed at work, but a PC with a keyboard and mouse, or maybe a touch screen that uses a stylus (or requires really small fingers), to run Office isn't enough either. People want a flawless touch experience available for the times they want to read and take quick notes. Until Microsoft realizes this fully and internalizes it, it will suffer defeats in the consumer market and even setbacks in the business market. Think about how slow the Office upgrade cycle is... now imagine if the next version of Office had a first-class alternate touch UI, and consider the possible acceleration in adoption rates.

    Can Microsoft make the appliance switch? Can the appliance mentality become pervasive at the company? Can Microsoft hasten its release cycles dramatically and shed the "some assembly required" paradigm upon which many of its products are based? Let's face it, the chances that Microsoft won't make this transition are significant. But there are also encouraging signs, and they should not be ignored. The appliances we have already discussed, especially Xbox, Zune and Windows Phone 7, are the most obvious in this regard. The fact that SQL Server has an appliance SKU now is a more subtle but perhaps also more significant outcome, because that product sits so smack in the middle of Microsoft's enterprise stack. Bing is encouraging too, especially given its integrated travel, maps and augmented reality capabilities. As Bing gains market share, Microsoft has tangible proof that it can transform and win, even when everyone outside the company, and many within it, would bet otherwise.

    That Great Big Appliance in the Sky

    Perhaps the most promising (and evolving) proof points toward the appliance mentality, though, are Microsoft's cloud offerings - Azure and BPOS/Office 365. While the cloud does not represent a physical appliance (quite the opposite, in fact), its ability to make acquisition, deployment and use of technology simple for the user is absolutely an embodiment of the appliance mentality and spirit. Azure is primarily a platform-as-a-service offering; it doesn't just provide infrastructure. SQL Azure does likewise for databases. And Office 365 does likewise for SharePoint, Exchange and Lync. You don't administer, tune and manage servers; instead, you create databases or site collections or mailboxes and start using them. Upgrades come automatically, and it seems like releases will come more frequently. Fault tolerance and content distribution are just there. No muss. No fuss. You use these services; you don't have to set them up and think about them. That's how appliances work.

    To me, these signs point out that Microsoft has the full capability of transforming itself. But there's a lot of work ahead. Microsoft may say they're "all in" on the cloud, but the majority of the company is still oriented around its old products and models. There needs to be a wholesale cultural transformation in Redmond. It can happen, but product management, program management, the field and the executive ranks must unify in the effort. So must partners, and even customers. New leaders must rise up, and Microsoft must be able to see itself as a winner. If Microsoft does this, it could lock in decades of new success, and be a standard business school case study for doing so. If not, the company will have missed an opportunity, and may see its undoing.

    Read the article

  • Gnome Do does not autostart and save shortcuts

    - by Matt
    For some reason the autostart of Gnome-Do does not work in 11.10. I installed Gnome-Do via the Ubuntu Software Center, then changed the shortcut to launch Gnome-Do and marked the option within Gnome-Do to start it automatically. To verify the autostart, I checked whether it also appears in the startup applications (which it did). However, after every restart I have to start Gnome-Do manually via the Unity launcher and change the shortcut again.

    Read the article
