Search Results


  • How to change CapsLock key to produce "a"?

    - by Pit
    While typing I often hit the CapsLock key instead of the a key (QWERTZ keyboard). This is quite annoying because the moment I realise that I hit the wrong key, I have to delete multiple characters/lines of text and rewrite them in the right form. I am searching for a way to prevent this. I have found a possibility to disable the CapsLock key in Keyboard Layout Options, but in my case this would mean that instead of writing an a I would write nothing.
    Positive - I don't have to rewrite a whole line, but only one character.
    Negative - It's not that obvious that I hit the wrong key, as a missing character is less noticeable than a line of upper-case text.
    I would therefore prefer a way to map CapsLock to a, so that hitting CapsLock writes an a character.
    Positive - If I hit CapsLock instead of a, I get the output I actually wanted to type.
    Negative - If I hit CapsLock in any other context I will get an a character. As I don't ever intentionally use the CapsLock key, this would not really pose a problem. (I think, or does it?)
    My Question: So how do I change CapsLock to a? And is there any case where this could be dangerous or provoke unwanted behaviour?
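    For what it's worth, a minimal sketch of one way to do this on an X11-based desktop (treat it as an assumption to verify on your own system; the key may need a different name or keycode): put the following in ~/.Xmodmap and load it with xmodmap ~/.Xmodmap:

        ! stop CapsLock from acting as the lock modifier
        remove lock = Caps_Lock
        ! then make the key produce "a"
        keysym Caps_Lock = a

    Loading this from your session startup keeps it persistent; whether hitting the key elsewhere ever bites you is exactly the open question above.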

    Read the article

  • How can I strip down Ubuntu?

    - by Thomas Owens
    I'm trying to fix what I consider a bloated install of Ubuntu. When I install Ubuntu on a machine, I get things that I don't want - web browsers, office applications, media players, accessibility utilities, Ubuntu One, and so on. My goal is to create a way that I can have an install of Ubuntu that contains only the most minimal packages - the administrative tools and package manager, a GUI (my preference would be GNOME), a text editor, core drivers (video cards, network cards - wired and wireless, input devices), and anything else that I have to have to run a stable distribution. From there, I would like to pick and choose which packages I install to create my own customized system. After playing around with other distros like Arch and Slackware, I like how they provide a barebones install by default. However, I get trapped in "configuration hell" - right now, I tried moving away from Ubuntu and to Arch, but after spending 6 hours with it, I still don't have a usable system. It's half configured and I don't have any usable software packages to enable me to work. Is there anything available that can help me? Either something like the OpenSUSE builder that lets you choose applications and packages for the CD, an advanced installation mode where I can choose the packages to install and which to ignore, or a guide on how to strip Ubuntu down to its bare bones? And I suppose a natural follow-up to this is: once I have a stripped-down Ubuntu, will this affect updating at all? When Canonical releases the next version of Ubuntu, I don't want any bloatware reinstalled. And yes, most of the applications that come with Ubuntu, I simply don't use. Ever.
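    For reference, a hedged sketch of the closest existing path (package names are 10.04-era assumptions, not a tested recipe): install from the Ubuntu Minimal CD, which leaves a command-line-only base system, then add just a bare GNOME session:

        sudo apt-get install --no-install-recommends xorg gdm gnome-core

    The --no-install-recommends flag is what keeps the usual pile of recommended applications from coming along for the ride.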

    Read the article

  • Linux Journal Best Virtualization Solution Readers' Choice 2012

    - by Chris Kawalek
    I'm proud to report that in the latest issue of Linux Journal, their readers named Oracle VM VirtualBox the "Best Virtualization Solution" for 2012. We're excited to receive this honor and want to thank Linux Journal and their readers for recognizing us!  This is the latest award won by Oracle VM VirtualBox, following a 2011 Bossie Award (best open source software) from InfoWorld, a 2012 Readers' Choice award from Virtualization Review, and several others. These awards help us know that people are using Oracle VM VirtualBox in their day-to-day work and that it's really useful to them. We truly appreciate their (your!) support. If you already use Oracle VM VirtualBox, you will know all this. But, just in case you haven't tried it yet, here are a few reasons you should download it:
    It's free for personal use and open source.
    You can download it in minutes and start running multiple operating systems on your Windows PC, Mac, Oracle Solaris system, or Linux PC.
    It's fast and powerful, and easy to install and use.
    It has in-depth support for client technologies like USB, virtual CD/DVD, virtual display adapters with various flavors of 2D and 3D acceleration, and much more.
    If you've ever found yourself in a situation where you were concerned about installing a piece of software because it might be too buggy, or wanted to have a dedicated system to test things on, or wanted to run Windows on a Mac or Oracle Solaris on a PC (or hundreds of other combinations!), or didn't want to install your company's VPN software directly on your home system, then you should definitely give Oracle VM VirtualBox a try. Once you install it, you'll find a myriad of other uses, too. Thanks again to the readers of Linux Journal for selecting Oracle VM VirtualBox as the Best Virtualization Solution for 2012. If you'd like to read the whole article, you can purchase this month's issue over at the Linux Journal website. -Chris

    Read the article

  • Register now for the UK Windows Azure Self-paced Interactive Learning Course starting May 10th

    - by Eric Nelson
    [Suggested twitter tag #selfpacedazure] We (myself and David Gristwood) have been working in the UK to create a fantastic opportunity to get yourself up to speed on the Windows Azure Platform over a 6-week period starting May 10th – without ever needing to leave the comfort of your home/office.  The course is derived from the internal training Microsoft gives on Azure, which is both fun and challenging in equal parts – and we felt it was just too good to keep to ourselves! We will be releasing more details nearer the date, but hopefully the following is enough to convince you to register and … recommend it to a colleague or three :-) What we have produced is the "Microsoft Azure Self-paced Learning Course". This is a free, interactive, self-paced, technical training course covering the Windows Azure platform – Windows Azure, SQL Azure and the Azure AppFabric. The course takes place over a six-week period finishing on June 18th. During the course you will work from your own home or workplace, get involved via interactive Live Meeting sessions, watch online videos, work through hands-on labs, and research and complete weekly coursework assignments. The mentors and other attendees on the course will help you in your research and learning, and there are weekly Live Meetings where you can raise questions and interact with them. This is a technical course, aimed at programmers, system designers, and architects who want a solid understanding of the Microsoft Windows Azure platform; hence a prerequisite for this course is at least six months programming in the .NET framework and Visual Studio. Check out the full details of the event or go straight to registration. The course outline is:
    Week 1 - Windows Azure Platform
    Week 2 - Windows Azure Storage
    Week 3 - Windows Azure Deep Dive and Codename "Dallas"
    Week 4 - SQL Azure
    Week 5 - Windows Azure Platform AppFabric Access Control
    Week 6 - Windows Azure Platform AppFabric Service Bus
    If you have any questions about the course and its suitability, please email [email protected].

    Read the article

  • Give Us Your Thoughts About Oracle Designs

    - by Tom Caldecott-Oracle
    Participate in the Onsite Usability Lab at Oracle OpenWorld 2014
    Want to impress your colleagues? Your manager? Your mom? Imagine being able to say to them, "So, did I ever tell you about the time I helped Oracle design some of their hot applications?" Yes, that opportunity is coming up—at Oracle OpenWorld.  The Oracle Applications User Experience team will host an onsite usability lab at the 2014 conference. You can participate and give us your thoughts about proposed designs for Oracle Human Capital Management Cloud and Oracle Sales Cloud, Oracle Fusion Applications for procurement and supply chain, Oracle E-Business Suite and PeopleSoft applications, social relationship management, BI applications, Oracle Fusion Middleware, and more.  Your feedback will directly affect the usability of Oracle applications, making them intuitive and easy to use. You'll make a difference. And that should score you points with peers, friends, and family. Of course, for your mom, first you'll probably have to explain to her again what you do for a living. If you're interested in participating, you must sign up in advance. Space is limited. Participation requires your company or organization to have a Customer Participation Confidentiality Agreement (CPCA) on file. Don't have one? Let us know, and we'll start the process. Sign up now for the onsite usability lab.
    When? Monday, September 29 - Wednesday, October 1, 2014
    Where? InterContinental San Francisco
    Want to know about other Oracle Applications User Experience activities at Oracle OpenWorld? Visit UsableApps.

    Read the article

  • Are copyright notices really required?

    - by Alasdair
    Ever since I made my first web page 13 years ago I have followed the pattern of showing a copyright notice in the footer of each page. Over the years the format of this notice has changed in the following way:
    Copyright © <NAME> yyyy. All rights are reserved.
    Copyright © <NAME> yyyy
    © yyyy <NAME>
    © <NAME>
    This has generally mirrored the format used by Google. However, I recently noticed that they no longer display a copyright notice on their home page, nor have one in their source code/meta tags. I see they still display it on most (if not all) other pages. I understand that Google are very keen to keep the word count down on their homepage, which could be the reason for this sacrifice, but my question is more general and relates to all websites. Since I've always just done it out of habit, I'm hoping someone can explain if/when a copyright notice is actually required to protect your content and rights. Also, when it is required, is there a format the notice must adhere to in order to be valid?

    Read the article

  • Ask the Readers: What Tools Do You Use to Score Great Deals Online?

    - by Jason Fitzpatrick
    The internet has made scoring awesome deals a cinch—but only if you have the right tools and know where to look. This week we want to hear about your favorite tools for scoring the deepest discounts during your online shopping adventures. What we’re most interested in is the tools you use: browser plugins, bookmarklets, and other tools that help you stay on top of price drops and other deal-related information. So let’s hear about it in the comments! What tools do you use to score great deals online? We’ll read all your comments, gather quotes, and share the collective wisdom of the How-To Geek crowd in a follow-up What You Said post on Friday.

    Read the article

  • How do you decide what kind of database to use?

    - by Jason Baker
    I really dislike the name "NoSQL", because it isn't very descriptive. It tells me what the databases aren't, whereas I'm more interested in what the databases are. I think this category really encompasses several categories of database. I'm just trying to get a general idea of what job each particular database is the best tool for. A few assumptions I'd like to make (and would ask you to make):
    Assume that you have the capability to hire any number of brilliant engineers who are equally experienced with every database technology that has ever existed.
    Assume you have the technical infrastructure to support any given database (including available servers and sysadmins who can support said database).
    Assume that each database has the best support possible for free.
    Assume you have 100% buy-in from management.
    Assume you have an infinite amount of money to throw at the problem.
    Now, I realize that the above assumptions eliminate a lot of valid considerations that are involved in choosing a database, but my focus is on figuring out what database is best for the job on a purely technical level. So, given the above assumptions, the question is: what jobs are each database (including both SQL and NoSQL) the best tool for, and why?

    Read the article

  • I Blame SNMP!

    - by brendonpage
    Anyone who has been reading my blog would have noticed that I have deviated slightly from my original post plan! This post was meant to be on uploading files in Silverlight, so what happened, you may ask? Well, last weekend I had some friends over for a LAN and one of them brought along a managed switch, which had just been purchased for work. He proceeded to show me how cool it was, how he planned on improving his work network, and how it can be monitored remotely via SNMP. After this explanation he started to google for a free SNMP graphing tool. After a few hours of hearing disgruntled mutterings from him I asked what was wrong, and he proceeded to rant about how he couldn’t find any tools that suited his needs. It was at this point I thought the most dangerous thing a programmer can ever think, “I wonder how hard it would be to make one”; of course the answer at the time is always “It can’t be that hard”, and so started my journey into SNMP. I am still in the early stages of this journey so I don’t have too much to report yet, but once I have finished the first version of my SNMP graphing tool I will definitely be posting about it! For now, if there are any of you who are interested in doing any SNMP development in C# I would recommend looking at the #SNMP project on CodePlex (http://sharpsnmplib.codeplex.com/); it is the SNMP library I have decided to use and thus far it works beautifully.
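    In case it helps anyone evaluating the library, here is a minimal sketch of an SNMP GET along the lines of the project's basic sample (the agent address, community string, and exact method signature are assumptions to verify against the current #SNMP documentation):

        using System;
        using System.Collections.Generic;
        using System.Net;
        using Lextm.SharpSnmpLib;
        using Lextm.SharpSnmpLib.Messaging;

        class SnmpGetDemo
        {
            static void Main()
            {
                // query sysDescr.0 from a hypothetical agent at 192.168.1.2
                var result = Messenger.Get(VersionCode.V1,
                    new IPEndPoint(IPAddress.Parse("192.168.1.2"), 161),
                    new OctetString("public"),   // community string
                    new List<Variable> { new Variable(new ObjectIdentifier("1.3.6.1.2.1.1.1.0")) },
                    3000);                       // timeout in milliseconds

                foreach (var variable in result)
                    Console.WriteLine(variable.Data);
            }
        }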

    Read the article

  • What You Said: Desktop vs. Web-based Email Clients

    - by Jason Fitzpatrick
    We clearly tapped into a subject you all have a strong opinion about with this week’s Ask the Readers post; read on to see how your fellow readers manage their email on, off, and across desktops and devices. Earlier this week we asked you to share your email workflow and you all responded in force. TusconMatt doesn’t miss desktop clients one bit: Switched to Gmail years ago and never looked back. No more losing my emails and contacts if my HDD crashes or when I reinstall. No more frustration with not being able to access an email on the road because it downloaded to my drive and deleted from the server. No more mailbox full messages because I left messages on the server to avoid the above problem! I love having access to all emails from anywhere on any platform and don’t think I could ever go back to a dedicated email client.

    Read the article

  • Many Different Things Rolled into a Ball

    - by MOSSLover
    Yeah I know I don’t blog much anymore, because life has taken me places that don’t involve the interwebs, unfortunately.  I am in the midst of planning two events, starting a not-for-profit, creating more sessions for various conferences, submitting to various conferences, working a 40-hour-a-week job, and attempting to hang out with boyfriend/friends/family.  So you can see that list does not include this blog; sadly, that’s how it goes sometimes.  The bottom items take priority over any of the top ones.  I haven’t seen St. Louis in a while and I get to go back.  I was gone from home for MVP Summit and Best Practices Conference, so the boyfriend and cat didn’t get to see me either for a bit.  Then you have to add in the whole broken-toilet fiasco this week.  Maintenance really thought it would be cool to turn off the ability to flush.  I mean, who does that?  Then when we called the owner he came by and turned it on, and we figured it was an accident, because, well, the next day no one came by to tell us there was a leak.  It was all kinds of strangeness and involved me running to other people’s toilets.  As Dan Usher would say, I was a sad panda for a few days.  So I guess I wanted to post a few thoughts here just because I can.  I do not like multiple content editor webparts embedded with HTML files in numerous pages doing the same thing.  I will tell you why I don’t like these particular webparts and the way they are being used.  First off, if you have a bunch of pages with script includes, it’s about time you just dumped them into the masterpage.  Why bother finding all 20 pages and changing those pages when you can just use a single masterpage that already exists? The other thing that is bothering me these days is screen scraping.  Just don’t do it, because in 2010 you will find the UI is substantially slower.  I understand you are new and you have no idea what to do.  You are also using 2007, am I right?  So then you need to go to codeplex.com and type in a search for SPServices.  Download it, use it, love it, and then have its babies (well, maybe don’t go so far; this is not the GRID in Tron). If you have a ton of constants in your code, why did you not go in and create a webpart with a bunch of properties and/or link to a configuration list hidden in the browser?  This type of property and list could help you out in the long run.  The power users and administrators can now change the control without you having to compile it over and over again.  It’s good stuff.  Also, you can change the control without compiling it, especially in 2007 where you have to do a farm solution.  In 2010 you can do a sandbox solution I guess, but shouldn’t you make it as easy and supportable as possible for other users? In conclusion, I’m an angry person when it comes to viewing something repeatedly and analyzing it in a system.  Now we will move on to the next topic…MVP Summit…So yeah, I can’t really talk about particulars, but I can talk about my experience as a person.  Don’t build something up to be cooler than it is only to be dropped from your 10,000-foot perch.  My experience was great, but the content overall left something to be desired.  It’s ok; I got to meet a lot of people I would not have met if I had not gone.  Some of it was surreal, such as product group members showing up and talking to us.  It was pretty neat.  Plus I never had the chance to get to that mythical MS Office in Redmond.  Prior to Summit, it was like Rainbow Brite’s unicorn taunting me on television when I was a kid.
So I guess with all that said I give it a B.  It was awesome in some way, but lacking in other ways.  The cool part is that I got to go.  Would I have lived without going? Yes, but it was still cool. I could prattle on about other things and make this post massive, but I’m going to pass and give myself a piece of Sunday to play Rockband and do 800 other things.  I hope the two of you who read this blog are well.  I’ll catch you all at another juncture.  Have a good weekend and varying holidays in between. Technorati Tags: SharePoint,MVP Summit,JQuery,Javascript

    Read the article

  • Debugging Windows Service Timeout Error 1053

    - by Joe Mayo
    If you ever receive an Error 1053 for a timeout when starting a Windows Service you've written, the underlying problem might not have anything to do with a timeout.  Here's the error so you can compare it to what you're seeing:

    ---------------------------
    Services
    ---------------------------
    Windows could not start the Service1 service on Local Computer.

    Error 1053: The service did not respond to the start or control request in a timely fashion.
    ---------------------------
    OK
    ---------------------------

    Searching the Web for a solution to this error in your Windows Service will result in a lot of wasted time and advice that won't help.  Sometimes you might get lucky if your problem is exactly the same as someone else's, but this isn't always the case.  One of the solutions you'll see has to do with a known error on Windows Server 2003 that's fixed by a patch to the .NET Framework 1.1, which won't work.  As I write this blog post, I'm using the .NET Framework 4.0, which is a tad bit past that timeframe. Most likely, the basic problem is that your service is throwing an exception that you aren't handling.  To debug this, wrap your service initialization code in a try/catch block and log the exception, as shown below:

    using System;
    using System.Diagnostics;
    using System.ServiceProcess;

    namespace WindowsService
    {
        static class Program
        {
            static void Main()
            {
                try
                {
                    ServiceBase[] ServicesToRun;
                    ServicesToRun = new ServiceBase[]
                    {
                        new Service1()
                    };
                    ServiceBase.Run(ServicesToRun);
                }
                catch (Exception ex)
                {
                    EventLog.WriteEntry("Application", ex.ToString(), EventLogEntryType.Error);
                }
            }
        }
    }

    After you uninstall the old service, redeploy the service with modifications, and receive the same error message as above, look in the Event Viewer/Application logs.  You'll see what the real problem is, which is the underlying reason for the timeout. Joe
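    One hedged addition, not from the original post: if the exception is thrown inside Service1's OnStart rather than in Main, a similar wrapper there logs it the same way (rethrowing so the Service Control Manager still sees the failure):

        protected override void OnStart(string[] args)
        {
            try
            {
                // actual service initialization goes here
            }
            catch (Exception ex)
            {
                // log the real problem, then rethrow so startup still fails visibly
                EventLog.WriteEntry("Application", ex.ToString(), EventLogEntryType.Error);
                throw;
            }
        }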

    Read the article

  • Applications: The mathematics of movement, Part 1

    - by TechTwaddle
    Before you continue reading this post, a suggestion: if you haven’t read “Programming Windows Phone 7 Series” by Charles Petzold, go read it. Now. If you find 150+ pages a little too long, at least go through Chapter 5, Principles of Movement, especially the section “A Brief Review of Vectors”. This post is largely inspired by that chapter. At this point I assume you know what vectors are, how they are represented using the pair (x, y), what a unit vector is, and, given a vector, how you would normalize the vector to get a unit vector. Our task in this post is simple: a marble is drawn at a point on the screen, the user clicks at a random point on the device, say (destX, destY), and our program makes the marble move towards that point and stop when it is reached. The tricky part of this task is the word “towards”; it adds a direction to our problem. Making a marble bounce around the screen is simple, all you have to do is keep incrementing the X and Y co-ordinates by a certain amount and handle the boundary conditions. Here, however, we need to find out exactly how to increment the X and Y values, so that the marble appears to move towards the point where the user clicked. And this is where vectors can be so helpful. The code I’ll show you here is not ideal. We’ll be working with C# on Windows Mobile 6.x, so there is no built-in vector class that I can use, though I could have written one and done all the math inside the class. I think that is incidental to the actual problem we are trying to solve and can be done pretty easily once you know what’s going on behind the scenes. In other words, this is an excuse for me being lazy. The first approach uses the function Atan2() to solve the “towards” part of the problem. Atan2() takes a point (x, y) as input, as Atan2(y, x), note that y goes first, and then it returns an angle in radians. What angle, you ask? Imagine a line from the origin (0, 0) to the point (x, y). The angle which Atan2 returns is the angle the positive X-axis makes with that line, measured clockwise. The figure below makes it clear; wiki has good details about Atan2(), give it a read. The pair (x, y) also denotes a vector: a vector whose magnitude is the length of that line, which is Sqrt(x*x + y*y), and whose direction is θ, as measured from the positive X axis clockwise. If you’ve read that chapter from Charles Petzold’s book, this much should be clear. Now the Sine and Cosine of the angle θ are special. Cosine(θ) divides x by the vector’s length (adjacent by hypotenuse), thus giving us a unit vector along the X direction. And Sine(θ) divides y by the vector’s length (opposite by hypotenuse), thus giving us a unit vector along the Y direction. Therefore the vector represented by the pair (cos(θ), sin(θ)) is the unit vector (or normalization) of the vector (x, y). This unit vector has a length of 1 (remember sin²(θ) + cos²(θ) = 1), and a direction which is the same as the vector (x, y). Now if I multiply this unit vector by some amount, then I will always get a point which is a certain distance away from the origin, but, more importantly, the point will always be on that line. For example, if I multiply the unit vector by the length of the line, I get the point (x, y). Thus, all we have to do to move the marble towards our destination point is to multiply the unit vector by a certain amount each time and draw the marble, and the marble will magically move towards the click point. Now time for some code.
    The application uses a timer-based frame draw method to draw the marble on the screen. The timer is disabled initially, and whenever the user clicks on the screen, the timer is enabled. The callback function for the timer follows the standard Update and Draw cycle.

    private double totLenToTravelSqrd = 0;
    private double startPosX = 0, startPosY = 0;
    private double destX = 0, destY = 0;

    private void Form1_MouseUp(object sender, MouseEventArgs e)
    {
        destX = e.X;
        destY = e.Y;
        double x = marble1.x - destX;
        double y = marble1.y - destY;
        //calculate the total length to be travelled
        totLenToTravelSqrd = x * x + y * y;
        //store the start position of the marble
        startPosX = marble1.x;
        startPosY = marble1.y;
        timer1.Enabled = true;
    }

    private void timer1_Tick(object sender, EventArgs e)
    {
        UpdatePosition();
        DrawMarble();
    }

    The Form1_MouseUp() method is called whenever the user touches and releases the screen. In this function we save the click point in destX and destY; this is the destination point for the marble, and we also enable the timer. We store a few more values which we will use in the UpdatePosition() method to detect when the marble has reached the destination and stop the timer. So we store the start position of the marble and the square of the total length to be travelled. I’ll leave out the term ‘sqrd’ when speaking of lengths from now on. The timeout interval of the timer is set to 40ms, thus giving us a frame rate of about 25fps. In the timer callback, we update the marble position and draw the marble. We know what DrawMarble() does, so here we’ll only look at how UpdatePosition() is implemented:

    private void UpdatePosition()
    {
        //the vector (x, y)
        double x = destX - marble1.x;
        double y = destY - marble1.y;
        double incrX = 0, incrY = 0;
        double distanceSqrd = 0;
        double speed = 6;

        //distance between destination and current position, before updating marble position
        distanceSqrd = x * x + y * y;

        double angle = Math.Atan2(y, x);

        //Cos and Sin give us the unit vector; speed (6) is the value we use to magnify
        //the unit vector along the same direction
        incrX = speed * Math.Cos(angle);
        incrY = speed * Math.Sin(angle);
        marble1.x += incrX;
        marble1.y += incrY;

        //check for bounds
        if ((int)marble1.x < MinX + marbleWidth / 2)
        {
            marble1.x = MinX + marbleWidth / 2;
        }
        else if ((int)marble1.x > (MaxX - marbleWidth / 2))
        {
            marble1.x = MaxX - marbleWidth / 2;
        }

        if ((int)marble1.y < MinY + marbleHeight / 2)
        {
            marble1.y = MinY + marbleHeight / 2;
        }
        else if ((int)marble1.y > (MaxY - marbleHeight / 2))
        {
            marble1.y = MaxY - marbleHeight / 2;
        }

        //distance between destination and current point, after updating marble position
        x = destX - marble1.x;
        y = destY - marble1.y;
        double newDistanceSqrd = x * x + y * y;

        //length from start point to current marble position
        x = startPosX - (marble1.x);
        y = startPosY - (marble1.y);
        double lenTraveledSqrd = x * x + y * y;

        //check for end conditions
        if ((int)lenTraveledSqrd >= (int)totLenToTravelSqrd)
        {
            System.Console.WriteLine("Stopping because destination reached");
            timer1.Enabled = false;
        }
        else if (Math.Abs((int)distanceSqrd - (int)newDistanceSqrd) < 4)
        {
            System.Console.WriteLine("Stopping because no change in Old and New position");
            timer1.Enabled = false;
        }
    }

    Ok, so in this function, first we subtract the current marble position from the destination point to give us a vector. The first three lines of the function construct this vector (x, y). The vector (x, y) has the same length as the line from (marble1.x, marble1.y) to (destX, destY) and is in the direction pointing from (marble1.x, marble1.y) to (destX, destY). Note that marble1.x and marble1.y denote the center point of the marble. Then we use Atan2() to get the angle which this vector makes with the positive X axis, and use Cosine() and Sine() of that angle to get the unit vector along that same direction. We multiply this unit vector by 6 to get the values by which the position of the marble should be incremented. This variable, speed, can be experimented with and determines how fast the marble moves towards the destination. After this, we check for bounds to make sure that the marble stays within the screen limits, and finally we check for the end condition and stop the timer. The end condition has two parts to it. The first case is the normal case, where the user clicks well inside the screen. Here, we stop when the total length travelled by the marble is greater than or equal to the total length to be travelled. Simple enough. The second case is when the user clicks on the very corners of the screen. Like I said before, the values marble1.x and marble1.y denote the center point of the marble. When the user clicks on the corner, the marble moves towards the point, and after some time tries to go outside of the screen; this is when the bounds checking comes into play and corrects the marble position so that the marble stays inside the screen. In this case the marble will never travel a distance of totLenToTravelSqrd, because of the correction in its position. So here we detect the end condition when there is not much change in the marble's position. I use the value 4 in the second condition above. After experimenting with a few values, 4 seemed to work okay. There is a small thing missing in the code above. In the normal case, case 1, when the update method runs for the last time, the marble position overshoots the destination point. This happens because the position is incremented in steps (which are not small enough), so in this case too we should have corrected the marble position, so that the center point of the marble sits exactly on top of the destination point. I'll add this later and update the post. This has been a pretty long post already, so I'll leave you with a video of how this program looks while running. Notice in the video that the marble moves like a bot, without any grace whatsoever. And that is because the speed of the marble is fixed at 6. In the next post we will see how to make the marble move a little more elegantly. And also, if Atan2(), Sine() and Cosine() are a little too much to digest, we'll see how to achieve the same effect without using them, maybe in the post after that. Ciao!
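    As a hedged preview of that trig-free approach (a sketch reusing the variable names from above, not code from the post): dividing the direction vector by its own length gives the same unit vector directly, since dx/len and dy/len are exactly cos(θ) and sin(θ):

        double dx = destX - marble1.x;
        double dy = destY - marble1.y;
        double len = Math.Sqrt(dx * dx + dy * dy);  // length of the direction vector
        if (len > 0)
        {
            // no Atan2/Sin/Cos needed - (dx/len, dy/len) is already the unit vector
            marble1.x += speed * (dx / len);
            marble1.y += speed * (dy / len);
        }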

    Read the article

  • Recovering from 'grub rescue>' crash

    - by DocSalvage
    I did a dumb thing... I forgot that Ubuntu 10.04 (Lucid) switched to GRUB 2, which puts a ton of *.mod files (GRUB modules) in /boot/grub. I thought they were soundtrack files put there erroneously and moved them. Needless to say, the next reboot was traumatic. I was presented with something I had no memory of ever seeing... a 'grub rescue' prompt. With the help of how-to-fix-error-unknown-filesystem-grub-rescue, however, I was able to recover. I discovered that grub rescue does not have 'cd', 'cp' or any other filesystem commands except its own variation of 'ls'. So first I had to find the partition with the /boot directory containing vmlinuz and the other boot image files (failed attempts not shown):

    grub rescue> ls
    (hd0,4) (hd0,3) (hd0,2) (hd0,1)
    grub rescue> ls (hd0,2)/boot
    ... grub ... initrd.img-2.6.32-33-generic ... vmlinuz-2.6.32-33-generic

    Then manually boot from the 'grub rescue' prompt (no command history either!):

    grub rescue> set root=(hd0,2)/boot
    grub rescue> insmod linux
    grub rescue> linux (hd0,2)/boot/vmlinuz-2.6.32-33-generic
    grub rescue> initrd (hd0,2)/boot/initrd.img-2.6.32-33-generic
    grub rescue> boot

    This boots and crashes to the BusyBox prompt, which DOES have some rudimentary filesystem commands. There I moved the *.mod files back to the /boot/grub directory:

    busybox> cd /boot
    busybox> mv mod/* grub
    busybox> reboot

    The reboot was successful, but that was a lot of work. Is there an easier way?
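    For what it's worth, a hedged sketch of the usual shortcut once the system boots again (the device name is an assumption; substitute your boot disk): reinstalling GRUB regenerates everything under /boot/grub in one step, so the manual file shuffle isn't needed:

    sudo grub-install /dev/sda   # /dev/sda is assumed - point this at your boot disk
    sudo update-grub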

    Read the article

  • Content Flood and Frustration? ==> Content Salvation via WebCenter

    - by Michael Snow
    If you are still stuck on Documentum and somehow have missed hearing about our Move-Off Documentum program, here’s a refresher on the basics to help save you from your ongoing frustration: check out Capitalizing on Content. How much content have you pushed around over email, through collaboration, and via social channels this week? Have you been engaged with a process that has been smooth, problem-free, and leaves you with a lasting impression of a great user experience? How are you managing your online presence to ensure that your customers, partners, employees and potential future customers are experiencing a consistently engaging web experience? You really can’t do this without a solid Web Experience Management platform. Learn more from the CITO Research White Paper: Creating a Successful and Meaningful Customer Experience on the Web. If you'd like to catch up on what Oracle WebCenter has been doing lately, check out the latest recording now available OnDemand: March 2012 Quarterly Customer Update Webcast. Amazing how much can be done in a day on the internet. It’s even more impressive to see this presented in an infographic like the one below by mbaonline.com. How much longer can you manage the ever-increasing volumes of content without some technology assistance?  Created by: MBAOnline.com

    Read the article

  • What Keeps You from Changing Your Public IP Address and Wreaking Havoc on the Internet?

    - by Jason Fitzpatrick
    What exactly is preventing you (or anyone else) from changing their IP address and causing all sorts of headaches for ISPs and other Internet users? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
    The Question
    SuperUser reader Whitemage is curious about what’s preventing him from wantonly changing his IP address and causing trouble: An interesting question was asked of me and I did not know what to answer. So I’ll ask here. Let’s say I subscribed to an ISP and I’m using cable internet access. The ISP gives me a public IP address of 60.61.62.63. What keeps me from changing this IP address to, let’s say, 60.61.62.75, and messing with another consumer’s internet access? For the sake of this argument, let’s say that this other IP address is also owned by the same ISP. Also, let’s assume that it’s possible for me to go into the cable modem settings and manually change the IP address. Under a business contract where you are allocated static addresses, you are also assigned a default gateway, a network address and a broadcast address. So that’s 3 addresses the ISP “loses” to you. That seems very wasteful for dynamically assigned IP addresses, which the majority of customers are. Could they simply be using static arps? ACLs? Other simple mechanisms? Two things to investigate here: why can’t we just go around changing our addresses, and is the assignment process as wasteful as it seems?
    The Answer
    SuperUser contributor Moses offers some insight: Cable modems aren’t like your home router (i.e., they don’t have a web interface with simple point-and-click buttons that any kid can “hack” into). Cable modems are “looked up” and located by their MAC address by the ISP, and are typically accessed by technicians using proprietary software that only they have access to, that only runs on their servers, and therefore can’t really be stolen. Cable modems also authenticate and cross-check settings with the ISP’s servers. The server has to tell the modem whether its settings (and location on the cable network) are valid, and simply sets the modem to whatever the ISP has configured for it (bandwidth, DHCP allocations, etc.). For instance, when you tell your ISP “I would like a static IP, please.”, they allocate one to the modem through their servers, and the modem allows you to use that IP. Same with bandwidth changes, for instance. To do what you are suggesting, you would likely have to break into the servers at the ISP and change what it has set up for your modem. Could they simply be using static arps? ACLs? Other simple mechanisms? Every ISP is different, both in practice and how close they are with the larger network that is providing service to them. Depending on those factors, they could be using a combination of ACL and static ARP. It also depends on the technology in the cable network itself. The ISP I worked for used some form of ACL, but that knowledge was a little beyond my paygrade. I only got to work with the technician’s interface and do routine maintenance and service changes. What keeps me from changing this IP address to, let’s say, 60.61.62.75 and mess with another consumer’s internet access? Given the above, what keeps you from changing your IP to one that your ISP hasn’t specifically given to you is a server that is instructing your modem what it can and can’t do.
    Even if you somehow broke into the modem, if 60.61.62.75 is already allocated to another customer, then the server will simply tell your modem that it can’t have it. David Schwartz offers some additional insight, with a link to a white paper for the really curious: Most modern ISPs (last 13 years or so) will not accept traffic from a customer connection with a source IP address they would not route to that customer were it the destination IP address. This is called “reverse path forwarding”. See BCP 38. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
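    As a footnote to the BCP 38 reference above (an illustration, not from the original thread): on Cisco IOS, for example, strict unicast reverse path forwarding is enabled per interface with something like:

        interface GigabitEthernet0/0
         ip verify unicast source reachable-via rx

    This drops packets whose source address is not reachable back through the interface they arrived on, which is exactly the check that makes a spoofed source address useless.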

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto is more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways. If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:
    Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
    These interdependencies are not obvious to the make utility, and can lead to races.
    They are not obvious to the human reader, who may therefore not realize that they exist, and break them.
    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:
    It requires a long list of exceptions to silence our normal unused proto item error checking.
    In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
    Mixing cruft with valuable items makes it hard to discern which is which.
    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special-case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long-term, case-by-case project, but the overall trend is clear.
    The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking
    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors:
    Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
    Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero-tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated
    7021198 ld option to warn when link accesses a library via default path
    PSARC/2011/068 ld -z assert-deflib option
    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:
    -z assert-deflib=[libname]
    Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library:
    ld ... -z assert-deflib=libc.so ...
    -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use. Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with
    7021896 Prevent OSnet from accidentally linking to build system
    which integrated into snv_162 (March 2011).
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them.
    Conclusions
    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Exalytics Increases Customer Revenue, and Saves Time, Risk & Cost

    - by Mike.Hallett(at)Oracle-BI&EPM
    We are getting some great proof-point stories now from our customers who are succeeding with the Exalytics in-memory system for OBI and Essbase. See below for some recent testimony:
    San Diego Unified School District Harnesses Attendance, Procurement, and Operational Data with Oracle Exalytics, Generating $4.4 Million in Savings: according to an independent assessment by Mainstay Salire, the district is on track to achieve substantial benefits from the Oracle Exalytics solution, including an $8.25 million increase in attendance revenue, $75,000 a year in operational efficiency savings, and $1 million in hardware cost avoidance.
    NilsonGroup chooses the Oracle Exalytics In-Memory Machine as its solution to access critical data and keep its stores competitive with real-time Mobile BI: it took only “3 days to get up and running” with Exalytics. Video
    Nykredit, in the Danish financial sector, describes their experiences from testing the Exalytics Business Intelligence Machine: “it was up and running within 4 days”, with “more intuitive dashboards”, “up to 70x better performance”, and “cheaper maintenance and lower total cost of ownership”. Video
    Sodexo chose Oracle Exalytics as their business analytics platform, accelerating Essbase performance “more than 8x” for more than 2,000 Excel add-in users and “significantly changing how people in information management now deal with data”. Video
    Polk, Savvis, Nykredit, and Key Energy describe testing of the Oracle Exalytics In-Memory Machine: to “reach more users than we ever have before”, “to fly through the data without impeding the analytic process”, to “drive our enterprise groups into this tool instead of having departmental solutions”, and for the “advanced visualisation this product enables”. Video

    Read the article

  • OpenWorld General Session 2012: Middleware & JavaOne

    - by JuergenKress
    In this general session, listen to how developers leverage new innovations in their applications and how customers achieve their business innovation goals with Oracle Fusion Middleware. We uploaded the key Fusion Middleware presentations (ppt format) in our SOA Community Workspace: OFM OOW2012.pptx and BPM Preview of Oracle BPM PS6.ppt (Oracle Partner confidential). Please visit our SOA Community Workspace (SOA Community membership required). Read the first feedback from our ACE Directors:
    Guido Schmutz: My presentations at Oracle OpenWorld 2012
    Lucas Jellema: OOW 2012 – Larry Ellison’s Keynote Announcements: Exa, Cloud, Database
    And from Antony Reynolds.
    Many tweets with the latest OOW information have been posted on Twitter under #soacommunity. First impressions are posted on our Facebook page. Thanks for the excellent JavaOne summaries from Amis, JavaOne 2012: Strategy and Technical Keynote, and Dustin, JavaOne 2012: JavaOne Technical Keynote. In summary, JavaOne 2012 was a successful event, and Java is alive and more successful than ever before – make the future Java! IDC confirms it in their latest report, “Java 2.5 years after the acquisition”: “As a result, Java made more significant advancements after the Sun acquisition than in the two and a half years prior to the acquisition. The Java ecosystem is healthy and remains on a growing trajectory.”
    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki
    Technorati Tags: OOW,JavaOne,presentations,video,keynote,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • SQL SERVER – Storing Variable Values in Temporary Array or Temporary List

    - by pinaldave
    SQL Server does not support arrays or a dynamic-length storage mechanism like a list. Absolutely, there are some clever workarounds and a few extraordinary solutions, but not everybody can come up with such a solution. Additionally, sometimes the requirements are so simple that extraordinary coding is not required. Here is the simple case. Let us say here are the values: a, 10, 20, c, 30, d. Now the requirement is to store them in an array or list. It is very easy to do this in C# or C. However, there is no quick way to do the same in SQL Server. Every single time I get such a requirement, I create a table variable and store the values in it. Here is the example, for SQL Server 2012:

    DECLARE @ListofIDs TABLE(IDs VARCHAR(100));
    INSERT INTO @ListofIDs VALUES('a'),('10'),('20'),('c'),('30'),('d');
    SELECT IDs FROM @ListofIDs;
    GO

    When executed, the above script gives the following resultset. The above script works in SQL Server 2012 only; for SQL Server 2008 and earlier versions, run the following code.

    DECLARE @ListofIDs TABLE(IDs VARCHAR(100), ID INT IDENTITY(1,1));
    INSERT INTO @ListofIDs
    SELECT 'a'
    UNION ALL
    SELECT '10'
    UNION ALL
    SELECT '20'
    UNION ALL
    SELECT 'c'
    UNION ALL
    SELECT '30'
    UNION ALL
    SELECT 'd';
    SELECT IDs FROM @ListofIDs;
    GO

    Now in this case, I have to convert numbers to varchars because I have to store mixed datatypes in a single column. Additionally, this quick solution does not give any of the features of arrays (like inserting values in between, or accessing values using an array index). Well, do you ever have to store multiple temporary values in SQL Server? If the count of values is dynamic and the datatype is not specified up front, how will you go about storing values that can be used later in your code? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
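    Since the SQL Server 2008 variant above already carries an IDENTITY column, here is a small hedged addition (same table variable as above, not from the original post) showing the closest thing to array-index access; note that the identity values follow the insert order here in practice:

    SELECT IDs FROM @ListofIDs WHERE ID = 3; -- the third value inserted, '20'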

    Read the article

  • Server-side Architecture for Online Game

    - by Draiken
    Hi, basically I have a game client that has to communicate with a server for almost every action it takes. The game is in Java (using LWJGL), and right now I will start making the server. The base of the game is normally one client communicating with the server alone, but later on I will require several clients to work together for some functionalities. I've already read how the authentication server should be separated, and I intend to do that. The problem is that I am completely inexperienced in this kind of server-side programming; all I've ever programmed were JSF web applications. I imagine I'll use socket connections for pretty much every game communication, since HTTP is very slow, but I still don't really know where to start on my server. I would appreciate reading material or guidelines on where to start, what architecture the game server should have, and maybe some suggestions on frameworks that could help me get the client-server communication going. I've looked into JNAG, but I have no experience with this kind of thing, so I can't really tell if it is a solid and good messaging layer. Any help is appreciated... Thanks!

    Read the article

  • Mac OS needs Windows Live Writer &ndash; badly!

    - by digitaldias
I recently bought a new MacBook Pro (the 13” one) to dive into a new world of programming challenges, as well as to get a more powerful netbook than the Packard Bell Dot I've been using since last year. I've had immense pleasure using the netbook format and its small size in meetings (taking notes with XMind), surfing “anywhere”, and, of course, blogging with Windows Live Writer. So far the Mac is holding up: it's sleek, responsive, and I've even begun looking at coding in Objective-C with it. But in one arena it is severely lacking: blogging software. There is nothing that even comes close to Live Writer for getting your blog posts out. The few blogging applications that do exist on the Mac look and feel medieval in comparison, AND some even cost money! It seems some Mac users actually install a virtual machine on their Mac to run Windows XP just so they can use WLW. I'm not that extreme; instead, I'm hoping that the WLW team will write its awesome application as a Silverlight 4 app. That way, it would run on Mac and Windows (as a desktop app). I wonder if it will ever happen though…   PS: The image is of me; I took it with the built-in camera on the Mac and emailed it to the Windows PC that I am writing on :)

    Read the article

  • How to hire support people?

    - by Martin
I manage a tech support team at a mid-sized software company. We are the last line of support, so issues that we can't fix need to be escalated to the development team. When I joined the company, our team wasn't capable of much beyond using a specific set of troubleshooting steps to solve known issues and escalating anything else to the developers. It's always been a goal of mine for our team to shoulder as much of the support burden as possible without ever bothering a developer. Over the past few years, I, along with several new hires I've made, have made pretty good progress in that direction. We've coded our own troubleshooting tools, which now ship with several of our products. When users have never-before-seen issues, we analyze stack traces and troubleshoot down to the code level, and if we need to submit a bug, half the time we've already identified where in the code the bug is and offered a patch to fix it. Here's the problem I've always had: finding support people capable of the work I've described above is really difficult. I've hired 3 people in the past 3 years, and I've probably looked at several thousand resumes and conducted several hundred phone screens to do so. I know it's pretty well accepted that hiring good people is tough in the tech industry, but it seems that support is especially difficult -- there are clearly thousands of people walking around calling themselves support analysts, but 99%+ of them seemingly aren't capable of anything beyond reading a script. I'm curious whether anyone has experience recruiting the sort of folks I'm talking about, and whether you have any suggestions to share. We've tried all sorts of things -- different job titles/descriptions, using headhunters, etc. And while we've managed to hire a few good folks, it's basically taken us a year to find an appropriate candidate for each opening we've had, and I can't help but wonder if there's something we could be doing differently.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-20

    - by Bob Rhubart
New book: Oracle WebLogic Server 12c: First Look – Congratulations to Michel Schildmeijer on the publication of his new book.
Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 – Win a free pass to #OOW12. These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Submission deadline: July 17. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco.
ODTUG Kscope12 – San Antonio, TX, June 24-28, 2012. Kscope12, sponsored by ODTUG, is your home for Application Express, BI and Oracle EPM, Database Development, Fusion Middleware, and MySQL training by the best of the best!
Eclipse and Oracle Fusion Development – Free Virtual Event, July 10th. Get more out of Eclipse with these useful resources.
How to Create Multiple Internal Repositories for Oracle Solaris 11 | Albert White – Albert White shows you how to create and manage internal repositories for release, development, and support versions of Solaris 11.
Social Technology and the Potential for Organic Business Networks | Michael Fauscette – "An organic business network driven company is the antithesis of a hierarchical, rigid, reactive, process-constrained, and siloed organization."
Cloud Bursting between AWS and Rackspace | High Scalability – Nati Shalom explains "cloud bursting," an interesting hybrid cloud model.
Born-again cloud advocates finally see the light | David Linthicum – "I can't help but wish that we keep an open mind about the next technology evolution when it begins and get religion earlier," says Linthicum.
How to know that a method was run, when you didn’t write that method | RedStack – Middleware A-Team blogger Mark Nelson shares a useful tip for those working with ADF.
Thought for the Day: "There does not now, nor will there ever exist, a programming language in which it is the least bit hard to write bad programs." — L. Flon (Source: SoftwareQuotes.com)

    Read the article

  • MEF, IServiceProvider and Testing Visual Studio Extensions

    - by Daniel Cazzulino
In the latest and greatest version of Visual Studio, MEF plays a critical role, one that makes extending VS much more fun than it ever was. Typically, you just [Export] something, someone [Import]s it, and that's it. MEF in all its glory kicks in and gets all your dependencies satisfied. Cool, you say, so let's now import ITextTemplating and have some T4-based codegen going! Ah, if only it were that easy. It turns out that by default, none of the VS built-in services are exposed to MEF, apparently because there wasn't enough time to analyze the lifetime, initialization, dependencies, etc. of each one before launch, which makes perfect sense. You don't want to blindly export everything just in case. There's also the whole VS package initialization thing, which in this version of VS is not so transparently integrated with the MEF publishing side (i.e. a MEF export from a package can get instantiated before its owning package; in fact, the package can remain unloaded forever and the export will continue to be visible to anyone)… Read full article

    Read the article
