Search Results

Search found 617 results on 25 pages for 'billy jones'.

Page 5/25

  • Exception Handling And Other Contentious Political Topics

    - by Justin Jones
    So about three years ago, around the time of my last blog post, I promised a friend I would write this post. Keeping promises is a good thing, and this is my first step towards easing back into regular blogging. I fully expect him to return from Pennsylvania to buy me a beer over this. However, it’s been an… ahem… eventful three years or so, and blogging, unfortunately, got pushed to the back burner on my priority list, along with a few other career-minded activities. Now that the personal drama of the past three years is more or less resolved, it’s time to put a few things back on the front burner.

    What I consider to be proper exception handling practice is relatively well known these days. There are plenty of blog posts out there already on this topic which more or less echo my opinions; I’ll try to include a few links at the bottom of the post. Several years ago I had an argument with a co-worker who posited that exceptions should be caught at every level and logged. This might seem like sanity on the surface, but the resulting error log looked something like this:

        Error: System.SomeException
        Followed by small stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.
        Error: System.SomeException
        Followed by slightly bigger stack trace.

    These were all the same exception. The problem with this approach is that the error log, if you run any kind of analytics on it, becomes skewed depending on how far up the stack your exception was thrown. To mitigate this problem, we came up with the concept of the “PreLoggedException”. Basically, we would log the exception at the very top level and subsequently throw it back up the stack encapsulated in this pre-logged type, which our logging system knew to ignore. Now the error log looked like this:

        Error: System.SomeException
        Followed by small stack trace.

    Much cleaner, right? Well, there’s still a problem. When your exception happens in production and you go about trying to figure out what happened, you’ve lost more or less all context for where and how this exception was thrown, because all you really know is what method it was thrown in, but really nothing about who was calling the method or why. What gives you this clue is the entire stack trace, which we’re losing here. I believe that was further mitigated by having the logging system pull a system stack trace and add it to the log entry, but what you’re actually getting is the stack for how you got to the logging code. You’re still losing context about the actual error. Not to mention you’re executing a whole slew of catch blocks, which are sloooooooowwwww… In other words, we started with a bad idea and kept band-aiding it until it didn’t suck quite so bad.

    When I argued for not catching exceptions at every level but rather catching them following a certain set of rules, my co-worker warned me, “do yourself a favor, never express that view in any future interviews.” I suppose this is my ultimate dismissal of that advice, but I’m not too worried. My approach to exception handling follows three basic rules. Only catch an exception if:

        1. You can do something about it.
        2. You can add useful information to it.
        3. You’re at an application boundary.

    Here’s what that means:

    1. Only catch an exception if you can do something about it. We’ll start with a trivial example of a login system that uses a file. Please, never actually do this in production code; it’s just a concocted example. So if our code goes to open a file and the file isn’t there, we get a FileNotFound exception. If the calling code doesn’t know what to do with this, it should bubble up. However, if we know how to create the file from scratch, we can create the file and continue on our merry way. When you run into situations like this, though, what should really run through your head is “How can I avoid handling an exception at all?” In this case, it’s a trivial matter to simply check for the existence of the file before trying to open it. If we detect that the file isn’t there, we can accomplish the same thing without having to handle it in a catch block.

    2. Only catch an exception if you can add useful information to it. Continuing with the poorly thought out file-based login system we contrived in part 1, if the code calls a Login(…) method and the FileNotFound exception is thrown higher up the stack, the code that calls Login must account for a FileNotFound exception. This is kind of counterintuitive, because the calling code should not need to know the internals of the Login method, and the data file is an implementation detail. What makes more sense, assuming that we didn’t implement any of the good advice from step 1, is for Login to catch the FileNotFound exception and wrap it in a new exception. For argument’s sake we’ll say LoginSystemFailureException. (Sorry, couldn’t think of anything better at the moment.) This gives us two stack traces, preserving the original stack trace in the inner exception, and it is also much more informative to the calling code.

    3. Only catch an exception if you’re at an application boundary. At some point we have to catch all the exceptions, even the ones we don’t know what to do with. WinForms, ASP.NET, and most other UI technologies have some kind of built-in mechanism for catching unhandled exceptions without fatally terminating the application. It’s still a good idea to somehow gracefully exit the application in this case if possible, because you can no longer be sure what state your application is in, but nothing annoys a user more than an application just exploding. These unhandled exceptions need to be logged, and this is a good place to catch them. Ideally you never want this option to be exercised, but code as though it will be. When you log these exceptions, give them a “Fatal” status (e.g. in Log4Net) and make sure these bugs get handled in your next release.

    That’s it in a nutshell. If you do it right, each exception will only get logged once and with the largest stack trace possible, which will make those 2am emergency severity 1 debugging sessions much shorter and less frustrating. Here are a few people who also have interesting things to say on this topic:

        http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx
        http://www.codeproject.com/Articles/9538/Exception-Handling-Best-Practices-in-NET

    I know there’s more but I can’t find them at the moment.
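    To make rule 2 concrete, here’s a minimal C# sketch of the wrap-and-rethrow pattern described above; treat it as an illustration only, since the file name and the shape of LoginSystemFailureException are invented for the example:

        using System;
        using System.IO;

        public class LoginSystemFailureException : Exception
        {
            public LoginSystemFailureException(string message, Exception inner)
                : base(message, inner) { }
        }

        public class LoginSystem
        {
            public bool Login(string user, string password)
            {
                try
                {
                    // The data file is an implementation detail;
                    // callers shouldn't have to know it exists.
                    using (StreamReader reader = File.OpenText("users.dat"))
                    {
                        // ... validate credentials against the file contents ...
                        return true;
                    }
                }
                catch (FileNotFoundException ex)
                {
                    // Wrap rather than swallow: the original stack trace survives
                    // as InnerException, and the caller sees an exception at the
                    // right level of abstraction.
                    throw new LoginSystemFailureException(
                        "Login data store unavailable.", ex);
                }
            }
        }

    The caller now handles one meaningful exception type, and a single log entry at the application boundary still carries both stack traces.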

    Read the article

  • Dell XPS 15 L502x and Ubuntu 11.04 - HDMI output

    - by Jones
    Recently I bought my dream notebook, a Dell XPS 15, but since then this dream has become a kind of endless nightmare. I'm almost going crazy trying to make my graphics card driver work properly, but it seems to be just impossible. Yes, I have a 2GB NVIDIA GeForce GT 540M (Optimus) in it! It simply doesn't work. Every time I generate the xorg.conf, Ubuntu hangs while starting up, which forces me to remove this file to be able to start the notebook with the standard graphics settings. Another problem is that the Dell XPS 15 does NOT have a VGA output, but an HDMI one. So, to be able to use a second monitor I have to configure it through the NVIDIA X Server Settings, which only works if the driver is properly initialized with the xorg.conf. I've also tried to make it work with Bumblebee, but unfortunately it didn't help me much with the HDMI output. Do you guys have any idea how to solve this deadlock? Is there any way for me to use my second monitor?

    Read the article

  • Why does the location of my vehicle spawner change when I open a matinee?

    - by Gareth Jones
    I'm doing work with InterActors and vehicle spawners in Unreal Tournament 3's editor, and have it set up like so (the walkway the spawner sits on is the InterActor). However, if I go into Kismet and open the matinee that handles the InterActor, the vehicle spawner appears to move. It does look to me like the editor is moving it out of the way so I can see the InterActor (which would be very clever), because only the image of the vehicle moves, not the gizmo, nor does the vehicle spawn in that location in game. Is this the case?

    Read the article

  • More on PHP and Oracle 11gR2 Improvements to Client Result Caching

    - by christopher.jones
    Oracle 11.2 brought several improvements to Client Result Caching. CRC is a way for the results of queries to be cached in the database client process for reuse. In an Oracle OpenWorld presentation "Best Practices for Developing Performant Application", my colleague Luxi Chidambaran had a (non-PHP generated) graph for the Niles benchmark that shows a DB CPU reduction up to 600% and response times up to 22% faster when using CRC.

    Sometimes CRC is called the "Consistent Client Cache" because Oracle automatically invalidates the cache if table data is changed. This makes it easy to use without needing application logic rewrites. There are a few simple database settings to turn on and tune CRC, so management is also easy.

    PHP OCI8, as a "client" of the database, can use CRC. The cache is per-process, so plan carefully before caching large data sets. Tables that are candidates for caching are look-up tables where the network transfer cost dominates.

    CRC is really easy in 11.2; I'll get to that in a moment. It was also pretty easy in Oracle 11.1, but it needed some tiny application changes. In PHP it was used like:

        $s = oci_parse($c, "select /*+ result_cache */ * from employees");
        oci_execute($s, OCI_NO_AUTO_COMMIT); // Use OCI_DEFAULT in OCI8 <= 1.3
        oci_fetch_all($s, $res);

    I blogged about this in the past. The query had to include a specific hint that you wanted the results cached, and you needed to turn off auto committing during execution, either with the OCI_DEFAULT flag or its new, better-named alias OCI_NO_AUTO_COMMIT. The no-commit flag rule didn't seem reasonable to me because most people wouldn't be specific about the commit state for a query.

    In Oracle 11.2, DBAs can nominate tables for caching, either with CREATE TABLE or ALTER TABLE. That means you don't need the query hint anymore. As well, the no-commit flag requirement has been lifted. Your code can now look like:

        $s = oci_parse($c, "select * from employees");
        oci_execute($s);
        oci_fetch_all($s, $res);

    Since your code probably already looks like this, your DBA can find the top queries in the database and simply tune the system by turning on CRC in the database and issuing an ALTER TABLE statement for candidate tables. Voila.

    Another CRC improvement in Oracle 11.2 is that it works with DRCP connection pooling. There is some fine print about what is and isn't cached; check the Oracle manuals for details. If you're using 11.1 or non-DRCP "dedicated servers", then make sure you use oci_pconnect() persistent connections. Also, in PHP, don't bind strings in the query, although binding as SQLT_INT is OK.
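    For what it's worth, the table nomination mentioned above is a one-line DDL change; a sketch of the 11.2 syntax against the same example table (the table name is illustrative):

        -- Mark a look-up table so its query results are cached on the client
        ALTER TABLE employees RESULT_CACHE (MODE FORCE);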

    Read the article

  • iOS UITab Bar Icons

    - by Richard Jones
    I’ve been struggling trying to create the icons that are used for the different tabs on an iPhone app. The difficulty is that each image is a PNG, but alpha channels are used to represent their selected state. I’ve just come across this fab tool that takes all the pain away; using 30x30 png images, I was able to create a really nice set of icons. http://scottpenberthy.com/tab/ (Respect)

    Read the article

  • Can't complete dropbox installation from behind proxy in Ubuntu 11.10

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the Software Centre), but once I click OK on the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives thus far (where ** is my password):

        • Added mj22:**@proxy.waikato.ac.nz:80 information to the network proxy settings under Network in Settings.
        • Added http_host and http_port variables under gconf-editor > system > proxy.
        • Added 'host', 'authentication_password', 'authentication_user' and ticked 'user authentication' and 'use_http_proxy' under gconf-editor > system > http_proxy.
        • Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc.
        • Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting Software Centre retrieve packages).

    I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and Software Centre can download packages, but that's it.

    Related issues: The Software Centre can't fetch reviews (but can download packages). When trying to add an online account in GNOME 3, a dialog pops up that reads "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)".

    Updates: After some time (10 mins or so) Dropbox shows an error dialog box that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?
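    (As a general shell aside, not Dropbox-specific: one quick way to answer that last question is

        # Show any proxy-related environment variables in the current shell
        env | grep -i proxy

    run from the same terminal session you expect Dropbox to inherit.)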

    Read the article

  • WSS 3.0 to SharePoint 2010: Tips for delaying the Visual Upgrade

    - by Kelly Jones
    My most recent project has been to migrate a bunch of sites from WSS 3.0 (SharePoint 2007) to SharePoint Server 2010. The users are currently working with WSS 3.0 and Office 2003, so the new ribbon-based UI in 2010 will be completely new. My client wants to avoid the new SharePoint 2010 look and feel until they’ve had time to train their users, so we’ve been testing the upgrades by keeping them with the 2007 user interface.

    Permission to perform the Visual Upgrade

    One of the first things we noticed was the default permissions for who was allowed to switch the UI from 2007 to 2010. By default, site collection administrators and site owners can do this. Since we wanted to more tightly control the timing of the new UI, I added a few lines to the PowerShell script that we are using to perform the migration. This script creates the web application, sets the User Policy, and then does a Mount-SPDatabase to attach the old 2007 content database to the 2010 farm. I added the following steps after the Mount-SPDatabase step:

        # Remove the visual upgrade option for site owners;
        # it remains for site collection administrators
        foreach ($sc in $WebApp.Sites) {
            foreach ($web in $sc.AllWebs) {
                # Visual Upgrade permissions for the site/subsite (web)
                $web.UIversionConfigurationEnabled = $false;
                $web.Update();
            }
        }

    These script steps loop through each Site Collection in a particular web application ($WebApp), then loop through each subsite ($web) in the Site Collection ($sc) and disable the Site Owner’s permission to perform the Visual Upgrade. This is equivalent to going to the Site Collection administrator settings page -> Visual Upgrade and selecting “Hide Visual Upgrade”. Since only IT people have Site Collection administrator privileges, this will allow IT to control the timing of the new 2010 UI rollout.

    Newly created subsites

    Our next issue was brought to our attention by SharePoint Joel’s blog post last week (http://www.sharepointjoel.com/Lists/Posts/Post.aspx?ID=524). In it, he lists some updates about the 2010 upgrade, and his fourth point was one that I hadn’t seen yet:

        4. If a 2007 upgraded site has not been visually upgraded, the sites created underneath it will look like 2010 sites. While this is something I’ve been aware of, I think many don’t realize how this impacts common look and feel for master pages, and how it impacts good navigation and UI. As well, depending on your patch level you may see hanging behavior in the list picker. The site and list creation Silverlight control in Internet Explorer is looking for resources that don’t exist in the galleries in the 2007 site, and hence it continues to spin and spin and eventually times out. The workaround is to upgrade to SP1, or use Chrome or Firefox, which won’t attempt to render the Silverlight control. When the root site collection is a 2007 site and has its set of galleries, and the children are 2010 sites, there is some strange behavior linked to the way that the galleries work and pull from the parent.

    Our production SharePoint 2010 farm has SP1 installed, as well as the December 2011 Cumulative Update, so I think the “hanging behavior” he mentions won’t affect us. However, since we want to control the rollout of the UI, we are concerned that new subsites will have the 2010 look and feel, no matter what the parent site has. OK, time to dust off my developer skills.

    I first looked into using feature stapling, but I couldn’t get that to work (although I’m pretty sure I had everything wired up correctly). Then I stumbled upon SharePoint 2010’s web events, a great way to handle this. Using Visual Studio 2010, I created a new SharePoint project and added a Web Event Receiver. In the Event Receiver class, I used the WebProvisioned method to check if the parent site is a 2007 site (UIVersion = 3), and if so, set the newly created site to 2007:

        /// <summary>
        /// A site was provisioned.
        /// </summary>
        public override void WebProvisioned(SPWebEventProperties properties)
        {
            base.WebProvisioned(properties);

            try
            {
                SPWeb curweb = properties.Web;

                if (curweb.ParentWeb != null)
                {
                    // Check if the parent website has the 2007 look and feel
                    if (curweb.ParentWeb.UIVersion == 3)
                    {
                        // Since the parent site has the 2007 look and feel,
                        // apply that look and feel to the current web
                        curweb.UIVersion = 3;
                        curweb.Update();
                    }
                }
            }
            catch (Exception)
            {
                // TODO: Add logging for errors
            }
        }

    This event is part of a Feature that is scoped to the Site level (Site Collection). I added a couple of lines to my migration PowerShell script to activate the Feature for any site collections that we migrate.

    Plan going forward

    The plan going forward is to perform the visual upgrade after the users for a particular site collection have gone through 2010 training. If we need to do several site collections at once, we’ll use a PowerShell script to loop through each site collection to update the sites to 2010. If it’s just one or two, we’ll be using the “Update All Sites” button on the Visual Upgrade page for Site Collection Administrators. The custom code for newly created sites won’t need to be changed, since it relies on the UI version of the parent site. If the parent is 2010, then the new site will look 2010.
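    The activation lines themselves aren’t shown in the post; a rough sketch using the Enable-SPFeature cmdlet (the feature name "WebUIVersionEventReceiver" is a made-up placeholder) would look something like:

        # Activate the web-event feature on every site collection being migrated
        foreach ($sc in $WebApp.Sites) {
            Enable-SPFeature -Identity "WebUIVersionEventReceiver" -Url $sc.Url
        }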

    Read the article

  • SharePoint 2010: Taxonomy feature (Feature ID "73EF14B1-13A9-416b-A9B5-ECECA2B0604C") has not been activated

    - by Kelly Jones
    I ran into an error message in SharePoint 2010 that took me a few minutes to figure out. I was working on a demo of SharePoint 2010’s managed metadata and getting an error when I was adding a Managed Metadata column to a library. A little Google research turned up this blog post: The Taxonomy feature (Feature ID "73EF14B1-13A9-416b-A9B5-ECECA2B0604C") has not been activated. As Michal Pisarek pointed out last June, you get the error because the Taxonomy feature isn’t activated. Like Michal, I’m not sure how this happened to my installation, but the fix he documented works: activating the feature using STSADM.
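    For reference, a hedged sketch of that STSADM call, using the Feature ID from the error message (the site URL is a placeholder):

        stsadm -o activatefeature -id 73ef14b1-13a9-416b-a9b5-ececa2b0604c -url http://yourserver/sites/yoursite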

    Read the article

  • Farmyard

    - by Richard Jones
    Moooooooo

    For a while now we’ve been using Apple’s enterprise device app distribution mechanism. This allows you to have a user click on a URL on their iOS device, and it pulls down a new version of an enterprise app off of our servers. It’s really nice; have a look at http://developer.apple.com/library/ios/#featuredarticles/FA_Wireless_Enterprise_App_Distribution/Introduction/Introduction.html

    I’ve embedded this into a check on application launch: a web-service is called to detect whether a newer version of the software is available. It then calls the URL to the app and a new version is deployed. You can alert users that a new app update is available by sending them a push notification (see screenshot at the top). We send our push notifications out to users using a simple C# service.

    The fun part is this. You can instruct the push notification to play a sound (embedded in the app already). So our push notifications play a random farmyard noise, i.e. from a selection of:

        cow.wav
        dogbrk.wav
        duck.wav
        goose.wav
        horse.wav
        lamb.wav
        monkey.wav (left field, I know)
        rooster.wav

    Imagine my amusement at being able to periodically send out an update and watch our office (of about 60 people) turn into a farm for a few seconds. I’ve messed up a few times, with people being interrupted on customer conference calls, but people seem good humoured about it (so far). Simple(ish) pleasures…
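    Since the post mentions a simple C# service doing the sending, here’s a hedged sketch (not the actual service code; the alert text and sound list are illustrative) of how a random sound ends up in the APNs JSON payload, where "sound" names a .wav bundled in the app:

        using System;

        class FarmyardPayloadDemo
        {
            static void Main()
            {
                string[] sounds = { "cow.wav", "duck.wav", "horse.wav", "rooster.wav" };
                var rng = new Random();

                // Minimal APNs payload; token handling and the SSL socket
                // to Apple's push gateway are omitted.
                string payload =
                    "{\"aps\":{\"alert\":\"A new version is available\","
                    + "\"sound\":\"" + sounds[rng.Next(sounds.Length)] + "\"}}";

                Console.WriteLine(payload);
            }
        }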

    Read the article

  • Getting Windows Azure SDK 1.1 To Talk To A Local DB

    - by Richard Jones
    Just found this, if you’re using Azure SDK 1.1, which you probably will be if you’ve moved to Visual Studio 2010. To change the default database to something other than SQLEXPRESS for Development Storage, look at http://msdn.microsoft.com/en-us/library/dd203058.aspx. At the bottom it states:

        Using Development Storage with SQL Server Express 2008
        By default the local Windows group BUILTIN\Administrators is not included in the SQL Server sysadmin server role on new SQL Server Express 2008 installations. Add yourself to the sysadmin role in order to use the Development Storage services on SQL Server Express 2008. See SQL Server 2008 Security Changes for more information.

        Changing the SQL Server instance used by Development Storage
        By default, Development Storage will use the SQL Express instance. This can be changed by calling "DSInit.exe /sqlinstance:<SQL Server instance>" from the Windows Azure SDK command prompt.
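    As a worked example of that command (an assumption on my part rather than something from the article: I believe "." selects the default local SQL Server instance, so verify against the DSInit documentation):

        DSInit.exe /sqlinstance:.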

    Read the article

  • Unreal 3 Editor (Unreal Tournament 3) Why does the X Y Z translations now rotate along with my static meshes?

    - by Gareth Jones
    So I was making a map for UT3, using the Unreal 3 Editor provided, and all was going well. However, I was doing some work with InterpActors and vehicle spawners when I must have hit a key (or otherwise changed something) by mistake. Now the X Y Z translation arrows that are used to move objects around in the editor rotate along with the object (I've put images down below to help show what I mean). This is very annoying because it also changes the direction the arrow keys move a rotated object; in the example below, the Down arrow key will now move the object to the right. How can I fix this? (Note: both images are taken from the same viewpoint.)

    Before rotation: [screenshot]
    After rotation: [screenshot]

    P.S. If someone could please provide me with the correct/better name for the X Y Z "things", it would be much appreciated, thanks!

    Read the article

  • First Windows Phone 7 – Mobile LOB App

    - by Richard Jones
    So I spent a couple of hours yesterday building my first Windows Phone 7 Series application. I still can’t get used to saying Windows Phone 7 Series, so I think I’ll just go with WP7. I must say I’m really impressed. Calling web-services is a breeze, and laying out the user interface is very straightforward. I had made calls into Dynamics NAV using web-services in under 10 minutes. Working in XAML takes a bit of getting used to; I’m not trying to do anything too clever yet. One thing I will point out: to transition from one XAML page to the next, use

        this.NavigationService.Navigate(new Uri("/Primary.xaml", UriKind.Relative));

    Coming from the Compact Framework, this is the equivalent of a Window.Show. It’s so nice to be able to talk nicely again about Windows-based mobile development!

    Read the article

  • Microsoft Certifications &ndash; how to prep? and why?

    - by Kelly Jones
    I often get asked by my colleagues, “How do you prepare for Microsoft exams?” Well, the answer for me is a little complicated, so I thought I’d write up here what I do.

    The first thing I do is go to Microsoft’s website to find the exam that I need to take. If you’re looking to get a particular certification, then their site lists the exam or exams that you’ll need to pass. If you’ve already taken an exam, you can log onto the MCP website and use their certification planner. This little tool tells you what tests you need, based on the exams you’ve already passed. It is very helpful with the certifications that are multiple tests, and especially ones that have electives.

    Once you’ve identified the test, you can use Microsoft’s website to see the topics that it covers. This is a good outline to follow when you study. I’ll keep this handy to reference back to throughout my studying, to make sure that I’m covering all the topics I need to know.

    The next step is probably where I am a little different from others. If the exam outline covers material that I’ve already been working with, then I’ll skip a lot of the studying and go directly to the practice tests. However, if I’m looking at the outline and wondering “how in the world do you do that?”, then it’s time to hit the books.

    So, where to find study materials? Try typing the exam number into any search engine. You’ll typically find a ton of resources. If you’re lucky, you’ll find books that others recommend based on their studying and exam experience. As a Sogeti employee, I have access to three really good resources: an internal company list of all of the consultants who have passed particular tests (on our Connex website), Books 24x7, and Transcender practice exams.

    Once my studying is done (either through books or experience), I’ll go through the practice exams. I find them really helpful in getting my knowledge lined up to the thinking process that the exam writers use. If I’m relying on my experience, then this really helps me to identify gaps in my knowledge that I’ll need to fill.

    That’s about it. If I’m doing OK on the practice exams, then I’ll take the real thing. I’ve found that the practice exams are usually more difficult than the real thing.

    Oh, one other thing I do related to Microsoft exams: I try to take any beta exams that Microsoft makes available that fall into my skill set. Microsoft has started a blog to announce these, and the seats usually fill up really quickly. The blog is at http://blogs.technet.com/betaexams/. You don’t get your results instantly, like a normal exam; instead you have to wait for everyone to finish taking the beta exams and for Microsoft to determine which questions they are using and which they are dropping. So, be prepared to wait six to eight weeks for your results.

    Read the article

  • How can I get Xubuntu's workspace switcher to show the proper sized desktop?

    - by Jennifer Jones
    I recently installed the Xubuntu desktop over my Ubuntu 12.04 installation, and have been customizing it to my liking. My laptop comes to class with me, and when it's not out of the room, it's usually connected to a second monitor. When I am not connected to the monitor, the workspace switcher shows my windows all inside about half of each workspace, even though there's only one monitor connected. I'm looking for a way to get it to show workspaces that reflect only the connected monitor's size and nothing more. Is there a way to fix that? I can attach a picture of the problem if necessary.

    Read the article

  • How to write your unit tests to switch between NUnit and MSTest

    - by Justin Jones
    On my current project I found it useful to use both NUnit and MSTest for unit testing. When using ReSharper for running unit tests, it simply works better with NUnit, and on large-scale projects NUnit tends to run faster. We would have just used NUnit for everything, but MSTest gave us a few bonuses out of the box that were hard to pass up: namely code coverage (without having to shell out thousands of extra dollars for the privilege) and tests integrated into the build process. I’m one of those guys who wants the build to fail if the unit tests don’t pass. If they don’t pass, there’s no point in sending that build on to QA.

    Making the build work with MSTest is easiest if you just create a unit test project in your solution. This adds the right references and project type GUIDs in the project file so that everything automagically just works. Then (using NuGet, of course) you add in NUnit. At the top of your test file, remove the using statements that refer to MSTest and replace them with the following:

        #if NUNIT
        using NUnit.Framework;
        #else
        using TestFixture = Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute;
        using Test = Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute;
        using TestFixtureSetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
        using SetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        #endif

    Basically I’m taking the NUnit naming conventions and redirecting them to MSTest. You can go the other way, of course. I only chose this direction because I had already written the tests as NUnit tests. NUnit and MSTest provide largely the same functionality with slightly differing class names. There are few actual differences between them, and I have not run into them on this project so far. To run the tests as NUnit tests, simply open up the project properties tab and add the compiler directive NUNIT. Remove it, and you’re back in MSTest land.
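    As a quick illustration of the aliasing in action (the test class and assertions here are invented for the example), a test written with NUnit names then compiles under either framework:

        #if NUNIT
        using NUnit.Framework;
        #else
        using TestFixture = Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute;
        using Test = Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute;
        using SetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        #endif

        [TestFixture]
        public class CalculatorTests
        {
            private int _seed;

            [SetUp]
            public void Init()
            {
                _seed = 40; // runs before each test in both frameworks
            }

            [Test]
            public void Add_ReturnsSum()
            {
                Assert.AreEqual(42, _seed + 2); // Assert.AreEqual exists in both
            }
        }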

    Read the article

  • Windows 7 Virtual PC - “RPC server unavailable”

    - by Kelly Jones
    I use Windows 7 Virtual PC on my current project, and I often bring home the files so I can work some in the evenings. Since my VHDs are large, I’ll only copy the undo disks, saved state, and virtual machine config files from my external drive. I copy them to a small portable drive, and once I get home, I’ll copy them to a large external drive. I’ve done this for over a year, but recently I started getting an error when I tried to start the VPC after the copying was finished. It would open the initial window with the progress bar, but eventually the bar would stop, turn red, and then the error “RPC server unavailable” would appear. When I first started seeing these, I’d try again, but no luck. After some testing, it turns out that my small portable drive is apparently going bad, so it was corrupting the files. Lucky for me, I never overwrote my good copies with corrupted copies, at least not at both the office and at home.

    Read the article

  • Installing wireless drivers without internet access [closed]

    - by Lucas Jones
    Possible Duplicate: How can I install and download drivers without internet? (This is related to my other question; my approach there didn't work.) My friend has (I'm quite sure) a Broadcom wireless chipset. However, he doesn't have any wired internet access on the machine, so his only option is to boot into Windows (he is using Wubi) and download packages there. This means we can't use the Hardware Drivers dialog to install the drivers. He can't fetch the repository information, so the Broadcom driver packages aren't showing up in Synaptic. Is there any way to get Wi-Fi working?

    Read the article

  • Tic tac toe game AI in AS3

    - by David Jones
    I'm looking into creating a simple tic-tac-toe/noughts-and-crosses game in ActionScript 3 and am trying to understand the ideas behind the AI used in a game like this. I've seen some simplistic examples online, but from what I've read, a game tree or something like minimax is the best way to go about this. Can anyone help explain or reference any good examples of this? I've seen that there is a library called as3ds (data structures for game developers) which has a number of classes that might help tie this together. Any info/examples or help is much appreciated.
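    To give a feel for the idea, here is a minimal, unoptimized minimax sketch in ActionScript 3; the board encoding and helper names are invented for illustration (an Array of 9 ints: 0 = empty, 1 = computer, -1 = opponent):

        // The eight winning lines of a tic-tac-toe board.
        const LINES:Array = [
            [0,1,2],[3,4,5],[6,7,8],  // rows
            [0,3,6],[1,4,7],[2,5,8],  // columns
            [0,4,8],[2,4,6]           // diagonals
        ];

        // Returns 1 if the computer has three in a row, -1 for the opponent, 0 otherwise.
        function winner(board:Array):int {
            for each (var line:Array in LINES) {
                var sum:int = board[line[0]] + board[line[1]] + board[line[2]];
                if (sum == 3)  return 1;
                if (sum == -3) return -1;
            }
            return 0;
        }

        // Scores a position for 'player' (1 maximizes, -1 minimizes).
        function minimax(board:Array, player:int):int {
            var w:int = winner(board);
            if (w != 0) return w;

            var best:int = 0;
            var found:Boolean = false;
            for (var i:int = 0; i < 9; i++) {
                if (board[i] != 0) continue;
                board[i] = player;                    // try the move
                var score:int = minimax(board, -player);
                board[i] = 0;                         // undo it
                if (!found || (player == 1 && score > best)
                           || (player == -1 && score < best)) {
                    best = score;
                    found = true;
                }
            }
            return found ? best : 0;                  // no moves left: a draw
        }

    Choosing the computer's actual move is then one more loop over the empty squares, keeping whichever square scores best under minimax. For a game as small as tic-tac-toe, this bare recursion already searches the full game tree, so extra data structures (as3ds or otherwise) are optional.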

    Read the article

  • My 5th App

    - by Richard Jones
    So, I’ve just completed my 5th commercial iPhone app. Whenever I move to a new programming language, I take a test application and port it to learn. So my equivalent of a “Hello World” app is: http://itunes.apple.com/gb/app/iching-master/id424495901?mt=8 I built this as an app about a year ago, but it worked so well on iOS that I figured I would get it published. Technorati Tags: I-Ching, iChing, iPhone, iTunes

    Read the article

  • Wireless USB Adapter driver configuration

    - by Jones
    First, I would like to acknowledge that this site is for Ubuntu, not for BackTrack. However, I'm posting my BackTrack question here since I find similar problems in the BackTrack forum are going unanswered, and the Linux/Unix site of StackExchange has very little activity. I have a USB Wi-Fi adapter (iBall Baton) with the RTL8191SU chipset. It displays available Wi-Fi networks in the Wicd manager, but while trying to connect it gives me "bad password". When I try to run airmon-ng, it doesn't return the monitor mode. I tried replacing the drivers with Ubuntu 12.10's, but that completely broke the device's functionality, so I restored the original drivers. I'm interested in compiling the drivers myself if anyone can indicate the method. Awaiting ideas!

    Read the article

  • Licensing a collaborative research project

    - by Marcus Jones
    I am involved with an international research project which involves many different universities, national labs, and companies. The project is funded by national grants and in-kind support. One task in the project is to develop code to streamline workflow in our domain (energy simulation) by scripting common pre- and post-processing tasks for different tools. We want this code to be freely distributable to the simulation community. How can we ensure that this effort is digestible by the legal departments of these different parties, such that the people involved can freely code?

    Read the article

  • TeamViewer 8 beta won't run

    - by Conner Jones
    I installed TeamViewer 7, and then one of my friends using Windows got version 8, so I installed the beta of version 8 for Linux. When I try to run it from the terminal I get these errors. I attempted to do as the comment below said, and when trying to run TeamViewer I still got an error:

        conner@DemonicGrace:~$ teamviewer
        Init...
        Checking setup...
        Launching TeamViewer...
        wine: cannot find L"C:\windows\system32\winemenubuilder.exe"
        err:wineboot:ProcessRunKeys Error running cmd L"C:\windows\system32\winemenubuilder.exe -a -r" (2)
        err:winedevice:ServiceMain driver L"MountMgr" failed to load
        err:secur32:SECUR32_initSchannelSP libgnutls not found, SSL connections will fail
        fixme:heap:HeapSetInformation (nil) 1 (nil) 0
        fixme:ole:CoInitializeSecurity ((nil),-1,(nil),(nil),0,3,(nil),0,(nil)) - stub!
        fixme:heap:HeapSetInformation (nil) 1 (nil) 0
        fixme:process:SetProcessShutdownParameters (00000100, 00000000): partial stub.
        fixme:resource:GetGuiResources (0xffffffff,0): stub
        fixme:win:EnumDisplayDevicesW ((null),0,0x32df64,0x00000000), stub!
        fixme:win:EnumDisplayDevicesW (L"\\.\DISPLAY1",0,0x32dc1c,0x00000000), stub!
        fixme:win:EnumDisplayDevicesW ((null),1,0x32df64,0x00000000), stub!

    Please help me out if anyone has ideas; I'm more than willing to listen.

    Read the article

  • What is an effective way to convert a shared memory-mapped system to another data access model?

    - by Rob Jones
    I have a code base that is designed around shared memory. Each process that needs to access the memory maps it into its own address space. The data structures in the shared memory are directly accessed; that is, there is no API. For example, assume the following:

        typedef struct {
            int x;
            int y;
            struct {
                int a;
                int b;
            } z;
        } myStruct;

        myStruct s;

    Then a process might access this structure as:

        myStruct *s = mapGlobalMem();

    And use it as:

        int tmpX = s->x;

    The majority of the information in the global structure is configuration information that is set once and read many times. I would like to store this information in a database and develop an API to access the database. The problem is, these references are sprinkled throughout the code. I need a way to parse the code and identify global structure references that will need to be refactored. I've looked into using ANTLR to create a parser that will identify references to a small set of structures and enter them into a custom symbol table. I could then use this symbol table to identify which source files need to be refactored. It looks like a promising approach. What other approaches are there? Of course, I'm looking for a programmatic approach. There are far too many source files to examine each one visually. This is all ordinary ANSI C. Nothing else.

    Read the article
