Search Results

Search found 1213 results on 49 pages for 'jeff weber'.

Page 11/49 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF.

    After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data.

    CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name.

    In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORMs. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of NHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework.

    Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there.

    The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff.

    Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time-consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver.

    Not everything is easy, though. When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old-school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities, because EF doesn't know what you're changing. Not really a big deal.

    There are a number of takeaways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORMs. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch.

    Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
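    To make the Entity Framework part of this concrete, here is a minimal "code first" sketch of the kind of mapping and one-line repository read described above. The entity names, table names and properties are illustrative guesses, not the actual CoasterBuzz schema:

        // Illustrative sketch only: names and tables are invented, not the real schema.
        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Linq;

        public class Park
        {
            public int ParkId { get; set; }
            public string Name { get; set; }
        }

        public class Coaster
        {
            public int CoasterId { get; set; }
            public string Name { get; set; }

            // Navigation properties: EF hands back the park, the instances and
            // the banner image as one object graph.
            public virtual Park Park { get; set; }
            public virtual ICollection<CoasterInstance> Instances { get; set; }
            public virtual CoasterImage BannerImage { get; set; }
        }

        public class CoasterInstance
        {
            public int CoasterInstanceId { get; set; }
            public string Name { get; set; }
        }

        public class CoasterImage
        {
            public int CoasterImageId { get; set; }
            public byte[] ImageData { get; set; }
        }

        public class CoasterBuzzContext : DbContext
        {
            public DbSet<Park> Parks { get; set; }
            public DbSet<Coaster> Coasters { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Deviating from EF's table-name conventions, as described above.
                // These table names are placeholders.
                modelBuilder.Entity<Park>().ToTable("tbl_Park");
                modelBuilder.Entity<Coaster>().ToTable("tbl_Coaster");
            }
        }

        public class CoasterRepository
        {
            // The "one line of code" read: a rich object graph, ready for the view.
            public Coaster Get(int id)
            {
                using (var context = new CoasterBuzzContext())
                {
                    return context.Coasters
                        .Include(c => c.Park)
                        .Include(c => c.Instances)
                        .Include(c => c.BannerImage)
                        .SingleOrDefault(c => c.CoasterId == id);
                }
            }
        }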

    Read the article

  • Podcast with AJI about iOS development coming from a .NET background

    - by Tim Hibbard
    I talked with Jeff and John from AJI Software the other day about developing for the iOS platform. We chatted about learning Xcode and Objective-C, provisioning devices and the app publishing process. We all have a .NET background and made lots of comparisons between the two platforms/ecosystems/fanbois. They even let me throw in a plug for Christian Radio Locator. Jeff was my first contact with the Kansas City .NET community. It was probably about 10 years ago. He pushed me to talk more (and rescued me from my first talk that bombed) and blog more. One time a group of us took a 16-hour car trip to South Carolina for a code camp and live podcasted the whole thing. Good times. Listen to the show, and click here to subscribe to more AJI Reports in the future.

    Read the article

  • Brightness control doesn't work on a MacBook Pro 5.5

    - by Jeff Labonte
    I recently installed Ubuntu on my MacBook Pro 5.5 (mid 2009). I have a problem with the brightness control. The thing is, when I try to reduce the brightness of my display, which would help my battery life dramatically, it doesn't work. I tried to use the system preferences, but with no success. I also tried to see if anything changes when I disconnect the computer from the charger; the screen will dim, but once again I failed. I have tried many things, such as pommed, and many other little things that I have read about on forums.

    Read the article

  • open source database project

    - by Jeff V
    What is the best way to build an open source database? I would like to build a database of all vehicles and the related maintenance information (e.g. oil weight, quantity, tire pressure, windshield wipers, etc.). Currently this information is fragmented or just not put online in an open way. Once collection begins, I would want to import it into a DB and then be able to distribute it freely. Is there a process (site or group) that I can use to start gathering this information in a reliable and verifiable way? Are there any issues that I should watch out for?

    Read the article

  • How to diagnose Ubuntu CPU spikes / IO wait?

    - by Jeff Welling
    I'm using Ubuntu and every couple of minutes it goes unresponsive for a half second to a full second, which isn't normally a problem but makes trying to code extremely frustrating when you're trying to hit backspace or navigate the code and nothing is happening. The problem is, the freezes are so brief that top doesn't have time to show me what is spiking the CPU (assuming something is, but I don't know what else could cause this). Does anyone know how to troubleshoot this performance issue? Edit: I've tried logging in with Gnome Classic (No Effects) instead of Unity but it still freezes up every once in a while. Edit: The CPU graph doesn't seem to be showing any actual spikes, so it seems you were right and my original diagnosis of CPU spikes being the problem was incorrect; I now suspect IO wait. I don't recall this happening for the brief few weeks I had Windows 7 Starter running on it though, which leads me to believe it isn't (just?) the hardware. Is there anything I can tweak to improve this? I'm using an Acer Aspire One D257, with Ubuntu 11.10. Edit: Output of dmesg is at http://paste.ubuntu.com/1060054/ and kern.log is at http://paste.ubuntu.com/1060055/

    Read the article

  • New Look for Geekswithblogs.net Homepage

    - by Jeff Julian
    I wanted to alert everyone to the new look of the Geekswithblogs.net Community Page.  I removed the tabs, cleaned up the posts and fonts, replaced the logo with our brighter logo, and mucked with the CSS and HTML to drive a smaller footprint.  With this update, the homepage is now HALF THE SIZE in KBs!  I still have some more AJAX calls I want to implement to make the footprint even smaller. Let me know what you think.  I feel it is easier to read through the posts now.

    Read the article

  • Bummer | Visual Studio 2012 Error on Web Publish – July Update

    - by Jeff Julian
    Always a bummer when you update a product and something stops working.  I am hoping it is an installation issue, but each time I go to run “Publish...” in my Web Application, the publish works, but Visual Studio 2012 crashes.  I just noticed this beginning after I ran the Visual Studio 2012 RC July Updates. Can someone else give it a go and see if they see the same problem?  I am using File System publishing. Technorati Tags: Visual Studio 2012 RC, Error

    Read the article

  • Learn Many Languages

    - by Jeff Foster
    My previous blog, Deliberate Practice, discussed the need for developers to “sharpen their pencil” continually, by setting aside time to learn how to tackle problems in different ways. However, the Sapir-Whorf hypothesis, a contested and somewhat controversial concept from language theory, seems to hold reasonably true when applied to programming languages. It states that: “The structure of a language affects the ways in which its speakers conceptualize their world.” If you’re constrained by a single programming language, the one that dominates your day job, then you only have the tools of that language at your disposal to think about and solve a problem. For example, if you’ve only ever worked with Java, you would never think of passing a function to a method.

    A good developer needs to learn many languages. You may never deploy them in production, you may never ship code with them, but by learning a new language, you’ll have new ideas that will transfer to your current “day-job” language. With the abundant choices in programming languages, how does one choose which to learn? Alan Perlis sums it up best: “A language that doesn’t affect the way you think about programming is not worth knowing.” With that in mind, here’s a selection of languages that I think are worth learning and that have certainly changed the way I think about tackling programming problems.

    Clojure
    Clojure is a Lisp-based language running on the Java Virtual Machine. The unique property of Lisp is homoiconicity, which means that a Lisp program is a Lisp data structure, and vice-versa. Since we can treat Lisp programs as Lisp data structures, we can write our code generation in the same style as our code. This gives Lisp a uniquely powerful macro system, and makes it ideal for implementing domain specific languages. Clojure also makes software transactional memory a first-class citizen, giving us a new approach to concurrency and dealing with the problems of shared state.

    Haskell
    Haskell is a strongly typed, functional programming language. Haskell’s type system is far richer than C# or Java, and allows us to push more of our application logic to compile-time safety. If it compiles, it usually works! Haskell is also a lazy language – we can work with infinite data structures. For example, in a board game we can generate the complete game tree, even if there are billions of possibilities, because the values are computed only as they are needed.

    Erlang
    Erlang is a functional language with a strong emphasis on reliability. Erlang’s approach to concurrency uses message passing instead of shared variables, with strong support from both the language itself and the virtual machine. Processes are extremely lightweight, and garbage collection doesn’t require all processes to be paused at the same time, making it feasible for a single program to use millions of processes at once, all without the mental overhead of managing shared state.

    The Benefits of Multilingualism
    By studying new languages, even if you won’t ever get the chance to use them in production, you will find yourself open to new ideas and ways of coding in your main language. For example, studying Haskell has taught me that you can do so much more with types and has changed my programming style in C#. A type represents some state a program should have, and a type should not be able to represent an invalid state.

    I often find myself refactoring methods like this…

        void SomeMethod(bool doThis, bool doThat)
        {
            if (!(doThis ^ doThat))
                throw new ArgumentException("At least one arg should be true");
            if (doThis) DoThis();
            if (doThat) DoThat();
        }

    …into a type-based solution, like this:

        enum Action { DoThis, DoThat, Both };

        void SomeMethod(Action action)
        {
            if (action == Action.DoThis || action == Action.Both) DoThis();
            if (action == Action.DoThat || action == Action.Both) DoThat();
        }

    At this point, I’ve removed the runtime exception in favor of a compile-time check. This is a trivial example, but is just one of many ideas that I’ve taken from one language and implemented in another.
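    As one more example of carrying an idea back to the day-job language, Haskell-style laziness has a rough counterpart in C# iterators: a sequence built with yield return is only computed as far as it is consumed. The sketch below is an added illustration, not code from the original post:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class LazySequences
        {
            // Conceptually infinite sequence; nothing runs until it is enumerated.
            static IEnumerable<long> Naturals()
            {
                for (long i = 0; ; i++)
                    yield return i;
            }

            static void Main()
            {
                // Only the first five values are ever computed,
                // even though the sequence has no end.
                foreach (var n in Naturals().Take(5))
                    Console.WriteLine(n);
            }
        }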

    Read the article

  • Installing Ubuntu Server 12.04 as a software RAID 1 mirror fails to boot

    - by Jeff Atwood
    I'm installing a few new Ubuntu Server 12.04 LTS servers, and they have two 512 GB SSDs. I want them to use software RAID 1 mirroring, so I was following this document religiously step by step: https://help.ubuntu.com/12.04/serverguide/advanced-installation.html To summarize the above official documentation: to set up a software RAID 1 mirror in Ubuntu Server, you choose manual partitioning during the setup, and do this on each drive: "swap" partition of roughly RAM size "physical volume for RAID" partition for remaining drive size After that, you set up the RAID 1 mirror using the RAID partitions on drive A and B, make it ext4 and containing the root filesystem partition. Setup continues from there just fine. One caveat: I was completely unable to select the "physical volume for RAID" as bootable. When I tried to do that in setup, it had no effect: I could press enter on the "make bootable" option all day long and nothing would ever change. However, after install successfully completes, I have a big problem: the system won't boot! I get Reboot and Select proper boot device or Insert Boot Media in selected Boot device and press a key What did I do wrong? Why can't I mark that "physical volume for RAID" partition bootable during Ubuntu Server setup? Is there some way for me to make the physical volumes for RAID bootable after the fact, perhaps from a live CD or something?

    Read the article

  • Reasons for Pair Programming

    - by Jeff Langemeier
    I've worked in a few shops where management has passed the idea of pair programming either to me or another manager/developer, and I can't get behind it at all. From a developer standpoint I can't find a reason why moving to this coding style would be beneficial, nor as a manager of a small team have I seen any benefit. I understand that it helps with basic syntax errors and can be helpful if you need to hash something out, but managers that are out of the programming loop seem to keep seeing it more as a way of keeping their designers from going to Facebook or Reddit than as a design tool. As someone close to the development floor who apparently can't quite understand it from a book tossed my way or a wiki page on the subject... from a high-level management position, what are the benefits of pair programming when dealing with Scrum or Agile environments?

    Read the article

  • Ubuntu 11.10 momentarily freezes every couple minutes

    - by Jeff Welling
    I'm working on an Acer Aspire One D257 running Ubuntu 11.10, and every few minutes the laptop freezes up for a second or two, before becoming responsive again. This doesn't really present a huge problem, but when I'm trying to code and all of a sudden vim stops responding for a couple of seconds while I'm trying to modify my code, it's increasingly frustrating. The odd part is, if I play a movie in VLC at the same time, when vim pauses the video keeps playing just fine (VLC's priority is not modified). I'm wondering if anyone knows why this may be happening, or failing that, how I might be able to track down the source of my frustrating pauses. Normally I would try using top, but the pauses are shorter than 2 seconds so nothing unusual shows up in top as consuming the CPU. Does anyone know how to troubleshoot intermittent, repeating one-second pauses in vim?

    Read the article

  • Non-Full-Screen Application Launcher in Gnome 3?

    - by Jeff
    I'm trying out Gnome Shell in 11.04. You can push the "activities" key to get into the overview where you can launch applications. I like the idea of the overview as an aid in switching focus, but it's too slow for just launching an application. Is Gnome Shell going to implement, or does it have, a way to launch applications (besides the alt-f2 command launcher) that is as quick as Gnome-do? edit: To be clear, I'm aware of several Gnome application launchers. I'm curious about the Gnome Shell and any packages it includes (not add-ons like Gnome-do or Synapse).

    Read the article

  • 11.10 had sound, 12.04 doesn't. Acer Aspire One D257

    - by Jeff Welling
    I'm wondering if anyone else has had this same problem, because if so it might be worthwhile for me to file a bug, but it might also be chaos monkey, so I wanted to check first. I have an Acer Aspire One D257 and the sound worked in 11.10 by default, but after doing a fresh install of 12.04 there is no sound. There used to be a speaker icon on the menu bar but now there isn't, and the volume up and volume down hotkeys now do nothing, which leads me to believe it isn't detecting the sound card properly anymore. Googling for sound problems on an Acer Aspire One D257 on 12.04 isn't showing any helpful results, so I'm wondering if it's just me. Does anyone have an AAO D257 on 12.04 with the sound working, and if so, did you have to do anything special to make it work?

    Read the article

  • What are options for 3rd Party Centralized Software Settings Management?

    - by Jeff Martin
    I am an architect in an enterprise looking to build a SaaS solution. Our products are distributed over many different deployable containers, Web Services, Web UIs, etc. I am looking for some open-source or 3rd party software solution to manage the settings of our application. These would be similar to the settings you might find in Word or Eclipse or Visual Studio. The settings would control various behaviors and features of the product (probably not settings like which database to connect to, but more like whether to show line numbers on the page by default). Ideally, we would be able to store values for different dimensions (by tenant, by user, by application environment... ). Because we have so many different deployables, I am looking for a centralized solution that can provide a web service that each of the deployables can get their individual settings from. Does anyone know of a centralized service providing this sort of feature, or can anyone give me some help in searching for an alternative to rolling our own?

    Read the article

  • Portable class libraries and fetching JSON

    - by Jeff
    After much delay, we finally have the Windows Phone 8 SDK to go along with the Windows 8 Store SDK, or whatever ridiculous name they’re giving it these days. (Seriously… that no one could come up with a suitable replacement for “metro” is disappointing in an otherwise exciting set of product launches.) One of the neat-o things is the potential for code reuse, particularly across Windows 8 and Windows Phone 8 apps. This is accomplished in part with portable class libraries, which allow you to share code between different types of projects. With some other techniques and quasi-hacks, you can share some amount of code, and I saw it mentioned in one of the Build videos that they’re seeing as much as 70% code reuse. Not bad.

    However, I’ve already hit a super annoying snag. It appears that the HttpClient class, with its idiot-proof async goodness, is not included in the Windows Phone 8 class libraries. Shock, gasp, horror, disappointment, etc. The delay in releasing it already caused dismay among developers, and I’m sure this won’t help. So I started refactoring some code I already had for a Windows 8 Store app (ugh) to accommodate the use of HttpWebRequest instead. I haven’t tried it in a Windows Phone 8 project beyond compiling, but it appears to work. I used this StackOverflow answer as a starting point since it’s been a long time since I used HttpWebRequest, and keep in mind that it has no exception handling. It needs refinement.

    The goal here is to new up the client, and call a method that returns some deserialized JSON objects from the Intertubes. Adding facilities for headers or cookies is probably a good next step. You need to use NuGet for a Json.NET reference. So here’s the start:

        using System.Net;
        using System.Threading.Tasks;
        using Newtonsoft.Json;
        using System.IO;

        namespace MahProject
        {
            public class ServiceClient<T> where T : class
            {
                public ServiceClient(string url)
                {
                    _url = url;
                }

                private readonly string _url;

                public async Task<T> GetResult()
                {
                    var response = await MakeAsyncRequest(_url);
                    var result = JsonConvert.DeserializeObject<T>(response);
                    return result;
                }

                public static Task<string> MakeAsyncRequest(string url)
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    request.ContentType = "application/json";
                    Task<WebResponse> task = Task.Factory.FromAsync(
                        request.BeginGetResponse,
                        asyncResult => request.EndGetResponse(asyncResult),
                        null);
                    return task.ContinueWith(t => ReadStreamFromResponse(t.Result));
                }

                private static string ReadStreamFromResponse(WebResponse response)
                {
                    using (var responseStream = response.GetResponseStream())
                    using (var reader = new StreamReader(responseStream))
                    {
                        var content = reader.ReadToEnd();
                        return content;
                    }
                }
            }
        }

    Calling it in some kind of repository class may look like this, if you wanted to return an array of Park objects (Park model class omitted because it doesn’t matter):

        public class ParkRepo
        {
            public async Task<Park[]> GetAllParks()
            {
                var client = new ServiceClient<Park[]>("http://superfoo/endpoint");
                return await client.GetResult();
            }
        }

    And then from inside your WP8 or W8S app (see what I did there?), when you load state or do some kind of UI event handler (making sure the method uses the async keyword):

        var parkRepo = new ParkRepo();
        var results = await parkRepo.GetAllParks();
        // bind results to some UI or observable collection or something

    Hopefully this saves you a little time.
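    For comparison, on targets where the HttpClient class is available, roughly the same client collapses to a few lines. This is an added illustrative sketch (including the class name), not code from the post:

        // Illustrative only: assumes HttpClient and Json.NET are referenced.
        using System.Net.Http;
        using System.Threading.Tasks;
        using Newtonsoft.Json;

        public class HttpClientServiceClient<T> where T : class
        {
            private readonly string _url;

            public HttpClientServiceClient(string url)
            {
                _url = url;
            }

            public async Task<T> GetResult()
            {
                // GetStringAsync does the request/response plumbing that
                // HttpWebRequest forces us to write by hand above.
                using (var client = new HttpClient())
                {
                    var json = await client.GetStringAsync(_url);
                    return JsonConvert.DeserializeObject<T>(json);
                }
            }
        }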

    Read the article

  • Should I use parentheses in logical statements even where not necessary?

    - by Jeff Bridgman
    Let's say I have a boolean condition a AND b OR c AND d and I'm using a language where AND has a higher operator precedence than OR. I could write this line of code: If (a AND b) OR (c AND d) Then ... But really, that's equivalent to: If a AND b OR c AND d Then ... Are there any arguments for or against including the extraneous parentheses? Does practical experience suggest that it is worth including them for readability? Or is it a sign that a developer needs to really sit down and become confident in the basics of their language?
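    For a concrete rendering of the same condition in C#, where && binds tighter than ||, the two forms below always evaluate identically. This snippet is an added illustration, not part of the original question:

        using System;

        class PrecedenceDemo
        {
            static void Main()
            {
                bool a = true, b = false, c = false, d = true;

                // && has higher precedence than ||, so the parentheses in the
                // first expression are redundant.
                bool withParens = (a && b) || (c && d);
                bool withoutParens = a && b || c && d;

                Console.WriteLine(withParens == withoutParens); // True
            }
        }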

    Read the article

  • Geekswithblogs.net | Screen Resolutions of our Readers

    - by Jeff Julian
    Yesterday I talked about the Browsers we see being used by our readers, driven off of our Google Analytics traffic, and today I want to share with you the Screen Resolutions we see.  As a web developer most of my life, I find it hard to decide how large you should build your application, because typically you have a couple of huge high-resolution monitors on your desk, but your typical end user is thought to have 1024x768.  With HTML5/CSS3 out, it is a little easier to come up with a design that will scale to all resolutions, but it is still nice to know the numbers when it comes to how much real estate I have on my clients. If you look at these numbers for Geekswithblogs.net, we have a lot of users with high-resolution monitors visiting the site.  After a little more investigation of the numbers, you will notice we do not have as much height available as we do width.  If the primary goal of a site is to deliver as much data as possible in the viewable area without scrolling, this becomes a challenge when most of our pages have long pieces of formatted data.  So our challenge is to build skins that use more of the sides of the content toward the top on larger-resolution browsers, and then entice the reader to scroll to get the goodies embedded in the content of the posts.  Going to be an interesting battle for sure, but we really need more skin offerings on the site. Technorati Tags: Resolution Statistics, Geekswithblogs.net

    Read the article

  • Software Center doesn't ask for a password anymore

    - by Jeff
    So, out of the blue, software-center stopped asking me for a password. It just runs, and then turns grey. It works fine as root, or with sudo. While investigating, I found out about polkit (new to me) and looked at the policies, which seem fine. Looking under localauthority, however, showed that while the sub-directories (10-, 20-, 30-, 50-, 90-) are there, there aren't any files under those. Is that my problem? Should there be a file in 50-local.d? Or am I still looking in the wrong place for my problem? I looked for similar questions and looked at the answers, but they don't really help any. One other thing, and I'm not sure it's related, but it seemed to happen around the same time: the Dash Home only shows items for recent files and downloads. Nothing anywhere else anymore.

    Read the article

  • Dropping the full-time high-pay gig - I need help choosing a smart path that I can rely on to produce enough to survive comfortably ($2,500 per month)

    - by Jeff V
    I have about 6 years of full-time experience developing web applications and tools. I know Perl, Python, PHP, Ruby, and a good deal of SQL and relational theory. I have never had to choose a self-employed path, as I have always had full-time work or a bank account (credit cards) to support a big project. I'm planning to move out of the country to an area that will not offer local employment, and need some advice on what to focus on. I want to move in no more than six months; I have enough savings to live for an additional six months, but I would like to conserve it as much as possible. I enjoy taking risks, so I'm not looking for discussion of whether this is a good idea or not. I want advice on the most reliable solution given my skill set. Some paths I'm considering: learn Objective-C and build quality Apple software; develop subscription-based web tools for SEO or other marketing applications; or attempt to acquire freelance projects by developing a reputation within open source projects, freelancer.com, and other online communities. The last time I left my job, I was building a startup (that went under), and missed out on living in a beautiful place due to the amount of time I worked. I would like to work 30-40 hours per week max. I can dedicate 10-15 hours per week while at my current job to prepare and learn. A preemptive thanks for the advice...

    Read the article

  • Are there any adverse side effects to loading html5shiv in every browser?

    - by Jeff
    On the html5shiv Google Code page the example usage includes an IE conditional: <!--[if lt IE 9]> <script src="dist/html5shiv.js"></script> <![endif]--> However, on the html5shiv GitHub page, the description explains: This script is the defacto way to enable use of HTML5 sectioning elements in legacy Internet Explorer, as well as default HTML5 styling in Internet Explorer 6 - 9, Safari 4.x (and iPhone 3.x), and Firefox 3.x. An obvious contradiction. So to satisfy my curiosity, for anyone who has studied the code, are there any adverse side effects to loading html5shiv in every browser (without the IE conditional)? EDIT: My goal, obviously, is to use the shiv without the IE conditional.

    Read the article

  • Should accessible members of an internal class be internal too?

    - by Jeff Mercado
    I'm designing a set of APIs for some applications I'm working on. I want to keep the code style consistent in all the classes I write, but I've found that there are a few inconsistencies that I'm introducing and I don't know what the best way to resolve them is. My example here is specific to C#, but this would apply to any language with similar mechanisms. There are a few classes that I need for implementation purposes that I don't necessarily want to expose in the API, so I make them internal wherever needed. Generally what I would do is design the class as I normally would (e.g., make members public/protected/private where necessary) and change the visibility level of the class itself to internal. So I might have a few classes that look like this:

        internal interface IMyItem
        {
            ItemSet AddTo(ItemSet set);
        }

        internal class _SmallItem : IMyItem
        {
            private readonly /* parameters */;

            public _SmallItem(/* small item parameters */) { /* ... */ }
            public ItemSet AddTo(ItemSet set) { /* ... */ }
        }

        internal abstract class _CompositeItem : IMyItem
        {
            private readonly /* parameters */;

            public _CompositeItem(/* composite item parameters */) { /* ... */ }
            public abstract object UsefulInformation { get; }
            protected void HelperMethod(/* parameters */) { /* ... */ }
        }

        internal class _BigItem : _CompositeItem
        {
            private readonly /* parameters */;

            public _BigItem(/* big item parameters */) { /* ... */ }
            public override object UsefulInformation { get { /* ... */ } }
            public ItemSet AddTo(ItemSet set) { /* ... */ }
        }

    In another generated class (part of a parser/scanner), there is a structure that contains fields for all possible values it can represent. The class generated is internal too, but I have control over the visibility of the members and decided to make them internal as well.

        internal partial struct ValueType
        {
            internal string String;
            internal ItemSet ItemSet;
            internal IMyItem MyItem;
        }

        internal class TokenValue
        {
            internal static int EQ(ItemSetScanner scanner) { /* ... */ }
            internal static int NAME(ItemSetScanner scanner, string value) { /* ... */ }
            internal static int VALUE(ItemSetScanner scanner, string value) { /* ... */ }
            //...
        }

    To me, this feels odd because in the first set of classes, I didn't necessarily have to make some members public; they very well could have been made internal. Internal members of an internal type can only be accessed internally anyway, so why make them public? I just don't like the idea that the way I write my classes has to change drastically (i.e., change all uses of public to internal) just because the class is internal. Any thoughts on what I should do here? It makes sense to me that I might want to make some members of a class declared public, internal. But it's less clear to me when the class is declared internal.
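    To make the accessibility point concrete: within the defining assembly, a public member and an internal member of an internal type are equally reachable, and neither is visible outside it. The sketch below is an illustration added here, not code from the question:

        // Both members below can be called from anywhere inside this assembly,
        // and neither can be seen outside it, because the type itself is internal.
        internal class Widget
        {
            public void DoPublicThing() { }
            internal void DoInternalThing() { }
        }

        internal static class Caller
        {
            internal static void Use()
            {
                var w = new Widget();
                w.DoPublicThing();    // fine
                w.DoInternalThing();  // equally fine within the assembly
            }
        }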

    Read the article

  • Can a site recover by itself after dropping Google page rank for 404 errors?

    - by Jeff
    I recently redid a website and changed the directory / URL structure. I did some .htaccess redirects for the main landing pages; however, when reviewing Webmaster Tools, I received 404 errors for the rest of the changed URLs and noticed that Google had dropped my site from the #1 position to around the 5th page. I corrected all the 404s by providing redirects in the .htaccess, resubmitted the site map, and tested the Google crawl bot. Will my page regain its rank by itself, or am I going to have to put some time into it like I originally did?

    Read the article

  • The best computer ever

    - by Jeff
    (This is a repost from my personal blog… wow… I need to write more technical stuff!) About three years and three months ago, I bought a 17" MacBook Pro, and it turned out to be the best computer I've ever owned. You might think that every computer with better specs is automatically better than the last, but that hasn't been my experience.

    My first one was a Sony, back in the Pentium III days, and it cost an astonishing $2,500. That was even more ridiculous in 1999 dollars. It had a dial-up modem, and a CD-ROM, built-in! It may have even played DVDs. A few years later I bought an HP, and it ended up being a pile of shit. The power connector inside came loose from the board, and on occasion would even short. In 2005, I bought a Dell, and it wasn't bad. It had a really high resolution screen (complete with dead pixels, a problem in those days), and it was the first laptop I felt I could do real work on. When 2006 rolled around, Apple started making computers with Intel CPUs, and I bought the very first one the week it came out. I used Boot Camp to run Windows. I still have it in its box somewhere, and I used it for three years.

    The current 17" was new in 2009. The goodness was largely rooted in having a big screen with lots of dots. This computer has been the source of hundreds of blog posts, tens of thousands of lines of code, video and photo editing, and of course, a whole lot of Web surfing. It connected to corpnet at Microsoft, WiFi in Hawaii, and has presented many a deck. It has traveled with me tens of thousands of miles.

    Last year, I put a solid state drive in it, and it was like getting a new computer. I can boot up a Windows 7 VM in about 19 seconds. Having 8 gigs of RAM has always been fantastic. Everything about it has been fast and fun. When new, the battery (when not using VMs) could get as much as 10 hours. I can still do 7 without much trouble. After 460 charge cycles, the battery health is still between 85 and 90%.

    The only real negative has been the size and weight. It's only an inch thick, but naturally it's pretty big with a 17" screen. You don't get battery life like that without a huge battery, either, so it's heavy. It was never a deal breaker, but sometimes on a long haul across a large airport, you know you're carrying it.

    Today, Apple announced a new, thinner and lighter 15" laptop, with twice the RAM and CPU cores, and four times the screen resolution. It basically handles my size and weight issues while retaining the resolution, and it still costs less than my 17" did. So I ordered one. Three years is an excellent run, but I kind of budgeted for a new workhorse this year anyway.

    So if you're interested in a 17" MacBook Pro with a Core 2 Duo 2.66 GHz CPU, 8 gigs of RAM and a 320 gig hard drive (sorry, I'm keeping the SSD), I have one to sell. They've apparently discontinued the 17", which is going to piss off the video community. It's in excellent condition, with a few minor scratches, but I take care of my stuff.

    Read the article

  • Creating an Ubuntu live USB for use with Gparted

    - by Jeff
    I've installed Ubuntu 12.04 on my Windows 7 Dell laptop. Recently I discovered that I'm running out of space on my Ubuntu partition, and I would like to enlarge it. Is it safe to resize partitions while they're in use, e.g. when I'm logged into Ubuntu? If so, I've run into this problem when I run GParted: it seems as if my hard drive is one big NTFS partition, like the Ubuntu partition doesn't exist. Is it possible Ubuntu runs off the NTFS partition, sharing it with Windows? What should I do?

    Read the article

  • After Upgrading to 12.04 the Kernel won't Initiate

    - by Jeff
    I had 11.10 and tried to upgrade recently, via the installer pop-up reminder. Afterwards it would not boot, citing an issue with the kernel. So I've set up a 12.04 installation USB, which appears to work fine. The problem is that it doesn't provide an upgrade option, just format-and-install or install alongside the current broken kernel. I believe I should still be able to get my information from the broken OS after installing alongside it, but if there is a way to fix this more directly, that would be preferable.

    Read the article
