Search Results

Search found 1735 results on 70 pages for 'joe king'.

Page 5/70 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Comments on Comments

    - by Joe Mayo
    I almost tweeted a reply to Capar Kleijne's question about comments on Twitter, but realized that my opinion exceeded 140 characters. The following is based upon my experience with extremes and approaches that I find useful in code comments. There are a couple of extremes that I've seen and reasons why people go the distance in each approach. The most common extreme is no comments in the code at all. A few bad reasons why this happens are that a developer is in a hurry, sloppy, or interested in job preservation. The unfortunate result is that the code is difficult to understand and hard to maintain. The drawbacks to no comments in code are a primary reason why teachers drill the need for commenting code into our heads. This viewpoint assumes the lack of comments is bad because the code is bad, but there is another reason for not commenting that is gaining more popularity. I've heard and read that code should be self-documenting. Following this thought pattern, if code is well written with meaningful names, there should not be a reason for comments. An addendum to this argument is that comments are often neglected and get out-of-date, but the code is what is kept up-to-date. Presumably, if code contained very good naming, it would be easy to maintain. This is a noble perspective and I like the practice of meaningful naming of identifiers. However, I think it's also an extreme approach that doesn't cover important cases, i.e. if an identifier is named badly (subjective differences in opinion) or not changed appropriately during maintenance, then the badly named identifier is no more useful than a stale comment. These were the two no-comment extremes, so let's look at the too-many-comments extreme. On a regular basis, I'll see cases where the code is over-commented; not nearly as often as the no-comment scenarios, but still prevalent. These are examples of where every single line in the code is commented. These comments make the code harder to read because they get in the way of the algorithm. In most cases, the comments parrot what each line of code does. If a developer understands the language, then most statements are immediately intuitive, i.e. what use is it to say that I'm assigning foo to bar when it's clear what the code is doing? I think that over-commenting code is a waste of time that slows down initial development and maintenance. Understandably, the developer's intentions are admirable because they've had it beaten into their heads that they must comment. However, I think it's an extreme and prefer a more moderate approach. I don't think the extremes do justice to code because each can make maintenance harder. No comments on bad code is obviously a problem, but the other two extremes are subtle and require qualification to address properly. The problem I see with the code-as-documentation approach is that it doesn't lift the developer out of the algorithm to identify dependencies, intentions, and hacks. Any developer can read code and follow an algorithm, but they still need to know where it fits into the big picture of the application. Because of indirections with language features like interfaces, delegates, and virtual members, code can become complex. Occasionally, it's useful to point out a nuance or reason why a piece of code is there, i.e.
    If you're building an app that communicates via HTTP, you'll have certain headers to include for the endpoint, and it could be useful to point out why the code for setting those header values is there and how they affect the application. An argument against this could be that you should extract that code into a separate method with a meaningful name to describe the scenario. My problem with such an approach would be that your code base becomes even more difficult to navigate and work with because you have all of this extra code just to make the code more meaningful. My opinion is that a simple and well-stated comment stating the reasons and intention for the code is more natural and convenient to the initial developer and maintainer. I just don't agree with the approach of going out of the way to avoid making a comment. I'm also concerned that some developers would take this approach as an excuse to not comment their bad code. Another area where I like comments is documentation comments. Java has them, and so do C# and VB. They're convenient because we can build automated tools that extract these comments. These extracted comments are often much better than no documentation at all. The "go read the code" answer doesn't always fulfill the need for a quick summary of an API. To summarize, I think that the extremes of no comments and too many comments are less than desirable approaches. I prefer documentation comments to explain each class and member (API level) and code comments as necessary to supplement well-written code. Joe
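
    As a rough illustration of the balance described above, here is a short C# sketch (the class and member names are made up for this example, not taken from the post): an XML documentation comment at the API level, plus a single comment that records intent rather than parroting the statement.

        using System.Collections.Generic;

        /// <summary>
        /// Minimal client used only to illustrate API-level documentation comments.
        /// </summary>
        public class EndpointClient
        {
            private readonly Dictionary<string, string> headers = new Dictionary<string, string>();

            /// <summary>Prepares the headers the endpoint requires before any call is made.</summary>
            public void PrepareHeaders()
            {
                // The endpoint rejects requests without this header; that dependency is
                // the kind of non-obvious detail worth a comment.
                headers["X-Api-Version"] = "2";
            }
        }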

    Read the article

  • PASS Summit 2010 Recap

    - by AjarnMark
    Last week I attended my eighth PASS Summit in nine years, and every year it is a fantastic event!  I was fortunate my first year to have a contact (Bill Graziano (blog | Twitter) from SQLTeam) that I was expecting to meet, and who got me started on a good track of making new contacts.  Each year I have made a few more, and renewed friendships from years past.  Many of the attendees agree that the pure networking opportunities are one of the best benefits of attending the Summit.  And there’s a lot of great technical stuff, too, some of the things that stick out for me this year include… Pre-Con Monday: PowerShell with Allen White (blog | Twitter).  This was the first time that I attended a pre-con.  For those not familiar with the concept, the regular sessions for the conference are 75-90 minutes long.  For an extra fee, you can attend a full-day session on a single topic during a pre- or post-conference training day.  I had been meaning for several months to dive in and learn PowerShell, but just never seemed to find (or make) the time for it, so when I saw this was one of the all-day sessions, and I was planning to be there on Monday anyway, I decided to go for it.  And it was well worth it!  I definitely came out of there with a good foundation to build my own PowerShell scripts, plus several sample scripts that he showed which already cover the first four or five things I was planning to do with PowerShell anyway.  This looks like the right tool for me to build an automated version of our software deployment process, which right now contains many repeated steps.  Thanks Allen! Service Broker with Denny Cherry (blog | Twitter).  I remembered reading Denny’s blog post on Using Service Broker instead of Replication, and ever since then I have been thinking about using this to populate a new reporting-focused Data Repository that we will be building in the near future.  When I saw he was doing this session, I thought it would be great to get more information and be able to ask the author questions.  When I brought this idea back to my boss, he really liked it, as we had previously been discussing doing nightly data loads, with an option to manually trigger a mid-day load if up-to-the-minute data was needed for something.  If we go the Service Broker route, we can keep the Repository current in near real-time.  Hooray! DBA Mythbusters with Paul Randal (blog | Twitter).  Even though I read every one of the posts in Paul’s blog series of the same name, I had to go see the legend in person.  It was great, and I still learned something new! How to Conduct Effective Meetings with Joe Webb (blog | Twitter).  I always like to sit in on a session that Joe does.  I met Joe several years ago when both he and Bill Graziano were on the PASS Board of Directors together, and we have kept in touch.  Joe is very well-spoken and has great experience with both SQL Server and business.  And we could certainly use some pointers at my work (probably yours, too) on making our meetings more effective and to run on-time.  Of course, now that I’m the Chapter Leader for the Professional Development virtual chapter, I also had to sit in on this ProfDev session and recruit Joe to do a presentation or two for the chapter next year. Query Optimization with David DeWitt.  Anyone who has seen Dr. 
David DeWitt present the 3rd keynote at a PASS Summit over the last three years knows what a great time it is to sit and listen to him make some really complicated and advanced topic easy to understand (although it still makes your head hurt).  It still amazes me that the simple two-table join query from pubs that he used in his example can possibly have 22 million possible physical query plans.  Ouch! Exhibit Hall:  This year I spent more serious time in the exhibit hall than any year past.  I have talked my boss into making a significant (for us) investment in monitoring tools next year, and this was a great opportunity to talk with all the big-hitters.  Readers of mine may recall that I fell in love with the SQL Sentry Power Suite several months ago and wrote a blog entry about it just from the trial version.  Well as things turned out, short-term budget priorities shifted, and we weren’t able to make the purchase then.  I have it in the budget for next year, but since I was going to the Summit, my boss wanted me to look at the other options to see if this was really the one that we wanted.  I spent a couple of hours talking with representatives from Red-Gate, Idera, Confio, and Quest about their offerings, and giving them each the same 3 scenarios that I wanted to be able to accomplish based on the questions and issues that arise in our company.  It was interesting to discover the different approaches or “world view” that each vendor takes to the subject of performance monitoring and troubleshooting.  I may write a separate article that goes into this in more depth, but the product that best aligned with our point of view, and met the current needs we have is still the SQL Sentry Power Suite.  I’m not saying that the others are bad or wrong or anything like that, just that the way they tackled the issue did not align as well with our particular needs as does SQL Sentry’s product.  And that was something I learned too, when you go shopping for these products, you really need to know what you want to get from them.  It’s best if you have a few example scenarios from work that you can use to test out how well each tool fits your particular needs. Overall, another GREAT event.  I can’t wait to get the DVDs so I can sit in on a bunch of other sessions that I couldn’t get to because I was in one of the ones above.  And I can hardly wait until next year!

    Read the article

  • Is it Hard to Write a Blog?

    - by Joe Mayo
    Responding to a tweet I received, asking if I found it hard to write a blog and keep it interesting. This is one of the situations where a 140 character response doesn’t do a question justice. There’s a lot to think about between the subjects of writing, subject matter, and entertainment. Here’s my take on each of these three topics: There are all types of writing you can do, with various degrees of difficulty. If you’re writing a book and you have a gazillion editors bleeding over your every utterance, then the task becomes harder because you’re second-guessing yourself, not knowing whose opinion will be violated. However, if you’re communicating in a public forum, not too many people care about the grammar as much as whether what you have to say is correct. For a blog, I would say it’s somewhere in-between. Right now, I’m using Windows Live Writer, which gives me a few advantages over just typing into the blog editor, such as spelling correction and the ability to save my work and resume later. Overall, writing is one of those things that you just need to get used to. It’s an essential skill for developers because you need to document your work, depending on what your definition of proper documentation is, and communicate with other developers via various communication mediums. Not being good (or not thinking that you’re good) shouldn’t hold you back. Like most things in life, practice will improve your skill. So, push away that inner voice that keeps you from moving forward and just do it. A good grasp on the subject matter you’re writing about helps. However, don’t let a lack of knowledge stop you from writing about something. I recall reading something a while back by a developer who didn’t know a technology but wrote about their experience in learning it. They ended up learning more by expressing their thoughts in writing. If you look around at many blogs today, there are many items written by developers learning what they’re writing about. So, whether you are sure or unsure, you can still write – just be honest with yourself and your readers about what you’re writing. Also, don’t be afraid to have a different opinion or worry if someone will disagree. I’ll freely admit that it took a while for me to become accustomed to being criticized. Take the good with the bad and use the bad to make yourself better. Guaranteed, someone will disagree with one or more parts of what I’ve written here or think they have a better approach. No problem, more power to them, and whatever constructive comments they have will be a benefit to me in the future; otherwise, to h*ll with them. :)  Every time you get knocked down, get right back up, dust the dirt off your backside, and keep moving forward. You’ll learn in time how to align a subject with your own presentation of the material. Entertainment could be hard or could be natural, depending on your personality and that of your target audience. It’s even more challenging because you can say something you think is funny and someone will be offended. In fact, there are a lot of things that you shouldn’t say in the name of a joke, but I won’t mention any of them here for fear of offending anyone. Of course, I probably offended someone by saying that and there is probably an organization somewhere in the world out to get me now. I’m probably not the best person to be giving you advice on entertaining an audience. I mean, every time I try to tell a joke on Twitter 10 people unfriend me. Okay, maybe 15, but you get my point.
    One thing you might be interested in knowing is that it’s not too hard for one technical person to entertain other technical people, especially when the subject is of interest. It’s the excitement in each sentence and passion in each paragraph that will keep another developer entertained and interested in what you have to say. Not everyone will like what you’ve written, but the important part is to find your own voice, and it’s likely that there is one person in some corner of the world who likes what you have to say, even if it’s your mom and she doesn’t understand a single word you write. :)   If I could leave you with one final thought: just do it and don’t let anyone or anything hold you back.   Joe

    Read the article

  • A Gentle Introduction to NuGet

    - by Joe Mayo
    Not too long ago, Microsoft released NuGet, an automated package manager for Visual Studio. NuGet makes it easy to download and install assemblies, and their references, into a Visual Studio project. These assemblies, which I loosely refer to as packages, are often open source, and include projects such as LINQ to Twitter. In this post, I'll explain how to get started in using NuGet with your projects, including: installing NuGet, installing/uninstalling LINQ to Twitter via console command, and installing/uninstalling LINQ to Twitter via the graphical reference menu. Installing NuGet The first step you'll need to take is to install NuGet. Visit the NuGet site, at http://nuget.org/, click on the Install NuGet button, and download the NuGet.Tools.vsix installation file, shown below. Each browser is different (i.e. Firefox, Chrome, IE, etc.), so you might see options to run right away, save to a location, or access the file through the browser's download manager. Regardless of how you receive the NuGet installer, execute the downloaded NuGet.Tools.vsix to install NuGet into Visual Studio. The NuGet Footprint When you open Visual Studio, observe that there is a new menu option on the Tools menu, titled Library Package Manager; this is where you use NuGet. There are two menu options from the Library Package Manager menu that you can use: Package Manager Console and Package Manager Settings. I won't discuss Package Manager Settings in this post, except to give you a general idea that, as one of a set of capabilities, it manages the path to the NuGet server, which is already set for you. Another menu, added by the NuGet installer, is Add Library Package Reference, found by opening the context menu for either a Solution Explorer project or a project's References folder, or via the Project menu. I'll discuss how to use this later in the post. The following discussion is concerned with the other menu option, Package Manager Console, which allows you to manage NuGet packages. Getting a NuGet Package Selecting Tools -> Library Package Manager -> Package Manager Console opens the Package Manager Console. As you can see below, the Package Manager Console is text-based and you'll need to type in commands to work with packages. In this post, I'll explain how to use the Package Manager Console to install LINQ to Twitter, but there are many more commands, explained in the NuGet Package Manager Console Commands documentation. To install LINQ to Twitter, open your current project where you want LINQ to Twitter installed, and type the following at the PM> prompt: Install-Package linqtotwitter If all works well, you'll receive a confirmation message, similar to the following, after a brief pause: Successfully installed 'linqtotwitter 2.0.20'. Successfully added 'linqtotwitter 2.0.20' to NuGetInstall. Also, observe that a reference to the LinqToTwitter.dll assembly was added to your current project. Uninstalling a NuGet Package I won't be so bold as to assume that you would only want to use LINQ to Twitter because there are other Twitter libraries available; I recommend Twitterizer if you don't care for LINQ to Twitter. So, you might want to use the following command at the PM> prompt to remove LINQ to Twitter from your project: Uninstall-Package linqtotwitter After a brief pause, you'll see a confirmation message similar to the following: Successfully removed 'linqtotwitter 2.0.20' from NuGetInstall. Also, observe that the LinqToTwitter.dll assembly no longer appears in your project references list.
    Sometimes using the Package Manager Console is required for more sophisticated scenarios. However, LINQ to Twitter doesn't have any dependencies and is a very simple install, so you can use another method of installing graphically, which I'll show you next. Graphical Installations As explained earlier, clicking Add Library Package Reference, from the context menu for either a Solution Explorer project or a project's References folder or via the Project menu, opens the Add Library Package Reference window. This window will allow you to add a reference to a NuGet package in your project. To the left of the window are a few accordion folders to help you find packages that are either on-line or already installed. Just like the previous section, I'll assume you are installing LINQ to Twitter for the first time, so you would select the Online folder and click All. After waiting for package descriptions to download, you'll notice that there are too many to scroll through in a short period of time, over 900 as I write this. Therefore, use the search box located at the top right corner of the window and type LINQ to Twitter as I've done in the previous figure. You'll see LINQ to Twitter appear in the list. Click the Install button on the LINQ to Twitter entry. If the installation was successful, you'll see a message box display and disappear quickly (or maybe not if your machine is very fast or you blink at that moment). Then you'll see a reference to the LinqToTwitter.dll assembly in your project's references list. Note: While running this demo, I ran into an issue where VS had created a file lock on an installation folder without releasing it, causing an error with "packagename already exists. Skipping..." and then an error describing that it couldn't write to a destination folder. I resolved the problem by closing and reopening VS. If you open the Add Library Package Reference window again, you'll see LINQ to Twitter listed in the Recent packages folder. Summary You can install NuGet via the on-line home page with a click of a button. NuGet provides two ways to work with packages, via console or graphical window. While the graphical window is easiest, the console window is more powerful. You can now quickly add project references to many available packages via the NuGet service. Joe
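
    Pulling the console steps above together, a typical Package Manager Console session might look roughly like this (the exact version and the Get-Package listing format will vary; Get-Package simply lists the packages currently installed in the project):

        PM> Install-Package linqtotwitter
        Successfully installed 'linqtotwitter 2.0.20'.
        Successfully added 'linqtotwitter 2.0.20' to NuGetInstall.

        PM> Get-Package
        Id             Version
        --             -------
        linqtotwitter  2.0.20

        PM> Uninstall-Package linqtotwitter
        Successfully removed 'linqtotwitter 2.0.20' from NuGetInstall.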

    Read the article

  • JustCode Provides Reflector Alternative

    - by Joe Mayo
    If you've been a loyal Reflector user, you've probably been exposed to the debacle surrounding RedGate's decision to no longer offer a free version. Since then, the race has begun for a replacement from a provider that would stand by their promises to the community. Mono has an ongoing free alternative, which has been available for a long time. However, other vendors are stepping up to the plate with their own offerings. If Not Reflector, Then What? One of these vendors is Telerik. In their recent Q1 2011 release of JustCode, Telerik offers a decompilation utility rivaling what we've become accustomed to in Reflector. Not only does Telerik offer a usable replacement, but they've (in my opinion) produced a product that integrates more naturally with Visual Studio than any other product ever has. Telerik's decompilation process is so easy that the accompanying demo in this post is blindingly short (except for the presence of verbose narrative). If you want to follow along with this demo, you'll need to have Telerik JustCode installed. If you don't have JustCode yet, you can buy it or download a trial at the Telerik Web site. A Tall Tale; Prove It! With JustCode, you can view code in the .NET Framework or any other 3rd party library (that isn't well obfuscated). This demo depends on LINQ to Twitter, which you can download from CodePlex.com and create a reference, or install the package online as described in my previous post on NuGet. Regardless of the method, you'll have a project with a reference to LINQ to Twitter. Use a Console Project if you want to follow along with this demo. Note: If you've created a Console project, remember to ensure that the Target Framework is set to .NET Framework 4. The default is .NET Framework 4 Client Profile, which doesn't work with LINQ to Twitter. You can check by double-clicking the Properties folder on the project and inspecting the Target Framework setting. Next, you'll need to add some code to your program that you want to inspect. Here, I add code to instantiate a TwitterContext, which is like a LINQ to SQL DataContext, but works with Twitter: var l2tCtx = new TwitterContext(); If you're following along, add the code above to the Main method, which will look similar to this: using LinqToTwitter; namespace NuGetInstall { class Program { static void Main(string[] args) { var l2tCtx = new TwitterContext(); } } } The code above doesn't really do anything, but it does give me something to show and a way to demonstrate how JustCode decompilation works. Once the code is in place, click on TwitterContext and press the F12 (Go to Definition) key. As expected, Visual Studio opens a metadata file with prototypes for the TwitterContext class. Here's the result: Opening a metadata file is the normal way that Visual Studio works when navigating to the definition of a type where you don't have the code. The scenario with TwitterContext happens because you don't have the source code to the file. Visual Studio has always done this, and you can experiment by selecting any .NET type, i.e. a string type, and observing that Visual Studio opens a metadata file for the .NET String type. The point I'm making here is that JustCode works the way Visual Studio works, and you'll see how this can make your job easier. In the previous figure, you only saw prototypes associated with the code, i.e. notice that the default constructor is empty. Again, this is normal because Visual Studio doesn't have the ability to decompile code.
    However, that's the purpose of this post: showing you how JustCode fills that gap. To decompile code, right-click on TwitterContext in the metadata file and select JustCode Navigate -> Decompile from the context menu. The shortcut keys are Ctrl+1. After a brief pause, accompanied by a progress window, you'll see the metadata expand into full decompiled code. Notice below how the default constructor now has code as opposed to the empty member prototype in the original metadata: And Why is This So Different? Again, the big deal is that Telerik JustCode decompilation works in harmony with the way that Visual Studio works. The navigate-to functionality already exists, and you can use that, along with a simple context menu option (or shortcut key), to transform prototypes into decompiled code. Telerik is filling the Reflector/Red Gate gap by providing a supported alternative for decompiling code. Many people, including myself, used Reflector to decompile code when we were stuck with buggy libraries or insufficient documentation. Now we have an alternative that's officially supported by a company with an excellent track record for customer (developer) service, Telerik. Not only that, JustCode has several other IDE productivity tools that make the deal even sweeter. Joe

    Read the article

  • A Basic Thread

    - by Joe Mayo
    Most of the programs written are single-threaded, meaning that they run on the main execution thread. For various reasons such as performance, scalability, and/or responsiveness, additional threads can be useful. .NET has extensive threading support, from the basic threads introduced in v1.0 to the Task Parallel Library (TPL) introduced in v4.0. To get started with threads, it's helpful to begin with the basics: starting a Thread. Why Do I Care? The scenario I'll use for needing a thread is writing to a file. Sometimes, writing to a file takes a while and you don't want your user interface to lock up until the file write is done. In other words, you want the application to be responsive to the user. How Would I Go About It? The solution is to launch a new thread that performs the file write, allowing the main thread to return to the user right away. Whenever the file writing thread completes, it will let the user know. In the meantime, the user is free to interact with the program for other tasks. The following examples demonstrate how to do this. Show Me the Code? The code we'll use to work with threads is in the System.Threading namespace, so you'll need the following using directive at the top of the file: using System.Threading; When you run code on a thread, the code is specified via a method. Here's the code that will execute on the thread: private static void WriteFile() { Thread.Sleep(1000); Console.WriteLine("File Written."); } The call to Thread.Sleep(1000) delays thread execution. The parameter is specified in milliseconds, and 1000 means that this will cause the program to sleep for approximately 1 second. This method happens to be static, but that's just part of this example, which you'll see is launched from the static Main method. The method a thread runs can be either instance or static. Notice that the method does not have parameters and does not have a return type. As you know, the way to refer to a method is via a delegate. There is a delegate named ThreadStart in System.Threading that refers to a method without parameters or return type, shown below: ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile); I'll show you the whole program below, but the ThreadStart instance above goes in the Main method. The thread uses the ThreadStart instance, fileWriterHandlerDelegate, to specify the method to execute on the thread: Thread fileWriter = new Thread(fileWriterHandlerDelegate); As shown above, the argument type for the Thread constructor is the ThreadStart delegate type. The fileWriterHandlerDelegate argument is an instance of the ThreadStart delegate type. This creates an instance of a thread and specifies what code will execute, but the new thread instance, fileWriter, isn't running yet. You have to explicitly start it, like this: fileWriter.Start(); Now, the code in the WriteFile method is executing on a separate thread. Meanwhile, the main thread that started the fileWriter thread continues on its own. You have two threads running at the same time. Okay, I'm Starting to Get Glassy Eyed. How Does it All Fit Together? The example below is the whole program, pulling all the previous bits together. It's followed by its output and an explanation.
    using System; using System.Threading; namespace BasicThread { class Program { static void Main() { ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile); Thread fileWriter = new Thread(fileWriterHandlerDelegate); Console.WriteLine("Starting FileWriter"); fileWriter.Start(); Console.WriteLine("Called FileWriter"); Console.ReadKey(); } private static void WriteFile() { Thread.Sleep(1000); Console.WriteLine("File Written"); } } } And here's the output: Starting FileWriter Called FileWriter File Written So, Why are the Printouts Backwards? The output above corresponds to the Console.WriteLine statements in the program, with the second and third seemingly reversed. In a single-threaded program, "File Written" would print before "Called FileWriter". However, this is a multi-threaded (2 or more threads) program. In multi-threading, you can't make any assumptions about when a given thread will run. In this case, I added the Sleep statement to the WriteFile method to greatly increase the chances that the message from the main thread will print first. Without the Thread.Sleep, you could run this on a system with multiple cores and/or multiple processors and potentially get different results each time. Interesting Tangent, but What Should I Get Out of All This? Going back to the main point, launching the WriteFile method on a separate thread made the program more responsive. The file writing logic ran for a while, but the main thread returned to the user, as demonstrated by the printout of "Called FileWriter". When the file write finished, it let the user know via another print statement. This was a very efficient use of CPU resources that made for a more pleasant user experience. Joe
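
    The post mentions the Task Parallel Library (TPL) from .NET 4.0; for comparison, here is a rough sketch of the same example using a Task instead of creating a Thread directly. This is not from the original article, just an illustration of the TPL route.

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        namespace BasicTask
        {
            class Program
            {
                static void Main()
                {
                    Console.WriteLine("Starting FileWriter");

                    // Queues WriteFile onto the thread pool instead of creating a dedicated thread.
                    Task fileWriterTask = Task.Factory.StartNew(WriteFile);

                    Console.WriteLine("Called FileWriter");
                    Console.ReadKey();
                }

                private static void WriteFile()
                {
                    Thread.Sleep(1000); // simulate a slow file write
                    Console.WriteLine("File Written");
                }
            }
        }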

    Read the article

  • Windows Phone 8 Launch Event Summary

    - by Tim Murphy
    Today was the official coming out party for Windows Phone 8. Below is a summary of the launch event. There is a lot here, so stay with me. They started with a commercial starring Joe Belfiore showing how his Windows Phone 8 was personal to him, which highlights something I think Microsoft has done well over the last couple of events: spotlighting how Windows Phone is a different experience from other smartphones. Joe actually called iPhone and Android “tired old metaphors” and explained that the idea around Windows Phone was to “reinvent the smartphone around you” as “the most personal smartphone operating system”. This is the message that they need to drive home in their ads. The only real technical aspect we found out was that they have optimized the operating system around the dual-core Qualcomm Snapdragon chipset. It seems like all of the other hardware goodies had already been announced. The remainder of the event was centered around new features of the OS and app announcements. So what are we getting? The integrated features include the lock screen live tile, Data Sense, Rooms and Kids Corner. There wasn’t a lot of information about it, but Joe also talked about apps not just having live tiles, but being live apps that could integrate with the wallet and the hub. The lock screen will now be able to be personalized with live tile data or even a photo slide show. This gives the lock screen an even better ability to give you the information you want to know before you even unlock the phone. The Kids Corner allows you as a parent to set up an area on your phone that your kids can go into and use without disturbing your apps. They can play games or use apps that you have designated and will only see those apps. It even has a special lock screen gesture just for the Kids Corner. Rooms allow you to organize your phone around the groups of people in your life. You get a shared calendar and a room wall, as well as shared notes, beyond just being able to send messages to a group. You can also invite people not on the Windows Phone platform to access an online version of the room. Data Sense is a new feature that gives you better control and understanding of your data plan usage. You can see which applications are using data, and it can automatically adjust the way your phone behaves as you get close to your data limit. Add to these features the fact that the entire Windows ecosystem is integrated with SkyDrive and you have an available-anywhere experience that is unequaled by any other platform. Your documents, photos and music are available on your Windows Phone, Windows 8 device and Xbox. SkyDrive also doesn’t limit how long you can keep files like the competing cloud platforms do, and it gives you more free storage. It was interesting the way they made the launch event more personal. First Joe brought out his own kids to demo the Kids Corner. They followed this up by bringing out Jessica Alba to discuss her experience on the Windows Phone 8. They need to keep putting a face on the product instead of just showing features as a cold list. Then we get to apps. We knew that the new Skype was coming, but we found out that it was created in such a way that it can receive calls without running constantly in the background, which would eat up battery. This announcement was followed by the coming Facebook app that is optimized for Windows Phone 8. As a matter of fact, they indicated that just after launch the marketplace would have 46 out of the top 50 apps used across all smartphone platforms.
    In a rational world, this, combined with the over 120,000 apps currently in the marketplace, should put to rest any argument about the Windows Phone ecosystem. Those of us who develop for Windows Phone and weren’t on the early adoption program will finally get access to the SDK tomorrow, after an announcement at Build (more waiting). Perhaps we will get a few new features then. In the end I wouldn’t say there were any huge surprises, but I am really excited about getting my hands on the devices next month and starting to develop. Stay tuned. del.icio.us Tags: Windows Phone,Windows Phone 8,Windows Phone 8 Launch,Joe Belfiore,Jessica Alba

    Read the article

  • google docs + web app

    - by King
    Hi guys, I am trying to create a web app to share docs with all editor features (just like Google Docs). My main requirements for this app are as follows: 1. Should have all editor features (can be done using the OpenOffice API, Google Docs API, or Microsoft Office Web Apps API) 2. Should be shared between multiple users and can be edited by multiple users, with other sync features (can be done using the Google Docs API or Microsoft Office Web Apps) 3. Can save the document created and edited on my own/custom server addr. (Which API can support this? I know OpenOffice can support this) Can you please suggest one API which can be used to do all of the above. Also, please point out if I am underestimating any API above regarding any functionality that I think is not supported. Thanks King

    Read the article

  • How do I convert my matrix from OpenGL to Marmalade?

    - by King Snail
    I am using a third party rendering API, Marmalade, on top of OpenGL code and I cannot get my matrices correct. One of the API's authors states this: We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default. In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third party rendering API? I don't understand what they mean by rotating 180 degrees around Z: is that in the view matrix itself, or something in the camera before making the view matrix? Any code would be helpful, thanks.

    Read the article

  • Which user account should be used for WSGIDaemonProcess?

    - by Nathan S
    I have some Django sites deployed using Apache2 and mod_wsgi. When configuring the WSGIDaemonProcess directive, most tutorials (including the official documentation) suggest running the WSGI process as the user in whose home directory the code resides. For example: WSGIScriptAlias / /home/joe/sites/example.com/mod_wsgi-handler.wsgi WSGIDaemonProcess example.com user=joe group=joe processes=2 threads=25 However, I wonder if it is really wise to run the wsgi daemon process as the same user (with its attendant privileges) which develops the code. Should I set up a service account whose only privilege is read-only access to the code in order to have better security? Or are my concerns overblown?
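
    One possible arrangement, sketched below with an illustrative account name (wsgi-svc is not from the question), is a dedicated low-privilege user for the daemon process while the code stays owned and writable by the developer:

        # Hypothetical sketch: the daemon runs as a dedicated service account that
        # only needs read access to the code; "joe" remains the owner of the files.
        WSGIScriptAlias / /home/joe/sites/example.com/mod_wsgi-handler.wsgi
        WSGIDaemonProcess example.com user=wsgi-svc group=wsgi-svc processes=2 threads=25
        WSGIProcessGroup example.com

        <Directory /home/joe/sites/example.com>
            Order allow,deny
            Allow from all
        </Directory>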

    Read the article

  • Node.js MMO - process and/or map division

    - by Gipsy King
    I am in the phase of designing an MMO browser-based game (certainly not massive, but all connected players are in the same universe), and I am struggling with finding a good solution to the problem of distributing players across processes. I'm using node.js with socket.io. I have read this helpful article, but I would like some advice since I am also concerned with different processes. Solution 1: Tie a process to a map location (like a map-cell), and connect players to the process corresponding to their location. When a player performs an action, transmit it to all other players in this process. When a player moves away, he will eventually have to connect to another process (automatically). Pros: Easier to implement Cons: Must divide map into zones Player reconnection when moving into a different zone is probably annoying If one zone/process is always busy (has players in it), it doesn't really load-balance, unless I split the zone, which may not always be viable There shouldn't be any visible borders Solution 1b: Same as 1, but connect processes of bordering cells, so that players on the other side of the border are visible and such. Maybe even let them interact. Solution 2: Spawn processes on demand, unrelated to a location. Have one special process to keep track of all connected player handles, their location, and the process they're connected to. Then when a player performs an action, the process finds all other nearby players (from the special player-process-location tracking node), and instructs their matching processes to relay the action. Pros: Easy load balancing: spawn more processes Avoids player reconnecting / borders between zones Cons: Harder to implement and test Additional steps of finding players, and relaying events/actions to another process If the player-location-process tracking process fails, all others fail too I would like to hear if I'm missing something, or completely off track.

    Read the article

  • Snow Leopard .htaccess issue

    - by Joe.Cianflone
    Hello, I'm running into an issue on Snow Leopard. I am just using the standard Apache2 that came with it, but it doesn't seem to want to use my .htaccess file. Here is the appropriate part of my httpd.conf file: <Directory /> Options FollowSymLinks AllowOverride All AuthConfig Order deny,allow Deny from all </Directory> And here is my .htaccess file: Options +FollowSymlinks RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php/$1 [L,QSA] I'm sure I'm doing something stupid, but at this point, I just can't see it! All it's doing is allowing me to not have the index.php file; this worked on Leopard and isn't working in Snow Leopard. What am I missing? Thanks Joe C

    Read the article

  • Databinding to an Entity Framework in WPF

    - by King Chan
    Is it good to use databinding to Entity Framework entities in WPF? I created a singleton Entity Framework context so that there is only one connection and it won't open and close all the time. That way I can pass the Entity around to any class, and can modify the Entity and make changes to the database. All ViewModels getting the entity out of the same Context and databinding it to the View saves me time from mapping new objects, but now I imagine there is a problem in not using the newest Context: a ViewModel is databinding to an Entity, then someone else updates the data. The ViewModel will still display the old data, because the Context is never disposed and refreshed. If I always create a new Context and then dispose of it, and I want to pass the Entity around, then there will be conflicts between the Context and the Entity. What is the suggested way of doing this?
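
    For contrast, here is a rough sketch of the short-lived-context alternative mentioned above. MyEntities, Customers, and the property names are illustrative only, not taken from the question; the idea is simply one context per unit of work so each operation sees current data.

        // Hypothetical sketch: create the context, do one unit of work, dispose it.
        using (var ctx = new MyEntities())
        {
            var customer = ctx.Customers.First(c => c.CustomerId == selectedId);
            customer.Name = editedName;
            ctx.SaveChanges();   // persist, then let the context go away
        }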

    Read the article

  • Should each app have its own database, or should small apps be merged into one?

    - by King
    We have a bunch of small to medium sized apps, each of which has its own database (MSSQL Server). There was a suggestion that we consolidate the 'related' databases into a smaller number of larger databases. They don't particularly share a lot of data; they would just be under a similar business group. For example, using a 'Finance' DB to hold the tables and procedures for finance apps. Would it be appropriate to use a different schema for each app? E.g. App1.SomeTable App1.SomeOtherTable AppTwo.SomeTable What are the pros and cons of this approach? What should I watch out for? Thanks
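
    If the per-app schema route is taken, a rough T-SQL sketch of the idea looks like the following (App1User is an illustrative principal, not from the question); one advantage is that permissions can be scoped to each app's own schema:

        -- Hypothetical sketch: one schema per app inside a shared database.
        CREATE SCHEMA App1 AUTHORIZATION dbo;
        GO
        CREATE TABLE App1.SomeTable (Id INT PRIMARY KEY, Name NVARCHAR(100));
        GO
        -- App1's login only gets rights to its own schema, not AppTwo's objects.
        GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::App1 TO App1User;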

    Read the article

  • What do these "Cron Daemon" email errors mean?

    - by Meltemi
    Anyone know what this means? Getting one of these every minute in one user's inbox: From: Cron Daemon <[email protected]> Subject: Cron <joe@mail> /tmp/.d/update >/dev/null 2>&1 To: [email protected] Received: from murder ([unix socket]) by mail.domain.com (Cyrus v2.2.12-OS X 10.3) with LMTPA; Tue, 04 May 2010 10:35:00 -0700 shell-init: could not get current directory: getcwd: cannot access parent directories: Permission denied job-working-directory: could not get current directory: getcwd: cannot access parent directories: Permission denied

    Read the article

  • Aggregating Excel cell contents that match a label [migrated]

    - by Josh
    I'm sure this isn't a terribly difficult thing, but it's not the type of question that easily lends itself to internet searches. I've been assigned a project for work involving a complex spreadsheet. I've done the usual =SUM and other basic Excel formulas, and I've got enough coding background that I'm able to at least fudge my way through VBA, but I'm not certain how to proceed with one part of the task. Simple version: On Sheet 1 I have a list of people (one on each row, person's name in column A), on sheet 2 I have a list of groups (one on each row, group name in column A). Each name in Sheet 1 has its own row, and I have a "Data Validation" dropdown menu where you choose the group each person belongs to. That dropdown is sourced from Sheet 2, where each group has a row. So essentially the data validation source for Sheet 1's "Group" column is just "=Sheet2!$a1:a100" or whatever. The problem is this: I want each group row in Sheet 2 to have a formula which results in a list of all the users which have been assigned to that group on Sheet 1. What I mean is something the equivalent of "select * from PeopleTab where GROUP = ThisGroup". The resulting cell would just stick the names together like "Bob Smith, Joe Jones, Sally Sanderson" I've been Googling for hours but I can't think of a way to phrase my search query to get the results I want. Here's an example of desired result (Dash-delimited. Can't find a way to make it look nice, table tags don't seem to work here): (Sheet 1) Bob Smith - Group 1 (selected from dropdown) Joe Jones - Group 2 (selected from dropdown) Sally Sanderson - Group 1 (selected from dropdown) (Sheet 2) Group 1 - Bob Smith, Sally Sanderson (result of formula) Group 2 - Joe Jones (result of formula) What formula (or even what function) do I use on that second column of sheet 2 to make a flat list out of the members of that group?
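
    Since the question mentions being able to fudge through VBA, one possible approach is a small user-defined function along these lines. This is only a sketch; the function name, argument order, and ranges are made up for illustration, and it assumes the names are in Sheet1 column A with the chosen group in column B:

        ' Hypothetical VBA UDF: concatenate every name whose group matches groupName.
        ' Example use on Sheet2: =NamesInGroup(A1, Sheet1!$B$1:$B$100, Sheet1!$A$1:$A$100)
        Function NamesInGroup(groupName As String, groupCells As Range, nameCells As Range) As String
            Dim i As Long
            Dim result As String
            For i = 1 To groupCells.Rows.Count
                If groupCells.Cells(i, 1).Value = groupName Then
                    If Len(result) > 0 Then result = result & ", "
                    result = result & nameCells.Cells(i, 1).Value
                End If
            Next i
            NamesInGroup = result
        End Function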

    Read the article

  • FFmpeg not recording audio during screen capture

    - by King
    I'm using the script below to run FFmpeg on Ubuntu 10.10. I followed these instructions to install FFmpeg & x264. While ffmpeg does capture the screen it does not capture the mic audio. I've checked that the mic works via "System Preferences". Anyone have any ideas on what the problem(s) could be and suggestions on how to resolve this issue? Thanks. ffmpeg -f alsa -ac 2 -i hw:0,0 -f x11grab -r 30 -s $(xwininfo -root | grep 'geometry' | awk '{print $2;}') -i :0.0 -acodec pcm_s16le -vcodec libx264 -vpre lossless_ultrafast -threads 0 -y screen-capture.mkv

    Read the article

  • Programmatically clicking an HTML button by vb.net [closed]

    - by Chauhdry King
    I have to programmatically click an HTML button which is on the 3rd page of the website. The button has no id; it has just name, type and value. The HTML code of the button is given below <FORM NAME='form1' METHOD='post' action='/dflogin.php'> <INPUT TYPE='hidden' NAME='txtId' value='E712050-15'><INPUT TYPE='hidden' NAME='txtassId' value='1'><INPUT TYPE='hidden' NAME='txtPsw' value='HH29'><INPUT TYPE='hidden' NAME='txtLog' value='0'><h6 align='right'><INPUT TYPE='SUBMIT' NAME='btnSub' value='Next' style='background-color:#009900; color:#fff;'></h6></FORM> I am using the following code to click it Dim i As Integer Dim allButtons As HtmlElementCollection allButtons = WebBrowser1.Document.GetElementsByTagName("input") i = 0 For Each webpageelement As HtmlElement In allButtons i += 1 If i = 5 Then webpageelement.InvokeMember("click") End If Next But I am not able to click it. I am using the VB.NET 2008 platform. Can anyone tell me the solution to click it?
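
    One variation worth trying, sketched here rather than taken from the question, is to look the button up by its name attribute instead of counting input elements, assuming the document has finished loading:

        ' Hypothetical sketch: find the submit button by name and click it.
        Dim buttons As HtmlElementCollection = WebBrowser1.Document.All.GetElementsByName("btnSub")
        If buttons.Count > 0 Then
            buttons(0).InvokeMember("click")
        End If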

    Read the article

  • Simultaneous read/write to RAID array slows server to a crawl

    - by Jeff Leyser
    Fairly beefy NFS/SMB server (32GB RAM, 2 Xeon quad cores) with LSI MegaRAID 8888ELP controlling 12 drives configured into 3 different arrays. 5 2TB drives are grouped into a RAID 6 array. As expected, write performance to the array is slow. However, sustained, simultaneous read/write to the array (whether through NFS or done locally) seems to practically block any other access to anything else on the controller. For example, if I do: cp /home/joe/BigFile /home/joe/BigFileCopy where BigFile is 20G, then even a simple ls /home/jane will take many tens of seconds to complete. In addition, an ls /backup will also take many tens of seconds, even though /backup is a different array on the same controller. As soon as the cp is done, everything is back to normal. cp /home/joe/BigFile /backup/BigFile does not exhibit this behavior. It's only when doing read/write to the same array.

    Read the article

  • View matrix question (rotate by 180 degrees)

    - by King Snail
    I am using a third party rendering API on top of OpenGL code and I cannot get my matrices correct. The API states this: We're right handed by default, and we treat y as up by convention. Since IwGx's coordinate system has (0,0) as the top left, you typically need a 180 degree rotation around Z in your view matrix. I think the viewer does this by default. In my OpenGL app I have access to the view and projection matrices separately. How can I convert them to fit the criteria used by my third party rendering API? I don't understand what they mean by rotating 180 degrees around Z: is that in the view matrix itself, or something in the camera before making the view matrix? Any code would be helpful, thanks.

    Read the article

  • Need suggestion for Multiple Windows application design

    - by King Chan
    This was previously posted on StackOverflow; I just moved it here... I am using VS2008, MVVM, WPF, and Prism to make a multiple-window CRM application. I am using MidWindow in my MainWindow. I want any ViewModel to be able to make a request to MainWindow to create/add/close a MidChildWindow, ChildWindow (from WPF Toolkit), or Window (the Window type). A ViewModel can get the DialogResult from the ChildWindow it executes. MainWindow has control over all opened window types. Here is my current approach: I made a Dictionary for each of the window types and store them in the MainWindow class. For 1, i.e. in a CustomerInformationView, its CustomerInformationViewModel can execute EditCommand and use EventAggregator to tell MainWindow to open a new ChildWindow. CustomerInformationViewModel: CustomerEditView ceView = new CustomerEditView(); CustomerEditViewModel ceViewModel = new CustomerEditViewModel(); ceView.DataContext = ceViewModel; ChildWindow cWindow = new ChildWindow(); cWindow.Content = ceView; MainWindow.EvntAggregator.GetEvent<NewWindowEvent>().Publish(new WindowEventArgs(ceViewModel.ViewModeGUID, cWindow )); cWindow.Show(); Notice that all my ViewModels generate a Guid to help identify the ChildWindow in MainWindow's dictionary, since I will only be using one View and one ViewModel for every Window. For 2, in CustomerInformationViewModel I can get the DialogResult via the OnClosing event from the ChildWindow, and CustomerEditViewModel can use the Guid to tell MainWindow to close the ChildWindow. Here are a few questions and problems: Is it a good idea to use a Guid here? Or should I use the HashKey from ChildWindow? My MainWindow contains window reference collections, so whenever a window closes, it gets notified to remove the window from the collection via the OnClosing event. But the Window itself doesn't know about its associated Guid, so when I remove it, I have to search through every KeyValuePair to compare... I still feel it's kind of wrong to associate the ViewModel's Guid with the ChildWindow; it would make more sense if the ChildWindow had its own ID that the ViewModel associates with... But most important, is there any better approach to this design? How can I improve it?

    Read the article

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from github (https://github.com/puppetlabs/puppetlabs-apache): Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter identifier at /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40 on node zordon.mydomain.com Warning: Not using cache on failed catalog Error: Could not retrieve catalog; skipping run My node config looks like: node 'zordon.mydomain.com' { include template::common include template::puppetagent include template::lamp User::Create sudo::conf { 'joe': priority = 60, content = 'joe ALL=(ALL) NOPASSWD: ALL', require = User::Create['joe'], } } The template::lamp class is what uses apache module: class template::lamp { include myfirewall Firewall Firewall class { 'apache': } class { 'apache::mod::php': } class { 'apache::mod::ssl': } class { 'mysql::server': } } It looks like serverfault markup is getting garbled on Puppet realize statements. The User::Create and Firewall lines are just realizing a user and 2 firewall rules. I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter and it is the same MD5 as the server. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?

    Read the article

  • How to detect which edges of a rectangle touch when they collide in iOS

    - by Mike King
    I'm creating a basic "game" in iOS 4.1. The premise is simple: there is a green rectangle ("disk") that moves/bounces around the screen, and a red rectangle ("bump") that is stationary. The user can move the red "bump" by touching another coordinate on the screen, but that's irrelevant to this question. Each rectangle is a UIImageView (I will replace them with some kind of image/icon once I get the mechanics down). I've gotten as far as detecting when the rectangles collide, and I'm able to reverse the direction of the green "disk" on the Y axis if they do. This works well when the green "disk" approaches the red "bump" from the top or bottom; it bounces off in the other direction. But when it approaches from the side, the bounce is incorrect; I need to reverse the X direction instead. Here's the timer I set up: - (void)viewDidLoad { xSpeed = 3; ySpeed = -3; gameTimer = [NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(mainGameLoop:) userInfo:nil repeats:YES]; [super viewDidLoad]; } Here's the main game loop: - (void) mainGameLoop:(NSTimer *)theTimer { disk.center = CGPointMake(disk.center.x + xSpeed, disk.center.y + ySpeed); // make sure the disk does not travel off the edges of the screen // magic number values based on size of disk's frame // startAnimating causes the image to "pulse" if (disk.center.x < 55 || disk.center.x > 265) { xSpeed = xSpeed * -1; [disk startAnimating]; } if (disk.center.y < 55 || disk.center.y > 360) { ySpeed = ySpeed * -1; [disk startAnimating]; } // check to see if the disk collides with the bump if (CGRectIntersectsRect(disk.frame, bump.frame)) { NSLog(@"Collision detected..."); if (! [disk isAnimating]) { ySpeed = ySpeed * -1; [disk startAnimating]; } } } So my question is: how can I detect whether I need to flip the X speed or the Y speed? i.e. how can I calculate which edge of the bump was collided with?

    Read the article

  • How to get password prompt from scp when launched remotely via ssh

    - by Zek
    When I ssh to a remote system and execute scp, I do not get a password prompt: # ssh 192.168.1.32 "scp joe\@192.168.1.31:/etc/hosts /tmp" Permission denied, please try again. Permission denied, please try again. Permission denied (publickey,password,keyboard-interactive). If I break it up like this, it works fine: # ssh 192.168.1.32 # scp joe\@192.168.1.31:/etc/hosts /tmp [email protected]'s password: How can I make it prompt me for the password in the first example above? Note: No, I cannot use key-based authentication for this.

    Read the article

  • Algorithmic Forecasting and Pattern Recognition

    - by Ryan King
    Say a user could enter project data into my software. Each project has two variables, "size" and "work", and they're related, but the relationship is not known. Is there a way to programmatically determine the relationship between the variables based on previous data, and forecast the amount of work if given only the size of a future project? For example, say the user had manually entered the following projects: Project 1 - Size:1, Work: 4 Project 2 - Size:2, Work: 7 Project 3 - Size:3, Work: 10 Project 4 - Size:4, Work: x What should I look into to be able to programmatically determine that Work = Size*3+1 and therefore be able to say that x=13?
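
    For a relationship like the one in this example, ordinary least-squares linear regression recovers the slope and intercept from the previous data. A rough C# sketch (illustrative only, and only sensible when the relationship really is close to linear):

        using System;
        using System.Linq;

        class Forecast
        {
            static void Main()
            {
                // Previously entered projects: (size, work)
                double[] size = { 1, 2, 3 };
                double[] work = { 4, 7, 10 };

                // Simple least-squares fit: work = slope * size + intercept
                double meanX = size.Average();
                double meanY = work.Average();
                double slope = size.Zip(work, (x, y) => (x - meanX) * (y - meanY)).Sum()
                             / size.Sum(x => (x - meanX) * (x - meanX));
                double intercept = meanY - slope * meanX;

                // For this data: slope = 3, intercept = 1, so a size-4 project forecasts work of 13.
                Console.WriteLine("work ~ {0} * size + {1}", slope, intercept);
                Console.WriteLine("forecast for size 4: {0}", slope * 4 + intercept);
            }
        }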

    Read the article
