Search Results

Search found 19606 results on 785 pages for 'the thing'.


  • Using the ASP.NET Membership API with SQL Server / SQL Azure: The new “System.Web.Providers” namespace

    - by Harish Ranganathan
    The Membership API came in .NET 2.0 and was a huge enhancement for building web applications with users, roles, permissions and so on. By default the Membership API uses SQL Express, and until Visual Studio 2008 it was available only through the ASP.NET Configuration screen (Website – ASP.NET Configuration, or Project – ASP.NET Configuration); for every application you had to manually visit that screen to start using the security and other settings. Upon doing that, the default SQL Express database aspnet.mdf is created to store all the user profiles.

    Starting with Visual Studio 2010 and .NET 4.0, the default website template includes the Membership API controls as part of the page. When you create a “File – New – ASP.NET Web Application” or an “ASP.NET MVC Application”, the Login/Register controls are enabled in the MasterPage by default, and they are wired to the “ApplicationServices” setting in the web.config file, whose connection string points to the SQL Express database. In fact, when you run the default website, click “LogOn” –> “Register”, enter the registration details and click “Register”, that is when the aspnet.mdf file is created with the tables for Users, Roles, UsersInRoles, Profile and so on. This uses the default SQL Express database within the App_Data folder. If you want to move your Membership information to some other database such as SQL Server, SQL CE or SQL Azure, you need to manually run the aspnet_regsql command and specify the destination database name. This creates all the tables, procedures and views required to handle the Membership information. Thereafter you can change the “ApplicationServices” connection string to point to the database where you ran the scripts.

    Now, enter “System.Web.Providers” Alpha. This is available as a NuGet package. Scott Hanselman has a neat post describing the steps required to get it up and running, as well as the basic changes, at http://www.hanselman.com/blog/IntroducingSystemWebProvidersASPNETUniversalProvidersForSessionMembershipRolesAndUserProfileOnSQLCompactAndSQLAzure.aspx. It covers pretty much what the new System.Web.Providers do. One thing I wanted to clarify is that the new “System.Web.Providers” add a lot of new settings, which are also marked as the defaults, in the web.config. Even now, they use SQL Express as the default database. But if you change the “DefaultConnection” connection string under connectionStrings to point to your SQL Server or SQL Azure database, the Membership API is now able to create all the tables, procedures and views at the destination you specified (i.e. SQL Server or SQL Azure).

    In my case, I modified DefaultConnection to point to my SQL Azure database. Next, I hit F5 to run the application, and the default view loaded. I clicked “LogOn” and then “Register”, since I knew there were no tables or users yet. One thing to note is that I had put “NewDB” as the database name in the connection string that points to SQL Azure. NewDB did not exist yet, and I assumed it would be created before the tables, views and procedures for Membership. Once I clicked “Register” to register my first username, it took a while, then registered the user and logged me in.

    I also went to the SQL Azure Management Portal and verified that “NewDB” had indeed just been created. Connecting to the SQL Azure database “NewDB” from Management Studio, I found that the tables no longer have the aspnet_ prefix; they are simply Users, Roles, UsersInRoles, Profiles and so on. So, with a few clicks and a configuration change, I could set up the user base for my application on SQL Azure and even have SessionState, Roles and Profiles stored in the SQL Azure database. Note that the new System.Web.Providers also require the MARS setting (MultipleActiveResultSets=true) in the connection string, since they use Entity Framework for the DAL operations. The “Project – ASP.NET Configuration” screen can still be used to create and manage users, roles and so on, even though the data is stored in the remote database. With that, a long-pending request from the community is fulfilled: the ability to configure and use remote databases for application user management without having to run the aspnet_regsql scripts by hand. Cheers!
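    If you would rather verify the provider from code than click through the Register page, the standard Membership and Roles APIs go against whichever database the “DefaultConnection” string points to. Below is a minimal sketch, not taken from the original post; the user name, password and role name are made-up values, and it assumes the role manager is enabled in web.config.

        using System.Web.Security;   // Membership, Roles, MembershipCreateStatus

        public static class MembershipSmokeTest
        {
            // Call this from a controller action or Page_Load once the
            // connection string points at the remote database.
            public static void CreateFirstUser()
            {
                MembershipCreateStatus status;
                MembershipUser user = Membership.CreateUser(
                    "testuser",                   // hypothetical user name
                    "P@ssw0rd!2011",              // hypothetical password
                    "testuser@example.com",
                    null, null, true, out status);

                if (status == MembershipCreateStatus.Success)
                {
                    if (!Roles.RoleExists("Members"))          // hypothetical role name
                        Roles.CreateRole("Members");
                    Roles.AddUserToRole(user.UserName, "Members");
                }
            }
        }

    As with the Register click described above, the first membership call that touches the provider is what triggers the schema creation on the remote database.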

    Read the article

  • Talking JavaOne with Rock Star Simon Ritter

    - by Janice J. Heiss
    Oracle’s Java Technology Evangelist Simon Ritter is well known at JavaOne for his quirky and fun-loving sessions, which this year include:

    CON4644 -- “JavaFX Extreme GUI Makeover” (with Angela Caicedo, on how to improve UIs in JavaFX)
    CON5352 -- “Building JavaFX Interfaces for the Real World” (Kinect gesture tracking and mind reading)
    CON5348 -- “Do You Like Coffee with Your Dessert?” (Some cool demos of Java on the Raspberry Pi)
    CON6375 -- “Custom JavaFX Charts” (How to extend JavaFX Chart controls with some interesting things)

    I recently asked Ritter about the significance of the Raspberry Pi, the topic of one of his sessions: a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools. “I don't think there's one definitive thing that makes the RP significant,” observed Ritter, “but a combination of things that really makes it stand out. First, it's the cost: $35 for what is effectively a completely usable computer. OK, so you have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM processor is also significant, as it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. Combine these two things with the immense groundswell of community support and it provides a fantastic platform for teaching young and old alike about computing, which is the real goal of the project.” He informed me that he’ll be at the Raspberry Pi meetup on Saturday (not part of JavaOne). Check out the details here.

    JavaFX Interfaces

    When I asked about how JavaFX can interface with the real world, he said that there are many ways. “JavaFX provides you with a simple set of programming interfaces that can create complex, cool and compelling user interfaces,” explained Ritter. “Because it's just Java code you can combine JavaFX with any other Java library to provide data to display and control the interface. What I've done for my session is look at some of the possible ways of doing this using some of the amazing hardware that's available today at very low cost. The Kinect sensor has added a new dimension to gaming in terms of interaction; there's a Java API to access this so you can easily collect skeleton tracking data from it. Some clever people have also written libraries that can track gestures like swipes, circles, pushes, and so on. We use these to control parts of the UI. I've also experimented with a Neurosky EEG sensor that can in some ways ‘read your mind’ (well, at least measure some of the brain functions like attention and meditation). I've written a Java library for this that I include as a way of controlling the UI. We're not quite at the stage of just thinking a command though!”

    Here Comes Java Embedded

    And what, from Ritter’s perspective, is the most exciting thing happening in the world of Java today? “I think it's seeing just how Java continues to become more and more pervasive,” he said. “One of the areas that is growing rapidly is embedded systems. We've talked about the ‘Internet of things’ for many years; now it's finally becoming a reality. With the ability of more and more devices to include processing, storage and networking we need an easy way to write code for them that's reliable, has high performance, and is secure. Java fits all these requirements. With Java Embedded being a conference within a conference, I'm very excited about the possibilities of Java in this space.” Check out Ritter’s sessions or say hi if you run into him. Originally published on blogs.oracle.com/javaone.

    Read the article

  • Dude, what’s up with POP Forums vNext?

    - by Jeff
    Yeah, it has been awhile. I posted v9.2 back in January, about five months ago. That’s a real change from the release pace I had there for awhile. Let me explain what’s going on. First off, in the interim, I re-launched CoasterBuzz, which required a lot of my attention for about two of those months. That’s a good thing though, because that site is just about the best test bed I could ask for. The other thing is that I committed to making the next version use ASP.NET MVC 4, which is now at the RC stage. I didn’t think much about when they’d hit their RTW point, but RC is good enough for me. To that end, there is enough change in the next version that I recently decided to make it a major version upgrade, and finish up the loose ends and science projects to make it whole. Here’s what’s in store…

    Mobile views: I sat on this for a long time. Originally, I was going to use jQuery Mobile, and waited and waited for a new release, but in the end, decided against using it. Sometimes buttons would inexplicably not work, I felt like I was fighting it at times, and the CSS just felt too heavy. I rolled my own mobile sugar at a fraction of the size, and I think you’ll find it easy to modify. And it’s Metro-y, of course!

    Re-do of background services: A number of things run in the background, and I did quite a bit of “reimagining” of that code. It’s the weirdness of running services in a Web site context, because so many folks can’t run a bona fide service on their host’s box. The biggest change here is that these services no longer start up by default. You’ll need to call a new method from global.asax called PopForumsActivation.StartServices(). This is also a precursor to running the app in a Web farm (a new data layer and caching is the second part of that). I learned about this the hard way when I had three apps using the forum library code but only one was actually the forum. The services were all running three times as often, with race conditions and hits on the same data. That was particularly bad for e-mail.

    CSS clean up: It’s still not ideal, but it’s getting better. That’s one of those things that comes with integrating with a real site… you discover all of the dumb things you did. The mobile CSS in particular is easier to live with.

    Bug fixes: There are a whole lot of them. Most were minor, but it’s feeling pretty solid now.

    So that’s where I am. I’m going to call it v10.0, and I’m going to really put forth some effort toward finishing the mobile experience and getting through the remaining bugs. The roadmap beyond that will likely not be feature oriented, but rather work on some other things, like making it run in Azure, perhaps using SQL CE, a better install experience, etc. As usual, I’ll post the latest here. Stay tuned!
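    For anyone wiring the new version up, the note about background services translates to a one-line call in global.asax. A minimal sketch follows; only PopForumsActivation.StartServices() itself is named in the post, so the placement in Application_Start and the surrounding class are my assumption, and the using directive depends on the namespace the library exposes.

        using System;
        using System.Web;
        // plus a using for whatever namespace exposes PopForumsActivation

        public class Global : HttpApplication
        {
            protected void Application_Start(object sender, EventArgs e)
            {
                // Background services (search indexing, mailer, etc.) no longer
                // start by default; opt in only in the app that is actually the
                // forum, so other apps sharing the library don't run them too.
                PopForumsActivation.StartServices();
            }
        }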

    Read the article

  • Building a Mafia…TechFest Style

    - by David Hoerster
    It’s been a few months since I last blogged (not that I blog much to begin with), but things have been busy.  We all have a lot going on in our lives, but I’ve had one item that has taken up a surprising amount of time – Pittsburgh TechFest 2012.  After the event, I went through some minutes of the first meetings for TechFest, and I started to think about how it all came together.  I think what inspired me the most about TechFest was how people from various technical communities were able to come together and build and promote a common event.  As a result, I wanted to blog about this to show that people from different communities can work together to build something that benefits all communities.  (Hopefully I've got all my facts straight.)  TechFest started as an idea Eric Kepes and myself had when we were planning our next Pittsburgh Code Camp, probably in the summer of 2011.  Our Spring 2011 Code Camp was a little different because we had a great infusion of some folks from the Pittsburgh Agile group (especially with a few speakers from LeanDog).  The line-up was great, but we felt our audience wasn’t as broad as it should have been.  We thought it would be great to somehow attract other user groups around town and have a big, polyglot conference. We started contacting leaders from Pittsburgh’s various user groups.  Eric and I split up the ones that we knew about, and we just started making contacts.  Most of the people we started contacting never heard of us, nor we them.  But we all had one thing in common – we ran user groups who’s primary goal is educating our members to make them better at what they do. Amazingly, and I say this because I wasn’t sure what to expect, we started getting some interest from the various leaders.  One leader, Greg Akins, is, in my opinion, Pittsburgh’s poster boy for the polyglot programmer.  He’s helped us in the past with .NET Code Camps, is a Java developer (and leader in Pittsburgh’s Java User Group), works with Ruby and I’m sure a handful of other languages.  He helped make some e-introductions to other user group leaders, and the whole thing just started to snowball. Once we realized we had enough interest with the user group leaders, we decided to not have a Fall Code Camp and instead focus on this new entity. Flash-forward to October of 2011.  I set up a meeting, with the help of Jeremy Jarrell (Pittsburgh Agile leader) to hold a meeting with the leaders of many of Pittsburgh technical user groups.  We had representatives from 12 technical user groups (Python, JavaScript, Clojure, Ruby, PittAgile, jQuery, PHP, Perl, SQL, .NET, Java and PowerShell) – 14 people.  We likened it to a scene from a Godfather movie where the heads of all the families come together to make some deal.  As a result, the name “TechFest Mafia” was born and kind of stuck. Over the next 7 months or so, we had our starts and stops.  There were moments where I thought this event would not happen either because we wouldn’t have the right mix of topics (was I off there!), or enough people register (OK, I was wrong there, too!) or find an appropriate venue (hmm…wrong there, too) or find enough sponsors to help support the event (wow…not doing so well).  Overall, everything fell into place with a lot of hard work from Eric, Jen, Greg, Jeremy, Sean, Nicholas, Gina and probably a few others that I’m forgetting.  We also had a bit of luck, too.  
But in the end, the passion that we had to put together an event that was really about making ourselves better at what we do really paid off. I’ve never been more excited about a project coming together than I have been with Pittsburgh TechFest 2012.  From the moment the first person arrived at the event to the final minutes of my closing remarks (where I almost lost my voice – I ended up being diagnosed with bronchitis the next day!), it was an awesome event.  I’m glad to have been part of bringing something like this to Pittsburgh…and I’m looking forward to Pittsburgh TechFest 2013.  See you there!

    Read the article

  • Wipe, Delete, and Securely Destroy Your Hard Drive’s Data the Easy Way

    - by The Geek
    Giving a computer to somebody else? Maybe you’re putting it out on Craigslist to sell to a stranger. Either way, you’ll want to make sure that your drive is completely wiped, scrubbed, and clean of any personal data. Here’s the easy way to do it.

    If you only have access to an Ubuntu Live CD or thumb drive, you can use that instead if you prefer, and we’ve got you covered with a full guide to securely wiping your PC’s hard drive. Otherwise, keep reading.

    Wipe the Drive with DBAN

    Darik’s Boot and Nuke CD is the easiest way to permanently and totally destroy every bit of personal information on that drive; nobody is going to recover a thing once this is done. The first thing you’ll need to do is download a copy of the ISO image, and then burn it to a blank CD with something really useful like ImgBurn. Just choose Burn image to Disc at the start screen, select the little file icon, grab the downloaded ISO, and then go. If you need a little more help, we’ve got you covered with a beginner’s guide to burning an ISO image.

    Once you’re done, stick the disc into the drive, start the PC up, and once you boot to the DBAN prompt you’ll see a menu. You can pretty much ignore everything on here and just type:

    autonuke

    And there you are, your disk is now being securely wiped. Once it’s all done, you can remove the CD, and then either pack the PC up to sell, or re-install Windows on there if you feel like it.

    More Advanced Method

    If you’re really paranoid, want to run a different type of wipe, or just like fiddling with the options, you can choose F3 or hit Enter at the prompt to head to the advanced selection screen. Here you can choose exactly which drive to wipe, or hit the M key to change the method. You’ll be able to choose between a bunch of different wipe options, though the Quick Erase is all you really need.

    So there you are, easy PC wiping in one package. What about you? Do you make sure to wipe your old PCs before giving them away? Personally I’ve always just yanked out the hard drives before I got rid of an old PC, but that’s just me.

    Download DBAN from dban.org

    Read the article

  • Book Review: Programming Windows Identity Foundation

    - by DigiMortal
    Programming Windows Identity Foundation by Vittorio Bertocci is currently the only serious book available about Windows Identity Foundation. I started using Windows Identity Foundation when I made my first experiments with the Windows Azure AppFabric Access Control Service. I wanted to generalize the way people authenticate themselves to my systems, and AppFabric ACS seemed like a good place to start. My first steps trying to get things to work opened the door to a whole new authentication world for me. As I went through different blog postings and articles to get more information, I realized that the thing I was trying to use was exactly what I was looking for. Having found the best security API for .NET, I wanted to know more about it, and that is how I found Programming Windows Identity Foundation.

    What’s inside?

    Programming WIF focuses on the architecture, design and implementation of WIF. I think Vittorio is very good at teaching people, because you will not find overly complex topics in the book. You learn more and more as you read, and as a bonus you can try out your new knowledge of WIF immediately. After giving a good overview of WIF, the author moves on and introduces how to use WIF in ASP.NET applications. You get the complete picture of how WIF integrates into the ASP.NET request processing pipeline and how you can control the process yourself. There are two chapters about ASP.NET: the first is more of an introduction, and the second goes deeper and deeper until you have a very good idea of how to use ASP.NET and WIF together, what issues you may face and how you can configure and extend WIF. Two other chapters cover using WIF with Windows Communication Foundation (WCF) and Windows Azure. The WCF chapter expects that you know WCF very well; it is not an introductory chapter for beginners, and it is heavy reading if you are not familiar with WCF. The chapter about Windows Azure describes how to use WIF in cloud applications. The last chapter talks about some future developments of WIF and describes some problems and their solutions; the most interesting part of this chapter is the section about Silverlight.

    Who should read this book?

    Programming WIF is targeted at developers. It does not matter if you are a beginner or an old bullet-proof professional – every developer should be able to read this book with no difficulties. I don’t recommend this book to administrators and project managers, because they will find almost nothing related to their work. I strongly recommend it to all developers who are interested in modern authentication methods on the Microsoft platform. The book is written so well that I almost forgot everything around me while I was reading it. All the additional tools you need are free, and there is also an Azure AppFabric ACS test version available that you can try out for free.

    Table of contents

    Foreword
    Acknowledgments
    Introduction
    Part I: Windows Identity Foundation for Everybody
    1 Claims-Based Identity
    2 Core ASP.NET Programming
    Part II: Windows Identity Foundation for Identity Developers
    3 WIF Processing Pipeline in ASP.NET
    4 Advanced ASP.NET Programming
    5 WIF and WCF
    6 WIF and Windows Azure
    7 The Road Ahead
    Index
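    To give a flavour of the programming model the book teaches, this is roughly what reading claims looks like in a WIF-enabled ASP.NET application. It is a sketch from memory of the WIF 1.0 API (the Microsoft.IdentityModel assembly), not an excerpt from the book, so check the names against the SDK documentation.

        using System.Threading;
        using Microsoft.IdentityModel.Claims;   // WIF 1.0 claims types

        public static class ClaimsDump
        {
            public static void WriteClaims()
            {
                // After WIF has processed the token from the trusted issuer,
                // the current principal carries the issued claims.
                var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
                if (identity == null) return;

                foreach (Claim claim in identity.Claims)
                {
                    // Each claim is a statement from the issuer: name, role, e-mail, etc.
                    System.Diagnostics.Debug.WriteLine(claim.ClaimType + " = " + claim.Value);
                }
            }
        }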

    Read the article

  • A debugging experience with "highly compatible" ASP.NET 4.5

    - by Jeff
    I have to admit that I will pretty much upgrade software for no reason other than being on the latest version. I won't do it if it's super expensive (Adobe gets money from me about once every three or four years at best), but particularly with frameworks and stuff generally available as part of my MSDN subscription, I'll be bleeding edge. CoasterBuzz was running on the MVC 4 framework pretty much as soon as they did a "go live" license for it. I didn't really jump in head-first with Windows 8 and Visual Studio 2012, in part because I just wasn't interested in doing the reinstalls for each new version. Turns out there weren't that many revisions anyway. But when the final versions were released a week and a half ago, I jumped in. I saw on one of the Microsoft sites that .Net 4.5 was a "highly compatible in-place update" to the framework. Good enough for me. I was obviously running it by default in Windows 8, and installed it on my production server. I suppose it's "highly compatible," except when it isn't. Three of my sites are running with various flavors of the MVC version of POP Forums. All of them stopped working under ASP.NET 4.5. It was not immediately obvious what the problem might be beyond an exception indicating that there were no repository classes registered with Ninject, which I use for dependency injection in the forums. This was made all the more weird by the fact that it ran fine locally in the dev Web host. My first instinct was to spin up a Windows Server VM on my local box and put the remote debugger on it. (Side note: running multiple VM's on a Retina MacBook Pro with 16 gigs of RAM is pretty much the most awesome thing ever. I can't believe this computer is for real, and not a 50-pound tower under my desk.) What might have been going on in IIS that doesn't happen in Visual Studio? In the debugging process, I realized that I might be looking in the wrong place. POP Forums creates a Ninject container using a method called from a PreApplicationStartMethod attribute, and at that time registers a module (what Ninject uses to map interfaces to implementations) that maps all of the core dependencies. It also creates an instance of an HttpModule that originally hosted the "services" (search indexing, mailer, etc.), but now just records errors. That's all well and good, but the actual repository mapping, where data is actually read or persisted, happens in Application_Start() in global.asax. The idea there is that you can swap out the SqlSingleWebServer repos for something tuned for multiple servers, Oracle or something else. Of course, if I used something like StructureMap, which does convention-based mapping for dependency injection (a class implementing ISettingsRepository called SettingsRepository is automagically mapped), I wouldn't have to worry about it. In any case, the HttpModule, being instantiated before Application_Start() gets to run, would throw because there was no repo mapped where it could get settings from the database. This makes total sense. The fix is sort of a hack, where I don't setup the innards of the HttpModule until a call to its BeginRequest is made. I say it's a hack, because its primary function, logging exceptions, won't work until the app has warmed up. Still, this brings up an interesting question about the race condition, and what changed in 4.5 when it's running in IIS. 
In ASP.NET 4, it would appear that the code called via the PreApplicationStartMethod was either failing silently, and running again later, or it was getting to that code after Application_Start was called. In any case, weird thing. The real pain point I'm experiencing now is a bug in MVC 4 that is extremely serious because it renders the mobile/alternate view functionality very much broken.
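    For readers curious what that hack looks like in practice, here is a rough sketch of deferring a module’s setup until its first BeginRequest. The class and method names are invented for illustration; they are not the actual POP Forums types.

        using System;
        using System.Web;

        // Hypothetical error-logging module: it is registered very early (via
        // PreApplicationStartMethod), but it must not touch the repositories
        // until Application_Start has registered them in the container.
        public class DeferredErrorLoggingModule : IHttpModule
        {
            private static readonly object SyncRoot = new object();
            private static bool _initialized;

            public void Init(HttpApplication context)
            {
                context.BeginRequest += (sender, e) =>
                {
                    if (_initialized) return;
                    lock (SyncRoot)
                    {
                        if (_initialized) return;
                        // Safe now: Application_Start has already run, so the
                        // settings/error repositories can be resolved.
                        SetUpLogging();
                        _initialized = true;
                    }
                };
            }

            private static void SetUpLogging()
            {
                // resolve repositories from the DI container and hook up logging here
            }

            public void Dispose() { }
        }

    The trade-off, as the post says, is that exceptions thrown before the first request has warmed the app up will not be logged.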

    Read the article

  • P90X or How I Stopped Worrying and Love Exercise

    - by Matt Christian
    Last Wednesday, after many UPS delivery failures, I received P90X in the mail.  P90X is a series of DVD's and a nutrition guide you use to shed pounds and gain muscle.  Odds are you've seen the infomercial on TV at some point if you watch a little tube now and again.  I started last Thursday and am still standing to tell this tale. At it's core, P90X is a 12 DVD set of exercise videos.  Each video is comprised of a different workout routine that typically last around an hour (some up to 1 1/2 hours).  Every day you are supposed to do one of the workouts which are different every day (sometimes you may repeat a shorter 6 min workout dedicated to abs twice a week).  There are different 'programs' focused on different areas, for weight loss you do the Lean Program, standard weight loss and muscle gain do the Regular Program, and for those hardcore health-nuts, the Insane Program (which consists of 2 - 1 hour long exercises per day).  Each Program has a different set of workouts per week which you repeat for 3 weeks, followed by a 'Relaxation Week' which is essentially a slightly different order.  After the month of workouts is over, you've finished 1 phase out of 3.  P90X takes 90 days, split into 3 Phases (1 phase per month).  Every phase has a different workout order which is also focused on different areas (Weight Loss, Muscle Gain, etc...)  With the DVD's you also get a small glossy book of about 100 pages detailing the different workouts and the different programs as well as a sample workout to see if you're even ready to start P90X. The second part of P90X, which can also be considered the 'core' (actually the other half of the core) is the nutrition guide that is included.  The Nutrition Guide is a book similar to the one that defines the exercises (about 100 glossy pages) though it details foods you should eat, the amounts, and a number of healthy (and tasty!) recipes.  The guide is split up into 3 phases as well, promoting high protein and low carb/dairy at during Phase 1, and levelling off through to Phase 3 where you have a relatively balanced amount of every food group. So after 1 week where am I?  I've stuck quite close to the nutrition guide (there isn't 'diet food' in here people, it's ACTUALLY food) and done my exercise every day.  I think a lot of the first week is getting into the whole idea and learning the moves performed on the DVD.  Have I lost weight?  No.  Do I feel some definition already starting to poke out?  Absolutely (no pun intended). Tony Horton (the 51-year old hulk that runs the whole thing) is very fun to listen and work along with and the 'diet' really isn't too hard to follow unless all you eat is carbs.  I've tried the gym thing and could not get motivated enough to continue going.  P90X is the first time I've ached from a workout, BEFORE starting my next workout.  For anyone interested, Google 'P90X' or 'BeachBody' to find out more information about this awesome program!

    Read the article

  • SQL SERVER – Discard Results After Query Execution – SSMS

    - by pinaldave
    The first thing I do any day is to turn on the computer. Today I woke up and as soon as I turned on the computer I saw a chat message from a friend. He was a bit confused and wanted me to help him. Just as usual I am keeping the relevant conversation in focus and documenting our conversation as chat. Let us call him Ajit. Ajit: Pinal, every time I run a query there is no result displayed in the SSMS but when I run the query in my application it works and returns an appropriate result. Pinal:  Have you tried with different parameters? Ajit: Same thing. However, it works from another computer when I connect to the same server with the same query parameters? Pinal: What? That is new and I believe it is something to do with SSMS and not with the server. Send me screenshot please. Ajit: I believe so, let me send you a screenshot, Pinal: (looking at the screenshot) Oh man, there is no result-tab at all. Ajit: That is what the problem is. It does not have the tab which displays the result. This works just fine from another computer. Pinal: Have you referred Nakul’s blog post – SSMS – Query result options – Discard result after query executes, that talks about setting which can discard the query results after execution. (After a while) Ajit: I think it seems like on the computer where I am running the query my SSMS seems to have the option enabled related to discarding results. I fixed it by following Nakul’s blog post. Pinal: Great! Quite often I get the question what is the importance of the feature. Let us first see how to turn on or turn off this feature in SQL Server Management Studio 2012. In SSMS 2012 go to Tools >> Options >> Query Results > SQL Server >> Results to Grid >> Discard Results After Query Execution. When enabled this option will discard results after the execution. The advantage of disabling the option is that it will improve the performance by using less memory. However the real question is why would someone enable or disable the option. What are the cases when someone wants to run the query but do not care about the result? Matter of the fact, it does not make sense at all to run query and not care about the result. The matter of the fact, I can see quite a few reasons for using this option. I often enable this option when I am doing performance tuning exercise. During performance tuning exercise when I am working with execution plans and do not need results to verify every time or when I am tuning Indexes and its effect on execution plan I do not need the results. In this kind of situations I do keep this option on and discard the results. It always helps me big time as in most of the performance tuning exercise I am dealing with huge amount of the data and dealing with this data can be expensive. Nakul’s has done the experiment here already but I am going to repeat the same again using AdventureWorks Database. Run following T-SQL Script with and without enabling the option to discard the results. USE AdventureWorks2012 GO SELECT * FROM Sales.SalesOrderDetail GO 10 After enabling Discard Results After Query Execution After disabling Discard Results After Query Execution Well, this is indeed a good option when someone is debugging the execution plan or does not want the result to be displayed. Please note that this option does not reduce IO or CPU usage for SQL Server. It just discards the results after execution and a good help for debugging on the development server. 
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to Set Up Your Enterprise Social Organization

    - by Mike Stiles
    The rush for business organizations to establish, grow, and adopt social was driven out of necessity and inevitability. The result, however, was a sudden, booming social presence creating touch points with customers, partners and influencers, but without any corporate social organization or structure in place to effectively manage it. Even today, many business leaders remain uncertain as to how to corral this social media thing so that it makes sense for their enterprise. Imagine their panic when they hear one of the most beneficial approaches to corporate use of social involves giving up at least some hierarchical control and empowering employees to publicly engage customers. And beyond that, they should also be empowered, regardless of their corporate status, to engage and collaborate internally, spurring “off the grid” innovation. An HBR blog points out that traditionally, enterprise organizations function from the top down, and employees work end-to-end, structured around business processes. But the social enterprise opens up structures that up to now have not exactly been embraced by turf-protecting executives and managers. The blog asks, “What if leaders could create a future where customers, associates and suppliers are no longer seen as objects in the system but as valued sources of innovation, ideas and energy?” What if indeed? The social enterprise activates internal resources without the usual obsession with position. It is the dawn of mass collaboration. That does not, however, mean this mass collaboration has to lead to uncontrolled chaos. In an extended interview with Oracle, Altimeter Group analyst Jeremiah Owyang and Oracle SVP Reggie Bradford paint a complete picture of today’s social enterprise, including internal organizational structures Altimeter Group has seen emerge. One sign of a mature social enterprise is the establishing of a social Center of Excellence (CoE), which serves as a hub for high-level social strategy, training and education, research, measurement and accountability, and vendor selection. This CoE is led by a corporate Social Strategist, most likely from a Marketing or Corporate Communications background. Reporting to them are the Community Managers, the front lines of customer interaction and engagement; business unit liaisons that coordinate the enterprise; and social media campaign/product managers, social analysts, and developers. With content rising as the defining factor for social success, Altimeter also sees a Content Strategist position emerging. Across the enterprise, Altimeter has seen 5 organizational patterns. Watching the video will give you the pros and cons of each. Decentralized - Anyone can do anything at any time on any social channel. Centralized – One central groups controls all social communication for the company. Hub and Spoke – A centralized group, but business units can operate their own social under the hub’s guidance and execution. Most enterprises are using this model. Dandelion – Each business unit develops their own social strategy & staff, has its own ability to deploy, and its own ability to engage under the central policies of the CoE. Honeycomb – Every employee can do social, but as opposed to the decentralized model, it’s coordinated and monitored on one platform. The average enterprise has a whopping 178 social accounts, nearly ¼ of which are usually semi-idle and need to be scrapped. The last thing any C-suite needs is to cope with fragmented technologies, solutions and platforms. 
It’s neither scalable nor strategic. The prepared, effective social enterprise has a technology partner that can quickly and holistically integrate emerging platforms and technologies, such that whatever internal social command structure you’ve set up can continue efficiently executing strategy without skipping a beat. @mikestiles

    Read the article

  • Can't boot 12.04 installed alongside Windows 7

    - by PalaceChan
    I realize there are other questions like this one here, but I have visited them and tried several things and nothing is helping. One of them had a suggestion to boot the liveCD, and sudo mount /dev/sda* /mnt and to then chroot and reinstall grub. I did this and it did not help. Then on the Windows side, I downloaded a free version of easyBCD and chose to add a Grub2 Ubuntu 12.04 entry. On restart I saw this entry, but when I click on it it takes me to a Windows failed to boot error, as if it wasn't even trying to boot Ubuntu. I have booted from Ubuntu liveCD once again and have a snapshot of my GParted I ran this bootinfoscript thing from the liveCD, here are my results: It seems grub is on sda. I just want to be able to boot into my Ubuntu on startup. Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== = Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1041658947 of the same hard drive for core.img. core.img is at this location and looks for (,gpt7)/boot/grub on this drive. sda1: __________________________________________ File system: vfat Boot sector type: Windows 7: FAT32 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /efi/Boot/bootx64.efi sda2: __________________________________________ File system: Boot sector type: - Boot sector info: Mounting failed: mount: unknown filesystem type '' sda3: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda4: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda5: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /bootmgr /boot/bcd sda6: __________________________________________ File system: BIOS Boot partition Boot sector type: Grub2's core.img Boot sector info: sda7: __________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda7 and looks at sector 1046637581 of the same hard drive for core.img. core.img is at this location and looks for (,gpt7)/boot/grub on this drive. Operating System: Ubuntu 12.04 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sda8: __________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _______________________________________ Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 1 1,465,149,167 1,465,149,167 ee GPT GUID Partition Table detected. 
Partition Start Sector End Sector # of Sectors System /dev/sda1 2,048 411,647 409,600 EFI System partition /dev/sda2 411,648 673,791 262,144 Microsoft Reserved Partition (Windows) /dev/sda3 673,792 533,630,975 532,957,184 Data partition (Windows/Linux) /dev/sda4 533,630,976 1,041,658,946 508,027,971 Data partition (Windows/Linux) /dev/sda5 1,412,718,592 1,465,147,391 52,428,800 Windows Recovery Environment (Windows) /dev/sda6 1,041,658,947 1,041,660,900 1,954 BIOS Boot partition /dev/sda7 1,041,660,901 1,396,174,572 354,513,672 Data partition (Windows/Linux) /dev/sda8 1,396,174,573 1,412,718,591 16,544,019 Swap partition (Linux) blkid output: ____________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 B498-319E vfat SYSTEM /dev/sda3 820C0DA30C0D92F9 ntfs OS /dev/sda4 168410AB84108EFD ntfs DATA /dev/sda5 AC7A43BA7A438056 ntfs Recovery /dev/sda7 42a5b598-4d8b-471b-987c-5ce8a0ce89a1 ext4 /dev/sda8 5732f1c7-fa51-45c3-96a4-7af3bff13278 swap /dev/sr0 iso9660 Ubuntu 12.04 LTS i386 ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /rofs squashfs (ro,noatime) /dev/sr0 /cdrom iso9660 (ro,noatime) =========================== sda7/boot/grub/grub.cfg: =========================== How can I get this option? When I was using easyBCD, it kept saying I had no entries at all, so I did the add entry thing for Ubuntu many times and I see several of those on boot screen now. I'd love to get rid of all those unusable options.

    Read the article

  • The busy developer’s guide to the Kinect SDK Beta

    - by mbcrump
    The Kinect is awesome. From day one, I’ve said this thing has got potential. After playing with several open-source Kinect projects, I am pleased to announce that Microsoft has released the official SDK beta on 6/16/2011. I’ve created this quick start guide to get you up to speed in no time flat. Let’s begin.

    What is it?

    The Kinect for Windows SDK beta is a starter kit for application developers that includes APIs, sample code, and drivers. This SDK enables the academic research and enthusiast communities to create rich experiences by using Microsoft Xbox 360 Kinect sensor technology on computers running Windows 7. (defined by Microsoft)

    Links worth checking out:

    Download Kinect for Windows SDK beta – You can download either a 32 or 64 bit SDK depending on your OS.
    Readme for Kinect for Windows SDK Beta from Microsoft Research
    Programming Guide: Getting Started with the Kinect for Windows SDK Beta
    Code Walkthroughs of the samples that ship with the Kinect for Windows SDK beta (found in the \Samples folder)
    Coding4Fun Kinect Toolkit – Lots of extension methods and controls for WPF and WinForms.
    Kinect Mouse Cursor – Use your hands to control things like a mouse; created by Brian Peek.
    Kinect Paint – Basically MS Paint, but you use your hands!
    Kinect for Windows SDK Quickstarts
    Installing and Using the Kinect Sensor

    Getting it installed:

    After downloading the Kinect SDK Beta, double click the installer to get the ball rolling. Hit the next button a few times and it should complete installing. Once you have everything installed, simply plug your Kinect device into the USB port on your computer and hopefully you will get the following screen:

    Once installed, you are going to want to check out the following folders: C:\Program Files (x86)\Microsoft Research KinectSDK – this contains the actual Kinect sample executables along with the documentation as a CHM file. Also check out the C:\Users\Public\Documents\Microsoft Research KinectSDK Samples directory, which holds the Audio and NUI samples: the main thing to note here is that these folders contain the source code to the sample applications, so you can compile and build them yourself.

    DEMO Time

    Let’s get started with some demos. Navigate to the C:\Program Files (x86)\Microsoft Research KinectSDK folder and double click on ShapeGame.exe. Next up is SkeletalViewer.exe (image taken from http://www.i-programmer.info/news/91-hardware/2619-microsoft-launch-kinect-sdk-beta.html as I could not get a good image using SnagIt). At this point, you will have to download Kinect Mouse Cursor – this is really cool because you can use your hands to control the mouse cursor; I actually used this to resize itself. Last up is Kinect Paint – this is very cool, just make sure you read the instructions! MS Paint on steroids!

    A few tips for getting started building Kinect applications:

    It appears WPF is the way to go for building Kinect applications. You must also use a version of Visual Studio 2010. You’re going to need to reference Microsoft.Research.Kinect.dll when building a Kinect application: right click on References, go to Browse, navigate to C:\Program Files (x86)\Microsoft Research KinectSDK and select Microsoft.Research.Kinect.dll. You are also going to want to make sure your project has the Platform target set to x86. The Coding4Fun Kinect Toolkit really makes things easier with extension methods and controls; just note that it is for WinForms or WPF.

    Conclusion

    It looks like we have a lot of fun in store with the Kinect SDK. I’m very excited about the release and have already been thinking about all the applications that I can begin building. It seems that development will be easier now that we have an official SDK and the great work from Coding4Fun. Please subscribe to my blog or follow me on twitter for more information about Kinect, Silverlight and other great technology.
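    To give a flavour of the code side once the reference is added, here is a small sketch of starting the sensor with the beta API. The type and member names (Runtime, RuntimeOptions, SkeletonFrameReady) are from my reading of the beta bits, so verify them against the Programming Guide linked above, and remember the x86 platform-target tip.

        using Microsoft.Research.Kinect.Nui;   // beta SDK namespace

        public class KinectHost
        {
            private Runtime _kinect;

            public void Start()
            {
                _kinect = new Runtime();

                // Ask for skeletal tracking; add RuntimeOptions.UseColor or
                // UseDepthAndPlayerIndex if you also want the camera streams.
                _kinect.Initialize(RuntimeOptions.UseSkeletalTracking);

                _kinect.SkeletonFrameReady += (sender, e) =>
                {
                    // e.SkeletonFrame carries the tracked skeletons; feed the
                    // joint positions into your WPF UI from here.
                };
            }

            public void Stop()
            {
                _kinect.Uninitialize();
            }
        }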

    Read the article

  • C# item system design approach: should I use abstract classes, interfaces or virtuals?

    - by vexe
    I'm working on a Resident Evil 1/2/3/0/Remake type of game. Currently I've done a big part of the inventory system (here's a link if you wanna see my inventory, pretty outdated, added a lot of features and made a lot of enhancements) Now I'm thinking about how to approach the items system, If you've played any Resident Evil game or any of its likes you should be familiar with what I'm trying to achieve. Here's a very simple category I made for the items: So you have different items, with different operations you could perform on them, there are usable items that you could use, like for example herbs and first aid kits that 'using' them would affect your health, keys to unlock doors, and equipable items that you could 'equip' like weapons. Also, you can 'combine' two items together to get new one, like for example mixing a green and red herb would give you a new type of herb, or combining a lighter with a paper, would give you a burnt paper, or ammo with a gun, would reload the gun or something. etc. You know the usual RE drill. Not all items are 'transformable', in that, for example: lighter + paper = burnt paper (it's the paper that 'transforms' to burnt paper and not the lighter, the lighter is not transformable it will remain as it is) green herb + red herb = newHerb1 (both herbs will vanish and transform to this new type of herb) ammo + gun = reload gun (ammo state will remain as it is, it won't change but it will just decrease, nothing will happen to the gun it just gets reloaded) Also a key note to remember is that you can't just combine items randomly, each item has a 'mating' item(s). So to sum up, different items, and different operations on them. The question is, how to approach this, design-wise? I've been learning about interfaces, but it just doesn't quite get into my head, I mean, why not just use classes with the good old inheritance? I know the technical details of interfaces and that the cool thing about them is that they don't require an inheritance chain, but I just can't see how to use them properly, that is, if they were the right thing to use here. So should I go with just classes and inheritance? just like in the tree I showed you? or should I think more about how to use interfaces? (IUsable, IEquipable, ITransformable) - why not just use classes UsableItem, Equipable item, TransformableItem? I want something that won't give me headaches in the long run, something resilient/flexible to future changes. I'm OK using classes, but I smell something better here. The way I'm thinking is to possibly use both inheritance and interfaces, so that you have a branch like this: item - equipable - weapon. but then again, the weapon has methods like 'reload' 'examine' 'equip' some of them 'combine' so I'm thinking to make weapon implement ICombinable?... not all items get used the same way, using herbs will increase your health, using a key will open a door, so IUsable maybe? Should I use a big database (XML for example) for all the items, items names, mates, nRowsReq, nColsReq, etc? Thanks so much for your answers in advanced, note that demo 3 is coming after I'm done with items :D
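    Not an authoritative answer, but one common shape for this problem is to keep a single Item base class for shared data and opt items into behaviours through small interfaces, so a key can be usable without being equipable and a lighter can combine without transforming itself. All the names below (IUsable, ICombinable, Herb, Player) are illustrative.

        // Shared data lives in a plain base class.
        public abstract class Item
        {
            public string Name { get; protected set; }
            public int RowsRequired { get; protected set; }
            public int ColsRequired { get; protected set; }
        }

        public class Player
        {
            public void Heal(int amount) { /* restore health */ }
        }

        // Capabilities are opt-in interfaces.
        public interface IUsable     { void Use(Player player); }
        public interface IEquipable  { void Equip(Player player); }
        public interface ICombinable
        {
            bool CanCombineWith(Item other);   // enforce the "mating" item rules
            Item CombineWith(Item other);      // resulting item, or null if this item is unchanged
        }

        public class Herb : Item, IUsable, ICombinable
        {
            public int HealAmount { get; private set; }

            public void Use(Player player) { player.Heal(HealAmount); }

            public bool CanCombineWith(Item other) { return other is Herb; }

            public Item CombineWith(Item other)
            {
                // green + red herb -> new mixed herb; both originals get consumed.
                return new Herb { /* stronger heal values here */ };
            }
        }

    A Weapon would then be Item plus IEquipable plus ICombinable (combining with ammo reloads it and returns null rather than a new item), and the inventory code only checks for the interface it needs with an is/as test instead of walking an inheritance tree.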

    Read the article

  • Is SugarCRM really adequate for custom development (or adequate at all)? [closed]

    - by dukeofgaming
    Have you used SugarCRM for custom development successfully?, if so, have you done it programmatically or through the Module Builder? Were you successful? If not, why? I used SugarCRM for a project about two years ago, I ran into errors from the very installation, having to hack the actual installation file to deploy the software in the server and other erros that I can't recall now. Two years after, I'm picking it up for a project once again. I'm feeling like I should have developed the whole thing from scratch myself. Some examples: I couldn't install it in the server (again). I had to install it locally, then copy the files and database over to the server and manually edit the config file. Constantly getting deployment errors from the module builder. One reason is SugarCRM keeps creating a record in the upgrade_history table for a file that does not exist, I keep deleting such record and it keeps coming back corrupt. I get other deployment errors, but have not figured them out. then I have to rollback all files and database to try again. I deleted a custom module with relationships, the relationships stayed in the other modules and cannot be deleted anymore, PHP warnings all over the place. Quick create for custom modules does not appear, hack needed. Its whole cache directory is a joke, permanent data/files are stored there. The module builder interface disappears required fields. Edit the wrong thing, module builder won't deploy again, then pray Quick Repair and/or Rebuild Relationships do the trick. My impression of SugarCRM now is that, regardless of its pretty exterior and apparent functionality, it is a very low quality piece of software. This even scared me more: http://amplicate.com/hate/sugarcrm; a quote: I wis this info had been available when I tried to implement it 2 years ago... I searched high and low and the only info I found was positive. Yes, it's a piece of crap. The community edition was full of bugs... nothing worked. Essentially I got fired for implementing it. I'm glad though, because now I work for myself, am much happier and make more money... so, I should really thank SugarCRM for sucking so much I guess! I figured that perhaps some of you have had similar experiences, and have either sticked with SugarCRM or moved on to another solution. I'm very interested in knowing what your resolutions were -or your current situations are- to make up my own mind, since the project I'm working on is long term and I'm feeling SugarCRM will be more an obstacle than an aid. After further failed attempts to continue using this software I continued to stumble upon dead-ends when using the module editor, I could only recover from this errors by using version control. We are now moving on to a custom implementation using Symfony; perhaps if we were using it with its out-of-the-box modules we would have sticked with it.

    Read the article

  • Technology vs. Antiquated Methods

    - by AreYouSerious
    So here I am, talking with my program lead about technology, and about how, while my father is the VP of a major company, he still doesn't have a BlackBerry or a smartphone, and I think it's funny. Most people would say it's a generational thing: because he's older, he doesn't accept technology, and that's why. I have trouble swallowing that, because this is the same man who bought a satellite radio for his car, and made sure that the printer for the house was networked so that his and my mom's laptops could print wirelessly from the living room through their wireless network. I think it has more to do with necessity, and partially with financial responsibility. My father is very financially conscientious. Think about it yourself: you pay for internet at your home, you have internet access at your office, but if you get a smartphone you're going to pay almost the same amount again just for that access. A lot of people take it as just another fixed cost... I'm one of those. I don't even think about it, as I check my Facebook from the bus, the train, or even while sitting in traffic... The convenience of having a connection everywhere outweighs the financially responsible person screaming in the back of my mind. However, this conversation led us to another venue of discussion: what happens when the power dies? If you left your charger at home, or your phone or navi just stops working, are you going to be able to continue on as you did when it was working? Let's take the navi as an example: if your navi stops working, how many of you know how to use a map and navigate? Can you even find where you are on a map using the cross streets you're stopped at? This is a skill that unfortunately is overlooked these days in the child-rearing process. Most people don't see the value, while some others can't do it themselves, so how can they teach their offspring? Take another example: what if your phone gets lost, or stolen, or you drive over it? Do you have the numbers in it memorized? Are they recorded somewhere? I know that if it weren't for Google Sync I wouldn't have them backed up... not sufficiently. And what good does that do if you're in Timbuktu and your phone dies? Think you can get on the internet to look up those numbers? Don't get me wrong, I'm the first to see the value in technology, and am willing to pay the price rather than wait for prices to come down. I will pay extra to have that newest thing right now. But let me tell you what: I know that should I ever procreate, it will be a requirement for my offspring (children) to learn how to do something manually before I'll let them use technology. Food for thought? Let everyone else know what you think... just sayin'

    Read the article

  • Ongoing confusion about ivars and properties in objective C

    - by Earl Grey
    After almost 8 months in iOS programming, I am again confused about the right approach. Maybe it is not the language but some OOP principle I am confused about, I don't know. I was trying C# a few years back. There were fields (private variables, private data in an object), there were getters and setters (methods which exposed something to the world), and there were properties, which were THE exposed thing. I liked the elegance of the solution. For example, there could be a class with a property called DailyRevenue, a float, but there was no private variable called dailyRevenue; there was only a field, an array of single transaction revenues, and the getter for the DailyRevenue property calculated the revenue transparently. If the internals of the daily revenue calculation somehow changed, it would not affect anybody who consumed my DailyRevenue property, since they would be shielded from the getter implementation. I understood that sometimes there was, and sometimes there wasn't, a 1-1 relationship between fields and properties, depending on the requirements. That seemed OK in my opinion. And that properties are THE way to access the data in an object. I know the difference between the private, protected, and public keywords. Now let's get to Objective-C. On what factor should I base my decision about making something only an ivar or making it a property? Is the mental model the same as I describe above? I know that ivars are "protected" by default, not "private" as in C#. But that's OK, I think; no big deal for my present level of understanding of the whole iOS development. The point is, ivars are not accessible from outside (given I don't make them public... but I won't). The thing that clouds my understanding is that I can have IBOutlets from ivars. Why am I seeing internal object data in the UI? *Why is it OK?* On the other hand, if I make an IBOutlet from a property, and I do not make it readonly, anybody can change it. Is this OK too? Let's say I have a ParseManager object. This object would use a built-in Foundation framework class called NSXMLParser. Obviously my ParseManager will utilize this NSXMLParser's capabilities but will also do some additional work. Now my question is, who should initialize this NSXMLParser object, and in which way should I make a reference to it from the ParseManager object when there is a need to parse something? A) The ParseManager: 1) in its default init method (possible here: ivar, or ivar+property), or 2) with lazy loading in a getter (a property is required here). B) Some other object, which will pass a reference to the NSXMLParser object to the ParseManager object: 1) in some custom initializer (initWithParser:(NSXMLParser *)parser) when creating the ParseManager object. A1 - the problem is, we create a parser and waste memory while it is not yet needed. However, we can be sure that all methods that are part of the ParseManager object can use the ivar safely, since it exists. A2 - the problem is, the NSXMLParser is exposed to the outside world, although it could be read-only. Would we want a parser to be exposed in some scenario? B1 - this could maybe be useful when we would want to use more types of parsers... I don't know... I understand that architectural requirements and language are not the same thing, but clearly the two are related. How do I get out of this mess of mine? Please bear with me, I wasn't able to come up with a single ultimate question; a sketch of the A2-versus-B1 trade-off follows below.
And secondly, please don't scare me with super-advanced newspeak about crazy internals (what the compiler does) and edge cases.
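    To make the A2-versus-B1 choice concrete, here is a hypothetical sketch, written in C# for brevity since the trade-off is language-neutral. Only the ParseManager name comes from the question; everything else, including XmlReader standing in for NSXMLParser and the "data.xml" source, is invented for illustration.

        using System.Xml;

        public class ParseManager
        {
            private XmlReader parser;            // stand-in for the NSXMLParser ivar

            // Option B1: the caller hands in the parser (a custom initializer).
            // Useful if you ever need a different parser, or a fake one in tests.
            public ParseManager(XmlReader parser)
            {
                this.parser = parser;
            }

            // A parameterless constructor keeps option A2 available.
            public ParseManager()
            {
            }

            // Option A2: lazy-load in the getter. Nothing is allocated until the
            // first parse, and the parser is exposed read-only to the outside world.
            public XmlReader Parser
            {
                get
                {
                    if (parser == null)
                    {
                        parser = XmlReader.Create("data.xml");   // hypothetical source
                    }
                    return parser;
                }
            }
        }

    The same shape carries over to Objective-C: a readonly property with a lazy getter covers A2, while an initWithParser: initializer covers B1. Which one fits depends mostly on whether anything outside ParseManager ever needs to choose or replace the parser.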

    Read the article

  • Why you need to tag your build servers in TFS

    - by Martin Hinshelwood
    At SSW we use gated check-in for all of our projects. The benefits depend on the number of developers you have working on your project. Let's say you have 30 developers and each developer breaks the build once per month. That could mean you have a broken build every day! Gated check-ins help, but they have a downside that manifests as queued builds and moaning developers. The way to combat this is to have more build servers, but with that comes complexity. Inevitably you will need to install components that you would expect to be installed on target computers, but how do you keep track of which build servers have which bits? What about a geographically diverse team? If you have a centrally controlled infrastructure you might have build servers in multiple regions, and you don't want teams in Sydney copying files from Beijing and vice versa on a regular basis. So, what is the answer? It's tags. You can add a set of tags to your agents and then set which tags to look for in the build definition. Figure: Open up your Build Controller Manager Select "Build | Manage Build Controllers…" to get a list of all of your controllers and the build agents that are associated with them. Figure: The list of build agents and their controllers Each of these agents might be subtly different. For example, only one of these agents has FTP software installed. This software is required for only one of the many builds we have set up. My ethos for build servers is to keep them as clean as possible and not to install anything that is not absolutely necessary. For me that means anything that does not add a *.target file is suspect, and should really be under version control and called via the command line from there. So, some of the things you may install are: Silverlight 4 SDK, Visual Studio 2010, Visual Studio 2008, WiX, etc. You should not install things that will not end up on the target users' computers. For a website that means something different for a client than for a server, but I am sure you get the idea. One thing you can do to make things easier is to create a tag for each of the things that you install. That way developers can find the things they need. We may change to using a more generic tagging structure (like "Web Application" or "WinForms Application") if this gets too unwieldy, but for now the list of tags is limited. Figure: Tags associated with one of our build agents Once you have your build agents all tagged up, ALL your builds will start to fail. This is because the default setting for a build is to look for an agent that exactly matches the tags for the build, and we have not added any yet. The quick way to fix this is to change the "Tag Comparison Operator" from "ExactMatch" to "MatchAtLeast" to get your build working immediately. Figure: Tag Comparison Operator changed to MatchAtLeast to get builds to run. The next thing to do is look for specific tags. You just select from the list of available tags and the controller will make sure you get a build agent that has them. Figure: I want Silverlight, VS2010 and WIX, but do not care about location. And there you go, you can now have build agents for different purposes and regions within the same environment. You can also use name filtering, so if you have a good agent naming convention you can filter by that for regions. For example, your agents might be "SYDVMAPTFSBP01" and "SYDVMAPTFSBP02", so a name filter of "SYD*" would target all of the Sydney build agents.
Figure: Agent names can be used for filtering as well. This flexibility will allow you to build better software by reducing the likelihood of a build landing on an agent that is missing a dependency. Figure: Setting the name filter based on server location. Used in combination there is a lot of power here to coordinate tens of build servers for multiple projects across multiple regions, so your developers get the most out of your environment. Technorati Tags: ALM,TFBS,TFS 2010,TFS Admin

    Read the article

  • Thoughts on Thoughts on TDD

    Brian Harry wrote a post entitled Thoughts on TDD that I thought I was going to let lie, but I find that I need to write a response. I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion. Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and this is important whether you use TDD or not. And YAGNI was a big theme in my Seven Deadly Sins of Programming series. Now, on to what he doesn't like. He says that he finds it inefficient to have tests that he has to change every time he refactors. Here is where we part company. If you are having to do a lot of test rewriting (say, more than a couple of minutes' work to get back to green) *often* when you are refactoring your code, I submit that either you are testing things that you don't need to test (internal details rather than external implementation), your code perhaps isn't as decoupled as it could be, or maybe you need a visit to Refactorers Anonymous. I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky. *Unless* we have a set of tests that have great coverage. And TDD (or Example-based Design, which I prefer as a term) gives those to us. Now, I don't know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I've worked with - and I count myself in that bucket - the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD. For me, it all comes down to the answer to the following question: how do you ensure that your code works now and will continue to work in the future? I'm willing to put up with a little inefficiency on the front side to get that benefit later. It's not the writing of the code that's the expensive part, it's everything else that comes after. I don't think that stepping through test cases in the debugger gets you what you want. You can verify what the current behavior is, sure, and do it fairly cheaply, but you don't help the guy in the future who doesn't know which conditions were important if he has to change your code. The second thing he doesn't like is backing into an architecture (go read his post to see what he means). I've certainly had to work with code that was like this before, and it's a nightmare - the code that nobody wants to touch. But that's not at all the kind of code that you get with TDD, because if you're doing it right you're following the "write a failing test, make it pass, refactor" approach. Now, you may miss some useful refactorings and generalizations this way, but if you do, you can refactor later because you have the tests that make it safe to do so, and your code tends to be easy to refactor because the same things that make code easy to write unit tests for make it easy to refactor. I also think Brian is missing an important point. We aren't all as smart as he is. I'm reminded a bit of the lesson of Intentional Programming, Charles Simonyi's paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi is, but it was pretty much a disaster if you were an average developer.
In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don't have somebody of Brian's talents to help you out. And that's a good thing. Did you know that DotNetSlackers also publishes .NET articles written by well-known .NET authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.
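    To make the "write a failing test, make it pass, refactor" loop concrete, here is a minimal, hypothetical C# example; NUnit is assumed, and the Order class and its test are invented for illustration rather than taken from either post.

        using System.Collections.Generic;
        using System.Linq;
        using NUnit.Framework;

        [TestFixture]
        public class OrderTests
        {
            // Step 1 (red): write a failing test that pins down the behavior you want.
            [Test]
            public void Total_SumsAllLinePrices()
            {
                var order = new Order();
                order.AddLine(10m);
                order.AddLine(5m);

                Assert.AreEqual(15m, order.Total);
            }
        }

        public class Order
        {
            private readonly List<decimal> lines = new List<decimal>();

            // Step 2 (green): the simplest code that makes the test pass.
            public void AddLine(decimal price)
            {
                lines.Add(price);
            }

            public decimal Total
            {
                get { return lines.Sum(); }
            }

            // Step 3 (refactor): with the test green, reshape the internals freely;
            // the test keeps protecting the externally visible behavior.
        }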

    Read the article

  • 24 Hours of PASS: Whine, Whine, Whine

    - by Most Valuable Yak (Rob Volk)
    24 Hours of PASS (or 24HOP) is a great program offered by PASS to provide free, online training for anyone who wants to learn more about SQL Server. They routinely have the best SQL Server presenters available for these sessions, and attract hundreds, perhaps even a thousand attendees from around the world. This is definitely one of the best things they've started doing in the past few years, and every session I've attended has been excellent. So why am I so grumpy about it? I'm not really; pretty much everything here is a minor annoyance that I can deal with. However, since they're so minor, they seem to be things that could be easily corrected and would make the process much better. First off, this is my biggest gripe, the registration page: https://www323.livemeeting.com/lrs/8000181573/Registration.aspx?pageName=lj6378f4fhf5hpdm What grinds my gears about this? I have to scroll alllllllllllllllllllllllllllll the way to the bottom to actually register for the sessions. This wouldn't be so bad except that all the details of the session, including the presenter, are in a separate list at the top. Both lists contain info the other does not, and scrolling between them to determine "Should I make time to listen to this? Who is speaking at this time anyway?" is really unnecessary. My preference would be to keep the top list and add the checkboxes and schedule info in separate columns. This is a full-width design, so there's plenty of space for this data, which is pretty small anyway. The other huge benefit is halving the size of the page, which improves performance and lowers bandwidth usage considerably. And if you know HTML/ASP.NET, and you view the page source, you can find PLENTY of other things that can be reduced even further (not just viewstate). One nice thing that PASS does is send iCal reminders to your email address so you can accept them to your calendar. Again, they leave off the presenter in the appointment details, while still duplicating the meeting title in the body. Sometimes I make decisions based on speaker rather than content (Natalie Portman is reading the Yellow Pages??? I'M THERE!) and having the speaker in the iCal is helpful. The next minor annoyances are the necessity of providing a company name, and the survey questions. I know PASS needs to market themselves effectively, and they need information to do that, and since this is a free event it's really not worth complaining about, but why ask the survey question twice? (once at registration, once again when joining the LiveMeeting) Same thing for the company name. All of this should be tied to my email address, so that's all I should need to enter when joining the LiveMeeting. The last one is also minor, but it irks me in this day and age of multiple browsers and the decline of Internet Explorer as a dominant platform. The registration page was originally created in Visual Studio 2003, and has a lot of IE-specific crud representative of the browser situation of 2003. (IE5 references? really? and is the aforementioned viewstate big enough?) This causes some grief with other browsers like Firefox, Chrome, and sometimes IE8 or 9. And don't get me started on using the page on a Mac or in Safari. My main point is that PASS is an international organization, welcoming everyone at all levels of SQL Server proficiency, and in that spirit I think it would help to accommodate a wider range of browser software, especially since the registration page is extremely simple.
I recognize that this page is not hosted on the PASS website and may be maintained by some division of Microsoft, but to me that's even worse if MS can't update their own pages.  They've deprecated IE6, so they don't need to maintain support on their own websites anymore. OK, I'll shut up now. #sqlpass #24HOP

    Read the article

  • SQL SERVER – Performance Tuning Resolution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by MidnightDBAs. Taking resolutions is such an interesting subject. I think, just like records, these are broken way more often. I find it the funniest thing that we all take resolutions every year, but not every year can we manage to keep them. Well, does that mean we should not take resolutions? In fact, I support resolutions. Every year, I take a resolution that I will strive to reduce my body weight, and I usually manage to keep eating healthy till the end of January. When February begins, I begin to lose focus on my goal, and as March starts, the "as usual" eating habits begin. Looking at the positive side, what would happen if I did not eat healthy in January every year? I think that might cause terrible consequences to my health in the long run. So keeping resolutions is a good practice, and following them to the extent one can is commendable. Let us come back to the world of SQL Server. What are my resolutions for year 2011 for SQL Server? There are many; I am going to list three very important resolutions that I have taken this new year over here. To understand SQL Server Performance Tuning at a deeper level: I think I am already half way through. I have been very busy during any given month doing hands-on performance tuning for at least 12 days on average. That means I am doing this activity for almost 2 weeks a month. I believe that I have a good understanding of the subject. Note that the word I have used is "good," and not "best." There are often cases when I am stumped and I have no clue what to do next. Then, I usually go for my "trial and error" method - whichever method works, I make sure to keep a note on my blog. My goal is that I should never ever go for the trial and error method again to achieve the same solution; I should know the solution right away when I see the problem. I do understand that performance tuning can be a strange animal at times and one cannot guess the right step every time. However, aiming at a high goal never hurts, and I am going to learn more and more in this focused area. Going further from a basic BI understanding: I do fairly decently with BI concepts. I know the basics of SSIS, SSRS, SSAS, PowerPivot and SharePoint (and a few other things: MDS, StreamInsight, etc.). However, I still consider myself a beginner. I do not have hands-on experience like many other BI gurus around. I want to take my learning further in this direction. I do not want to be a BI expert as the first step, but the goal is to move ahead from the basic level towards an advanced level. I am going to start presenting at user group sessions and other places on this subject. When I have to prepare a new subject for presentations, I force myself to learn more. I am committed to learning a bit more in this direction. Learning the new features of SQL Server 2011 (Denali): This is the new thing from Microsoft for all the SQL geeks. I am eagerly waiting for the final product later this year and I am planning to learn it well. I think if I follow my above two goals, this goal will be automatically covered. I am eager and excited about this new offering from Microsoft. I guess these are my resolutions; maybe next year around the same time, I must revisit this post and see how successful I am in following my goals. On a lighter note, I am particularly a fan of the following cartoon strip (Courtesy: Calvin and Hobbes). I think when we cannot keep our resolutions, we tend to act like Calvin.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Visual Studio Little Wonders: Quick Launch / Quick Access

    - by James Michael Hare
    Once again, in this series of posts I look at features of Visual Studio that may seem trivial, but can help improve your efficiency as a developer. The index of all my past little wonders posts can be found here. Well, my friends, this post will be a bit short because I'm in the middle of a bit of a move at the moment. But, that said, I didn't want to let the blog go completely silent this week, so I decided to add another Little Wonder to the list for the Visual Studio IDE. How often have you wanted to change an option or execute a command in Visual Studio, but can't remember where the darn thing is in the menu, settings, etc.? If so, Quick Launch in VS2012 (or Quick Access in VS2010 with the Productivity Power Tools extension) is just for you! Quick Launch / Quick Access – find a command or option quickly For those of you using Visual Studio 2012, Quick Launch is built right into the IDE at the top of the title bar, near the minimize, maximize, and close buttons: But do not despair if you are using Visual Studio 2010: you can get Quick Access from the Productivity Power Tools extension. To do this, you can go to the extension manager: And then go to the gallery, search for Productivity Power Tools, and install it. If you don't have VS2012 yet, then the Productivity Power Tools is the next best thing. This extension updates VS2010 with features such as Quick Access, the Solution Navigator, a searchable Add Reference dialog, better tab wells, etc. I highly recommend it! But back to the topic at hand! In VS2012 Quick Launch is built into the IDE and can be accessed by clicking in the Quick Launch area of the title bar, or by pressing CTRL+Q. If you have VS2010 with the PPT installed, though, it is called Quick Access and is accessible through View –> Quick Access: Regardless of which IDE you are using, the feature behaves mostly the same. It allows you to search all of Visual Studio's commands and options for a particular topic. For example, let's say you want to change from keeping tabs to expanding tabs to spaces, but don't remember where that option is buried. You can bring up Quick Launch / Quick Access and type in "tabs": It brings up a list of all options on tabs; you can then choose the one appropriate to you, click on it, and it will take you right there! A lot easier than diving through the options tree to find what you are looking for! It also works on menu commands; for example, if you can't remember how to open the Output window: It shows you the menu items that will get you to the Output window, and (if applicable) the keyboard shortcuts. Again, clicking on one of these will perform the action for you as well. There are also some tasks you can perform directly from Quick Launch / Quick Access. For example, perhaps you are one of those people who like to have line numbers in your editor (I do), so let's bring up Quick Launch / Quick Access and type "line numbers": And let's select Turn Line Numbers On, and now our editor looks like: And voila! We have line numbers in VS2010. You can do this in VS2012 too, but it takes you to the option settings instead of directly turning them on and off. There are bound to be differences between the way the two editors organize settings and commands, but you get the point. So, as you can see, the Quick Launch / Quick Access feature in Visual Studio makes it easy to jump right to the options, commands, or tasks you are interested in without all the digging.
Summary An IDE as powerful as Visual Studio has so many options and commands that it can be confusing to remember how to find and invoke them.  Quick Launch (Quick Access in VS2010 with Productivity Power Tools extension) is a quick and handy way to jump to any of these options, commands, or tasks quickly without having to remember in what menu or screen they are buried!  Technorati Tags: C#,CSharp,.NET,Little Wonders,Visual Studio,Quick Access,Quick Launch

    Read the article

  • NetBeans PHP Community Council

    - by Tomas Mysik
    Hi all, today we would like to inform all of you that now you have a chance to improve NetBeans via the NetBeans PHP Community Council. The author of this activity is Timur Poperecinii, and he would like to tell you a few words about it. Hello passionate technical people, first of all let me introduce myself: my name is Timur, I'm a developer from Moldova (that little country between Romania and Ukraine). I develop mostly in .NET and jQuery, but I love to learn more; while not an expert, I am familiar with Java (Struts2, Play), PHP (Symfony2), Ruby (Rails), Sencha Touch 2 and other technologies. I was "introduced" to PHP recently by a client of mine who requested that the work be done specifically in PHP. Let me tell you a little story about my experience with open source and IDEs: when I was studying at university, in 2007 I think, I wrote a simple little application in PHP and thought, "Damn, if only there was a good IDE for PHP so I could relax and not have to remember all the function names." When I searched the internet, pretty much everyone was using Vim or Emacs on Linux, but they had no autocomplete anyway, just syntax highlighting. I remember using some tool like Notepad++, I think. Nowadays everything has changed: we have highlighting and autocomplete for about all standard things in PHP in many IDEs. I use NetBeans for PHP, and I am really happy with the experience I have there with standard PHP code, but for frameworks I still think there is lots of room for improvement. For example, we have some Symfony 2 and Twig support, but I'd love to see more of that coming. For example, I'm a big fan of file templates, where the main goal is to not waste time writing over and over again something that can be generated, and it counts even more when you don't have a lot of autocomplete. So what I thought was, "Hey, I know Java a little, and NetBeans has plugins, so maybe it's worth trying to do a file templates plugin," and so I did. You can find details about my Unified Udevi Symfony2 Plugin for NetBeans 7.2 on my blog. It wasn't hard, and it even was fun! Give back to open source. Now think a little: NetBeans is an open source project and PHP support is just a part of it, so the resources are pretty limited in this area. But we, as a community that uses this product, want to have the best possible experience with PHP and frameworks(!!!). So why don't we GIVE BACK TO OPEN SOURCE? Imagine an IDE that can do all the things you want + it is free. Now how far is NetBeans from that point? I guess not so far – you might miss a little niche thing that you use on a daily basis, but then the question appears: why don't you make it happen on your own? NetBeans PHP Community Council: what I proposed is to create a NetBeans PHP Community Council that will be formed of people willing to change something, willing to create plugins for their own needs and for the needs of the community, test the plugins created by them, and basically evolve NetBeans in the direction they want it to go. I already talked with the NetBeans PHP team. They are only too happy to help this Council, with technical advice, opening some APIs we might need access to, and other things. One important thing to mention is that this Council is a community project, so though we'll have direct discussions with the NetBeans PHP dev team, NetBeans is not the leading force here, it is the community. You can see more details about the goals and structure I proposed on the NetBeans PHP Community Council wiki page.
We use this mail list: [email protected] for discussions and topics related to the Council. How can I join? To join the NetBeans PHP Community Council, please send an email to [email protected] with the subject of the mail starting with [Council New Member]. You can subscribe to this mail list here: http://netbeans.org/projects/php/lists. In your mail please indicate your location, age and experience, both in Java and PHP. I need this data to assign you to a team. A response will be sent to you with your next assignment and some people to contact. I really hope that you'll take a step forward and try to make your everyday use of NetBeans even more fun.

    Read the article

  • Windows Azure Emulators On Your Desktop

    - by BuckWoody
    Many people feel they have to set up a full Azure subscription online to try out and develop on Windows Azure. But you don't have to do that right away. In fact, you can download the Windows Azure Compute Emulator – a "cloud development environment" – right on your desktop. No, it's not for production use, and no, you won't have other people using your system as a cloud provider, and yes, there are some differences from production Windows Azure, but you'll be able to code, run, test, diagnose, watch, change and configure code without having any connection to the Internet at all. The best thing about this approach is that when you are ready to deploy the code you've been testing, a few clicks deploy it to your subscription once you create one.   So what deep magic does it take to run such a thing right on your laptop or even a Virtual PC? Well, it's actually not all that difficult. You simply download and install the Windows Azure SDK (you can even get a free version of Visual Studio for it to run on – you're welcome) from here: http://msdn.microsoft.com/en-us/windowsazure/cc974146.aspx   This SDK will also install the Windows Azure Compute Emulator and the Windows Azure Storage Emulator – and then you're all set. Right-click the icon for Visual Studio and select "Run as Administrator":    Now open a new "Cloud" type of project:   Add the Web and Worker Roles that you want to code:   And when you're done with your design, press F5 to start the desktop version of Azure:   Want to learn more about what's happening underneath? Right-click the tray icon with the Azure logo, and select the two emulators to see what they are doing:          In the configuration files, you'll see a "Use Development Storage" setting. You can call the BLOB, Table or Queue storage and it will all run on your desktop. When you're ready to deploy everything to Windows Azure, you simply change the configuration settings and add the storage keys and so on that you need.   Want to learn more about all this?   Overview of the Windows Azure Compute Emulator: http://msdn.microsoft.com/en-us/library/gg432968.aspx Overview of the Windows Azure Storage Emulator: http://msdn.microsoft.com/en-us/library/gg432983.aspx January 2011 Training Kit: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=413E88F8-5966-4A83-B309-53B7B77EDF78&displaylang=en
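    As a small illustration of the "Use Development Storage" idea, here is a hedged sketch using the SDK 1.x StorageClient library: the same code runs against the local Storage Emulator and against a real account, and only the connection string changes. The container and blob names are made up for the example.

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public static class EmulatorStorageDemo
        {
            public static void Run()
            {
                // Locally, this shorthand points at the Storage Emulator on your desktop.
                // For production you would swap in a real connection string such as
                // "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...".
                CloudStorageAccount account =
                    CloudStorageAccount.Parse("UseDevelopmentStorage=true");

                // Create a container and upload a text blob - all served from localhost.
                CloudBlobClient blobClient = account.CreateCloudBlobClient();
                CloudBlobContainer container = blobClient.GetContainerReference("testcontainer");
                container.CreateIfNotExist();

                CloudBlob blob = container.GetBlobReference("hello.txt");
                blob.UploadText("Hello from the desktop emulators!");
            }
        }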

    Read the article
