
  • Getting the Internal Names of SharePoint List Fields

    - by Gino Abraham
    Over the last two weeks I have been developing a tool to migrate a Lotus Notes database to SharePoint. For our tool, the mapping between the Lotus Notes schema and the SharePoint list schema was done manually in an XML file, and to map the columns we needed the internal name of each field. There are quite a few ways to get these; I have explained a few below.

    If you want the internal names of only one or two columns, you can do so by navigating to the list settings and clicking on the column name. Once you are in the column's details, check the query string of the page: the last item in the query string is the field's internal name. Replacing every "%5f" with '_' will give you the field's internal name.

    In my case there were more than 80 columns, so I used PowerShell to get the list of columns with their details. Open Windows PowerShell and paste the following script after modifying the URL and list name:

        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
        $site = new-object Microsoft.SharePoint.SPSite("http://yoursitecollectionurl")
        $web = $site.OpenWeb()
        $list = $web.Lists["your list name"]
        $list.Fields | Format-Table Title, InternalName, TypeAsString

    I also found a tool on CodePlex which can generate a wrapper class for a list. The wrapper class will give you the GUID and internal name for all fields in the list. You can download the tool from http://imtech.codeplex.com/. Just enter the site URL in the text box and hit Open. All the site content will be listed on the left hand side; expand the list, right click and select Generate Wrapper Class.
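    If you would rather get the same listing from code, here is a minimal C# console sketch using the server object model; it mirrors the PowerShell above. The URL and list name are placeholders, and it assumes it runs on a machine with SharePoint installed (it needs a reference to Microsoft.SharePoint.dll):

        using System;
        using Microsoft.SharePoint;

        class FieldLister
        {
            static void Main()
            {
                // Placeholder URL and list name - adjust for your farm.
                using (SPSite site = new SPSite("http://yoursitecollectionurl"))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList list = web.Lists["your list name"];

                    // Print the display name, internal name and type of every field.
                    foreach (SPField field in list.Fields)
                    {
                        Console.WriteLine("{0}\t{1}\t{2}",
                            field.Title, field.InternalName, field.TypeAsString);
                    }
                }
            }
        }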

  • Sublime Text 2 Review

    - by Anirudha
    A few months ago I was looking for an editor for simple editing of HTML, CSS and JavaScript. Before that I had tried Notepad++, which is quite good for getting all of that done. I shortlisted two editors: first Sublime Text 2, and second PhpStorm. Both are cross-platform, I tried both, and both work fine, but I finally went with Sublime Text 2.

    Here is why Sublime Text 2 is awesome. PhpStorm and Sublime Text 2 are both licensed software, but you can evaluate Sublime Text 2 for an unlimited time, while PhpStorm is available for 30 days only. Sublime Text 2 is a very memory-efficient and lightweight piece of software; that is the first thing people find best about it. The problem with PhpStorm, for me, is that it sometimes went unresponsive when I tried HTML5 Boilerplate, whereas Sublime Text 2 never hangs, whatever the memory size of the project. In Sublime Text 2 you can also code faster once you learn some shortcuts and the basics. Sublime Text 2 comes with a distraction-free mode, while PhpStorm has nothing beyond full screen. Sublime Text 2 supports almost every language, and I have seen many people in the community move from their PHP IDE to Sublime Text 2. You can use LESS and CoffeeScript in it, and there are many kinds of customization out on GitHub for Sublime Text 2.

    In the past I also tried WebMatrix; the latest version of WebMatrix is nowhere near as good as Sublime Text 2. Sublime Text 2 is the best fit for my requirements.

    So cheers. People should try Sublime Text 2 once if they are looking for a solid tool for learning new things. It can be downloaded from http://www.sublimetext.com/.

    Thanks for reading my post.

  • Total Solar Eclipse 13/November/2012

    - by TATWORTH
    On 13/November/2012 there will be a total solar eclipse. The only land from which this will be visible is the northern part of Australia.

    Details of the eclipse: http://eclipse.gsfc.nasa.gov/SEmono/TSE2012/TSE2012.html
    Cairns webcam: http://www.eclipsecairns.com/
    Wikipedia entry: http://en.wikipedia.org/wiki/Solar_eclipse_of_November_13,_2012
    Panasonic: https://www.facebook.com/PanasonicEclipseLive?ref=stream
    Main location channel, from the Sheraton Mirage Port Douglas Resort: http://www.ustream.tv/channel/panasonic-eclipse-live-by-solar-power-1
    Second location channel, from Fitzroy Island: http://www.ustream.tv/channel/panasonic-eclipse-live-by-solar-power-2

  • Building the Bootsector of BIOSLOADER

    - by Kate Moss' Open Space
    Windows CE has been a 32-bit OS since day one, so it makes sense that the tools shipped with PB (compiler, linker, assembler and so on) target 32-bit systems. But occasionally, if you are developing an x86 based system, and especially if you are working on boot code such as the boot sector of BIOSLOADER, that is a problem. Normally PB provides the prebuilt boot sector image, but if you ever need to rebuild it, what should you do? You may say that since it's x86, perhaps you can use VS or the Windows SDK to build it. Unfortunately, today's desktop Windows tool chains are also 32 or even 64 bits only, so you need to find something older - VC++ 6.0, say, but how can you find a copy? This website, http://thestarman.pcministry.com/asm/masm.htm, arranges some useful resources. Basically, you need two things: a 16-bit MASM and a 16-bit linker. To make it even easier for you:

    Download http://download.microsoft.com/download/vb60ent/Update/6/W9X2KXP/EN-US/vcpp5.exe for the assembler (MASM).
    Download http://download.microsoft.com/download/vc15/Update/1/WIN98/EN-US/Lnk563.exe for the linker.

    Then just extract the archives; all you need is ml.exe, ml.err and link.exe.

  • The theory of evolution applied to software

    - by Michel Grootjans
    I recently realized the many parallels you can draw between the theory of evolution and evolving software. Evolution is not the proverbial million monkeys typing on a million typewriters, where one of them comes up with the complete works of Shakespeare. We would have noticed by now, since the proverbial monkeys are now blogging on the Internet ;-) One of the main ideas of the theory of evolution is the balance between random mutations and natural selection. Random mutations happen all the time: millions of mutations over millions of years. Most of them are totally useless. Some of them are beneficial to the evolved species. Natural selection favors the beneficially mutated species. Less beneficial mutations die off. The mutated rabbit doesn't have to be faster than the fox. It just has to be faster than the other rabbits.

    Theory of evolution: Random mutations happen all the time. Most of these mutations are so bad, the new species dies off, or cannot reproduce.
    Evolving software: Developers write new code all the time. New ideas come up during the act of writing software. The really bad ones don't get past the stage of idea. The bad ones don't get committed to source control.

    Theory of evolution: Natural selection favors the beneficially mutated species.
    Evolving software: Good ideas and new code get discussed in group during informal peer review. Less-than-good code gets refactored; the enhanced code is more readable, maintainable...

    Theory of evolution: A good set of traits makes the species superior to others. It becomes widespread.
    Evolving software: A good design tends to make it easier to add new features, easier to understand the current implementation, easier to optimize for performance... thus superior. The best designs get carried over from project to project. They appear in blogs, articles and books about principles, patterns and practices.

    Of course the act of writing software is deliberate. This can hardly be called random mutation. Though it sometimes might seem that code evolves through a will of its own ;-)

    Does this mean that evolving software (evolution) is better than a big design up front (creationism)? Not necessarily. It's a false idea to think that a project starts from scratch and everything evolves from there. Everyone carries their experience of what works and what doesn't. Up-front design is necessary, but is best kept simple and minimal, just enough to get you started. Let the good experiences and ideas help drive the process, whether they come from you or from others, from past experience or from the most junior developer on your team. Once again, balance is the keyword. Balance design up front with evolution on a daily basis. How do you know what balance is right? Through your own experience of what worked and what didn't (here's evolution again).

    Notes: The evolution of software can quickly degenerate without discipline. TDD is a discipline that leaves little to chance on that part. Write your test to describe the new behavior. Write just enough code to make it behave as specified. Refactor to evolve the code to a higher standard. The responsibility of good design rests continuously on each developer's shoulders. Promiscuous pair programming helps quickly spread the design to the whole team.
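    As a small illustration of that test-driven rhythm, here is a minimal NUnit-style sketch; the Rabbit class and its members are invented for this example, not from the post itself:

        using NUnit.Framework;

        // Invented domain class, kept deliberately tiny: just enough code to make the test pass.
        public class Rabbit
        {
            private readonly int _speed;

            public Rabbit(int speed) { _speed = speed; }

            public bool IsFasterThan(Rabbit other) { return _speed > other._speed; }
        }

        [TestFixture]
        public class RabbitTests
        {
            // Step 1: the test describes the new behavior
            // (the rabbit only has to beat the other rabbits, not the fox).
            [Test]
            public void MutatedRabbit_IsFasterThan_TheOtherRabbits()
            {
                var mutated = new Rabbit(speed: 11);
                var ordinary = new Rabbit(speed: 10);

                Assert.IsTrue(mutated.IsFasterThan(ordinary));
            }
            // Step 2 was writing Rabbit above; step 3 is refactoring while this stays green.
        }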

  • Windows Server 2008 R2 – MOSS 2007 – Internet Information Services is not installed

    - by Manesh Karunakaran
    If you get the following error while running the SharePoint Products and Technologies Configuration Wizard:

    "Internet Information Services is not installed. You must have Internet Information Services installed in order to use the SharePoint Products and Technologies Configuration Wizard"

    To resolve this, open Server Manager, go to Roles and right click on Web Server. In the window that comes up, enable the option that says IIS 6 Metabase Compatibility.

  • Web Sites All Start When Debugging a Web Site - Visual Studio 2010

    - by Daniel Lackey
    I wanted to blog about this because it was an annoyance to me and I couldn't figure out why for quite some time. Have you ever tried debugging one web application in your solution, only to have all the other web sites in your solution build and then start up their respective Visual Studio Development Servers? It's not a major problem, but it adds to the time you spend waiting for the application you are actually trying to debug to start. After digging through the Visual Studio 2010 settings, I finally found the option to turn it off. It is called Always Start When Debugging and is located in the Properties pane for the web project (click on the project file in the Visual Studio IDE). It is set to True by default each time you create a new Web Application project; setting it to False will solve your problem. You will need to set it to False for all web applications in your solution.

    In addition, you can set properties for which port the development server uses each time it debugs. This is helpful if you want the port to stay the same for testing purposes. In contrast, you can set it to use a dynamic port each time, so that if you have a co-worker debugging in a different session on the same server, you won't run into any problems with using the same port; the machine won't allow you to debug two sessions on the same port. Pretty basic stuff, but it seemed like a really quirky setting to me.

  • Set Context User Principal for Customized Authentication in SignalR

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/27/set-context-user-principal-for-customized-authentication-in-signalr.aspx

    Currently I'm working on a single page application project built on AngularJS and ASP.NET WebAPI. When I needed to implement some features that require real-time communication and push notifications from the server side, I decided to use SignalR. SignalR is a project currently developed by Microsoft for building web-based, real-time communication applications; you can find it here. With a lot of introductions and guides available, it's not a difficult task to use SignalR with ASP.NET WebAPI and AngularJS. I followed this and this, even though they are based on SignalR 1. But when I tried to implement authentication for my SignalR hubs I struggled for two days before finally working out a solution by myself. It might not be the best one, but it actually solved all my problems.

    Many articles say that you don't need to worry about authentication in SignalR, since it uses the web application's authentication. For example, if your web application utilizes forms authentication, SignalR will use the user principal your web application's authentication module resolved, and check that the principal exists and is authenticated. But in my solution the ASP.NET WebAPI, which hosts SignalR as well, utilizes OAuth Bearer authentication, so when the SignalR connection was established the context user principal was empty. I needed to authenticate and pass the principal along myself.

    First, I created a class derived from "AuthorizeAttribute", which takes responsibility for authentication when a SignalR connection is established and when any hub method is invoked.

        public class QueryStringBearerAuthorizeAttribute : AuthorizeAttribute
        {
            public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
            {
            }

            public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
            {
            }
        }

    The method "AuthorizeHubConnection" is invoked whenever a SignalR connection is established. Here I retrieve the Bearer token from the query string, then try to decrypt it and recover the login user's claims.

        public override bool AuthorizeHubConnection(HubDescriptor hubDescriptor, IRequest request)
        {
            var dataProtectionProvider = new DpapiDataProtectionProvider();
            var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());
            // authenticate by using bearer token in query string
            var token = request.QueryString.Get(WebApiConfig.AuthenticationType);
            var ticket = secureDataFormat.Unprotect(token);
            if (ticket != null && ticket.Identity != null && ticket.Identity.IsAuthenticated)
            {
                // set the authenticated user principal into environment so that it can be used in the future
                request.Environment["server.User"] = new ClaimsPrincipal(ticket.Identity);
                return true;
            }
            else
            {
                return false;
            }
        }

    In the code above I created a "TicketDataFormat" instance, which must be the same as the one I used to generate the Bearer token when the user logged in. Then I retrieve the token from the request query string and unprotect it. If I get a valid ticket whose identity is authenticated, it's a valid token, and I pass the user principal into the request's environment property, where it can be used later.

    Since my website is built in AngularJS, the SignalR client is pure JavaScript, and the SignalR JavaScript client does not support setting customized HTTP headers, so I have to pass the Bearer token through the request query string. This is not a restriction of SignalR, but of WebSocket: for security reasons, WebSocket doesn't allow the client to set customized HTTP headers from the browser.

    Next, I implemented the authentication logic in the method "AuthorizeHubMethodInvocation", which is invoked whenever a SignalR method is invoked.

        public override bool AuthorizeHubMethodInvocation(IHubIncomingInvokerContext hubIncomingInvokerContext, bool appliesToMethod)
        {
            var connectionId = hubIncomingInvokerContext.Hub.Context.ConnectionId;
            // check the authenticated user principal from environment
            var environment = hubIncomingInvokerContext.Hub.Context.Request.Environment;
            var principal = environment["server.User"] as ClaimsPrincipal;
            if (principal != null && principal.Identity != null && principal.Identity.IsAuthenticated)
            {
                // create a new HubCallerContext instance with the principal generated from token
                // and replace the current context so that in hubs we can retrieve current user identity
                hubIncomingInvokerContext.Hub.Context = new HubCallerContext(new ServerRequest(environment), connectionId);
                return true;
            }
            else
            {
                return false;
            }
        }

    Since I had already passed the user principal into the request environment in the previous method, I can simply check whether it exists and is valid. If so, all I need to do is pass the principal into the context so that the SignalR hub can use it. Since the "User" property of "hubIncomingInvokerContext" is read-only, I have to create a new "ServerRequest" instance with the principal assigned, and set it to "hubIncomingInvokerContext.Hub.Context". After that, we can retrieve the principal in our hubs through "Context.User", as below.

        public class DefaultHub : Hub
        {
            public object Initialize(string host, string service, JObject payload)
            {
                var connectionId = Context.ConnectionId;
                ... ...
                var domain = string.Empty;
                var identity = Context.User.Identity as ClaimsIdentity;
                if (identity != null)
                {
                    var claim = identity.FindFirst("Domain");
                    if (claim != null)
                    {
                        domain = claim.Value;
                    }
                }
                ... ...
            }
        }

    Finally, I just need to add my "QueryStringBearerAuthorizeAttribute" into the SignalR pipeline.

        app.Map("/signalr", map =>
        {
            // Setup the CORS middleware to run before SignalR.
            // By default this will allow all origins. You can
            // configure the set of origins and/or http verbs by
            // providing a cors options with a different policy.
            map.UseCors(CorsOptions.AllowAll);
            var hubConfiguration = new HubConfiguration
            {
                // You can enable JSONP by uncommenting the line below.
                // JSONP requests are insecure but some older browsers (and some
                // versions of IE) require JSONP to work cross domain
                // EnableJSONP = true
                EnableJavaScriptProxies = false
            };
            // Require authentication for all hubs
            var authorizer = new QueryStringBearerAuthorizeAttribute();
            var module = new AuthorizeModule(authorizer, authorizer);
            GlobalHost.HubPipeline.AddModule(module);
            // Run the SignalR pipeline. We're not using MapSignalR
            // since this branch already runs under the "/signalr" path.
            map.RunSignalR(hubConfiguration);
        });

    On the client side, I pass the Bearer token through the query string before starting the connection, as below.

        self.connection = $.hubConnection(signalrEndpoint);
        self.proxy = self.connection.createHubProxy(hubName);
        self.proxy.on(notifyEventName, function (event, payload) {
            options.handler(event, payload);
        });
        // add the authentication token to query string
        // we cannot use http headers since web socket protocol doesn't support it
        self.connection.qs = { Bearer: AuthService.getToken() };
        // connect to hub
        self.connection.start();

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
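    For completeness, here is a minimal sketch of how the matching token might be minted at login so that the Unprotect call above succeeds; the claim value and the surrounding login plumbing are assumptions rather than part of the post, but the TicketDataFormat usage mirrors the authorizer:

        using System.Security.Claims;
        using Microsoft.Owin.Security;
        using Microsoft.Owin.Security.DataHandler;
        using Microsoft.Owin.Security.DataProtection;

        // Build the same ticket format used in AuthorizeHubConnection above.
        var dataProtectionProvider = new DpapiDataProtectionProvider();
        var secureDataFormat = new TicketDataFormat(dataProtectionProvider.Create());

        // Assumed login logic: the "Domain" claim matches what DefaultHub reads.
        var identity = new ClaimsIdentity(
            new[] { new Claim("Domain", "yourdomain") },
            WebApiConfig.AuthenticationType);

        // Protect the ticket; the client sends this string as the Bearer query string value.
        var token = secureDataFormat.Protect(
            new AuthenticationTicket(identity, new AuthenticationProperties()));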

  • Getting codebaseHQ SVN ChangeLog data in your application

    - by saifkhan
    I deploy apps via ClickOnce. After each deployment we have to review the changes made and send out an email to the users with the changes. What I decided to do now is use CodebaseHQ's API to access a project's SVN repository and display the commit notes, so users who download new updates can check what was changed or updated in an app. This saves a heck of a lot of time, especially when your apps are in beta and you are making several changes daily based on feedback. You can read up on their API here.

    Here is a sample of how to access the Repositories API from a Windows app:

        Public Sub GetLog()
            If String.IsNullOrEmpty(_url) Then Exit Sub

            Dim answer As String = String.Empty
            Dim myReq As HttpWebRequest = WebRequest.Create(_url)
            With myReq
                .Headers.Add("Authorization", String.Format("Basic {0}", Convert.ToBase64String(Encoding.ASCII.GetBytes("username:password"))))
                .ContentType = "application/xml"
                .Accept = "application/xml"
                .Method = "POST"
            End With
            Try
                Using response As HttpWebResponse = myReq.GetResponse()
                    Using sr As New System.IO.StreamReader(response.GetResponseStream())
                        answer = sr.ReadToEnd()
                        Dim doc As XDocument = XDocument.Parse(answer)
                        Dim commits = From commit In doc.Descendants("commit") _
                                      Select Message = commit.Element("message").Value, _
                                             AuthorName = commit.Element("author-name").Value, _
                                             AuthoredDate = DateTime.Parse(commit.Element("authored-at").Value).Date
                        grdLogData.BeginUpdate()
                        grdLogData.DataSource = commits.ToList()
                        grdLogData.EndUpdate()
                    End Using
                End Using
            Catch ex As Exception
                MsgBox(ex.Message)
            End Try
        End Sub

  • KERPOOOOW!

    - by Matt Christian
    Recently I discovered the colorful world of comic books. In the past I've read comics a few times but never really got into them. When I wanted to start a collection I was deciding between video games and comics, and I had stayed away from comics because I am less familiar with them. In any case, I stopped by my local comic shop and picked up a few comics and a few trade paperbacks. After reading them and understanding their basic flow, I began to enjoy not only the stories but the art styles hiding behind those little white bubbles of text (well, they're USUALLY white). On my first stop at the comic store I ended up with:

    - Nemesis #1 (cover A)
    - Shuddertown #1 (cover A I think)
    - Daredevil: King of Hell's Kitchen Trade Paperback
    - Peter Parker: Spiderman - One Small Break Trade Paperback

    It took me about 3-4 days to read all of that, including re-reading the single issues and glancing over the beginning of Daredevil again. After a week of looking around online I knew a little more about the comics I wanted to pick up and the kind of art style I enjoyed. While Peter Parker: Spiderman was ok, I really enjoyed the detailed, realistic look of Daredevil and Shuddertown. Now, a few years back I picked up the game The Darkness for PS3. I knew it was based off a comic but never read the comic. I decided I'd pick up a few issues of it and ended up with:

    - The Darkness #80 (cover A)
    - The Darkness #81 (cover A)
    - The Darkness #82 (cover A)
    - The Darkness #83 (cover A)
    - The Darkness Shadows and Flame #1 (one-shot; cover A)
    - The Darkness Origins: Volume 1 Trade Paperback (contains The Darkness #1-6)
    - New Age boards and bags for storing my comics

    The Darkness is relatively good, though jumping from issue #6 to issue #80 I lost a bit on who the enemy in the current series is. I think out of all of them, issue #83 was my favorite. I'm signed up at the local shop to continue getting Nemesis, The Darkness, and Shuddertown, and I'll probably pick up a few different ones this weekend...

  • Applications: Colliding Marbles in C Sharp

    - by TechTwaddle
    If you follow this blog, you know how much I love marbles. I was staying up for Microsoft's "It's Time To Share" event and thought I'd write up a C# version of Colliding Marbles. It's a pretty straightforward port from the native version, the only major difference being in the drawing primitives. The solution was created using Visual Studio 2008 and the source code is shared below. Source code: CollidingMarbles.zip [shared on SkyDrive]. (Video in the original post.)

  • Agile Like Jazz

    - by Jeff Certain
    (I’ve been sitting on this for a week or so now, thinking that it needed to be tightened up a bit to make it less rambling. Since that’s clearly not going to happen, reader beware!)

    I had the privilege of spending around 90 minutes last night sitting and listening to Sonny Rollins play a concert at the Disney Center in LA. If you don’t know who Sonny Rollins is, I don’t know how to explain the experience; if you know who he is, I don’t need to. Suffice it to say that he has been recording professionally for over 50 years, and helped create an entire genre of music. A true master by any definition.

    One of the most intriguing aspects of a concert like this, however, is watching the master step aside and let the rest of the musicians play. Not just play their parts, but really play… letting them take over the spotlight, strut their stuff, soak up enthusiastic applause from the crowd. Maybe a lot of it has to do with the fact that Sonny Rollins has been doing this for more than a half-century. Maybe it has something to do with a kind of patience you learn when you’re on the far side of 80 – and the man can still blow a mean sax for 90 minutes without stopping! Maybe it has to do with the fact that he was out there for the love of the music and the love of the show, not because he had anything to prove to anyone and, I like to think, not for the money. Perhaps it had more to do with the fact that, when you’re at that level of mastery, the other musicians are going to be good. Really good. Whatever the reasons, there was an incredible freedom on that stage – the ability to improvise, for each musician to showcase their own specialization and skills, and then come back to the common theme, back to being on the same page, as it were.

    All this took place in the same venue that is home to the L.A. Phil. Somehow, I can’t ever see the same kind of free-wheeling improvisation happening in that context. And, since I’m a geek, I started thinking about agility.

    Rollins has put together a quintet that reflects his own particular style and past. No upright bass or piano for Rollins – drums, bongos, electric guitar and bass guitar along with his sax. It’s not about the mix of instruments. Other trios, quartets, and sextets use different mixes of instruments. New Orleans jazz tends towards trombones instead of sax; some prefer cornet or trumpet. But no matter what the choice of instruments, size matters.

    Team sizes are something I’ve been thinking about for a while. We’re on a quest to rethink how our teams are organized. They just feel too big, too unwieldy. In fact, they really don’t feel like teams at all. Most of the time, they feel more like collections of people who happen to report to the same manager. I attribute this to a couple of factors.

    One is over-specialization; we have a tendency to have people work in silos. Although the teams are product-focused, within them our developers are both generalists and specialists. On the one hand, we expect them to be able to build an entire vertical slice of the application; on the other hand, each developer tends to be responsible for a vertical slice. As a result, developers often work on their own piece of the puzzle, in isolation. This sort of feels like working on a jigsaw in a group – each person taking a set of colors and piecing them together to reveal a portion of the overall picture. But what inevitably happens when you go to meld all those pieces together? Inevitably, you have some sections that are too big to move easily. These sections end up falling apart under their own weight as you try to move them. Not only that, but there are other challenges – figuring out where each section fits, and how to tie it into the rest of the puzzle. Often, this is when you find a few pieces need to be added – these pieces are “glue,” if you will.

    The other issue arises from the overhead of maintaining communications in a team. My mother, who worked in IT for around 30 years, once told me that 20% per team member is a good rule of thumb for the cost of maintaining communication. While this is only a rule of thumb, it seems to imply that any team over about 6 people is going to become less agile simply because of the communications burden.

    Teams of ten or twelve seem to fall into the philharmonic organizational model. Complicated pieces of music requiring dozens of players to all be on the same page require a much different model than the jazz quintet. There’s much less room for improvisation, originality or freedom. (There are probably orchestral musicians who will take exception to this characterization; I’m calling it like I see it from the cheap seats.) And there’s one guy up front who is running the show, whose job is to keep all of those dozens of players on the same page, to facilitate communications. Somehow, the orchestral model doesn’t feel much like a self-organizing team, either. The first violin may be the best violinist in the orchestra, but they don’t get to perform free-wheeling solos. I’ve never heard of an orchestra getting together for a jam session.

    But I have heard of teams that organize their work based on the developers available, rather than organizing the developers based on the work required. I have heard of teams where desired functionality is deferred – or worse yet, schedules are missed – because one critical person doesn’t have any bandwidth available. I’ve heard of teams where people simply don’t have the big picture, because there is too much communication overhead for everyone to be aware of everything happening on a project.

    I once heard Paul Rayner say something to the effect of “you have a process that is perfectly designed to give you exactly the results you have.” Given a choice, I want a process that’s much more like jazz than orchestral music. I want a process that doesn’t burden me with lots of forms and checkboxes and stuff. Give me the simplest, most lightweight process that will work – and a smaller team of the best developers I can find. This seems like the kind of process that will get the kind of result I want to be part of.

  • SharePoint Saturday Michigan Is Coming Up!

    - by Brian Jackett
    Next Saturday, March 13th, Ann Arbor, MI will be hosting SharePoint Saturday Michigan (SPSMI). For those unfamiliar, SharePoint Saturday is a community-driven event where various regional and national speakers gather to present at a FREE conference on all topics related to SharePoint. This will be my third SharePoint Saturday and the second one I’ve had the honor of presenting at. My presentation is titled “Real World Deployment of SharePoint 2007 Solutions” (click here for the SpeakerRate link). After taking a look at the speaker and session list, I can tell you with great excitement that this event is packed with great speakers and topics. Register here and come on out to SharePoint Saturday Michigan on March 13th. If you’re attending, feel free to track me down and say hi. See you there. -Frog Out

  • Addicted to the MIX Buzz

    - by Dave Campbell
    Well, it's the Friday before MIX10, and I'm officially of no use to anybody. I'll be driving up to 'Vegas Sunday... hopefully rolling in mid-to-late afternoon, checking in at my $31.50/night (including WiFi) motel, getting registered, and then hanging out around registration to see who is there. The first organized thing to do is at 9 PM, so I'm open to suggestions for Sunday evening... maybe we can get a gang together for dinner? Monday is the keynote... I'm addicted to the buzz in the ballroom on the first day; I hope to be close to the front, trying to live blog. Then straight to Ballroom A to stake out the spot I'll be in for all 3 days, and you all know why :) I've tagged 40 sessions that I 'want' to see, and there are only 12 slots... damn... if I could, I'd try the Multiplicity thing, but I'm afraid I'd get the idiot on the first try -- or maybe got that one already :) ... but at least I tagged them to make it easy to find after the videos are up. Stuff is going on Sunday, Monday, and Tuesday night. I'm staying over for an event on Thursday and driving back on Friday. I'm not sure how much blogging I'll be doing, but I'll try to hit some 'Cream high spots. I'm sure everyone #NotAtMIX is going to be tuned into the sessions online. I'll be wearing T-shirts with WynApse.com and SilverlightCream.com printed on the back... so if you see some old curmudgeon with such a shirt, IT'S ME! I look forward to seeing all the people I only see once or twice a year, and meeting ones I haven't met yet. What a week... Bring It On and Stay in the 'Light! Technorati Tags: Silverlight    Silverlight 4    MIX10

  • EF4 Code First: Control Unicode and Decimal Precision/Scale with Attributes

    - by Dane Morgridge
    There are several attributes available when using Code First with the Entity Framework 4 CTP5 Code First option. When working with strings you can use [MaxLength(length)] to control the length, and [Required] will work on all properties. But there are a few things missing. By default all strings will be created as unicode, so you will get nvarchar instead of varchar. You can change this using the fluent API, or you can create an attribute to make the change. If you have a lot of properties, the attribute will be much easier and require less code.

    You will need to add two classes to your project to create the attribute itself:

        public class UnicodeAttribute : Attribute
        {
            bool _isUnicode;

            public UnicodeAttribute(bool isUnicode)
            {
                _isUnicode = isUnicode;
            }

            public bool IsUnicode { get { return _isUnicode; } }
        }

        public class UnicodeAttributeConvention : AttributeConfigurationConvention<PropertyInfo, StringPropertyConfiguration, UnicodeAttribute>
        {
            public override void Apply(PropertyInfo memberInfo, StringPropertyConfiguration configuration, UnicodeAttribute attribute)
            {
                configuration.IsUnicode = attribute.IsUnicode;
            }
        }

    The UnicodeAttribute class gives you a [Unicode] attribute that you can use on your properties, and the UnicodeAttributeConvention tells EF how to handle the attribute.

    You will need to add a line to the OnModelCreating method inside your context for EF to recognize the attribute:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
            base.OnModelCreating(modelBuilder);
        }

    Once you have this done, you can use the attribute in your classes to make sure that you get database types of varchar instead of nvarchar:

        [Unicode(false)]
        public string Name { get; set; }

    Another option that is missing is the ability to set the precision and scale on a decimal. By default decimals get created as (18,0). If you need decimals to be something like (9,2), then you can once again use the fluent API or create a custom attribute. As with the unicode attribute, you will need to add two classes to your project:

        public class DecimalPrecisionAttribute : Attribute
        {
            int _precision;
            private int _scale;

            public DecimalPrecisionAttribute(int precision, int scale)
            {
                _precision = precision;
                _scale = scale;
            }

            public int Precision { get { return _precision; } }
            public int Scale { get { return _scale; } }
        }

        public class DecimalPrecisionAttributeConvention : AttributeConfigurationConvention<PropertyInfo, DecimalPropertyConfiguration, DecimalPrecisionAttribute>
        {
            public override void Apply(PropertyInfo memberInfo, DecimalPropertyConfiguration configuration, DecimalPrecisionAttribute attribute)
            {
                configuration.Precision = Convert.ToByte(attribute.Precision);
                configuration.Scale = Convert.ToByte(attribute.Scale);
            }
        }

    Add your line to OnModelCreating:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
            modelBuilder.Conventions.Add(new DecimalPrecisionAttributeConvention());
            base.OnModelCreating(modelBuilder);
        }

    Now you can use the following on your properties:

        [DecimalPrecision(9,2)]
        public decimal Cost { get; set; }

    Both these options use the same concepts, so if there are other attributes that you want to use, you can create them quite simply. The key to it all is the PropertyConfiguration classes. If there is a class for the datatype, then you should be able to write an attribute to set almost everything you need. You could also create a single attribute to encapsulate all of the possible string combinations instead of having multiple attributes on each property.

    All in all, I am loving Code First, and having attributes to control database generation instead of using the fluent API is huge and saves me a great deal of time.
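    For comparison, here is a rough sketch of the fluent API route mentioned above, configured per property inside OnModelCreating; the Product class and its properties are placeholders, and the method names shown are from EF 4.1 (CTP5 naming may differ slightly):

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            // Map Name to varchar instead of the default nvarchar.
            modelBuilder.Entity<Product>()
                .Property(p => p.Name)
                .IsUnicode(false);

            // Map Cost to decimal(9,2) instead of the default decimal(18,0).
            modelBuilder.Entity<Product>()
                .Property(p => p.Cost)
                .HasPrecision(9, 2);

            base.OnModelCreating(modelBuilder);
        }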

  • XNA Notes 001

    - by George Clingerman
    Just a quick recap of things I noticed going on in or around the XNA community this past week. I'm sure there's a lot I missed (it's a pretty big community with lots of different parts to it), but these were the things I caught that I thought were pretty cool.

    The XNA Team
    - Michael Klucher gave a list of books every gamer should read. http://twitter.com/#!/mklucher/status/22313041135673344
    - Shawn Hargreaves posted Nelxon Studio's cheatsheet for converting 3.1 to 4.0: http://blogs.msdn.com/b/shawnhar/archive/2011/01/04/xna-3-1-to-4-0-cheat-sheet.aspx
    - XNA Game Studio won the Frontline award for Programming Tool from GameDev magazine! Congrats to the XNA team! http://www.gdmag.com/homepage.htm

    XNA MVPs
    - In January several MVPs were up for re-election; Jim Perry, Andy 'The ZMan' Dunn, Glenn Wilson and myself were all re-awarded a Microsoft MVP award for our contributions to the XNA/DirectX communities. https://mvp.support.microsoft.com/communities/mvp.aspx?product=1&competency=XNA%2fDirectX
    - A movement to get Michael McLaughlin an MVP award has started and you can join in too! http://twitter.com/#!/theBigDaddio/status/22744458621620224 http://www.xnadevelopment.com/MVP/MichaelMcLaughlinMVP.txt
    - Don't forget you can nominate ANYONE for an MVP award; that's how they work. https://mvp.support.microsoft.com/gp/mvpbecoming

    XNA Developers
    - James Silva of Ska Studios hit 9,200 sales of ZP2KX and recommends you listen to Infected Mushroom. http://twitter.com/#!/Jamezila/status/22538865357094912 http://en.wikipedia.org/wiki/Infected_Mushroom
    - Noogy, creator of the upcoming XBLA title Dust: An Elysian Tail, posts some details of his art creation. http://noogy.com/image/statue/statue.html

    Xbox LIVE Indie Game News
    - Microsoft posted an acknowledgment that there was an issue with the sales data; it has been addressed, with an apology for not posting about it sooner. http://forums.create.msdn.com/forums/p/71347/436154.aspx#436154
    - Winter Uprising sales are still chugging along, and the numbers are being updated by Xalterax (by those developers willing to actually share sales numbers; thanks for sharing guys, much appreciated!) http://forums.create.msdn.com/forums/t/70147.aspx
    - Don't forget about Dream Build Play coming up in February! http://www.dreambuildplay.com/Main/Home.aspx
    - The Best Xbox LIVE Indie Games December Edition comes out on NeoGAF: http://www.neogaf.com/forum/showthread.php?t=414485
    - The Greatest Xbox LIVE Indie Games of 2010 on Dealspwn; congrats to DrMistry and MStarGames for the #1 spot with his massive XBLIG Space Pirates From Tomorrow! http://www.dealspwn.com/xbligoty-2010/

    XNA Game Development
    - The future of XACT and WP7 has finally been confirmed, and we finally know what our options are for looping audio seamlessly on WP7. http://forums.create.msdn.com/forums/p/61826/436639.aspx#436639
    - Super Mario 3 Design Notes is an interesting read for XBLIG developers, giving some insight into the training that naturally occurs for players as they start playing the game. Good things for XBLIG developers to think about. http://www.significant-bits.com/super-mario-bros-3-level-design-lessons

  • Start Your Engines

    - by Richard Jones
    Just passing on the good news from the MIX keynote yesterday. The CTP developer kit for Windows Phone 7 Series is available here: http://www.microsoft.com/downloads/details.aspx?FamilyID=2338b5d1-79d8-46af-b828-380b0f854203&displaylang=en#filelist First impressions are great. Hello World up and running in under 2 minutes. Technorati Tags: Windows Phone 7 Series

  • Management Reporter Installation – Lessons Learned

    - by Ryan McBee
    After successfully completing several installations of Management Reporter this year, I wanted to share a few lessons learned that should help you.

    First, you will want to make sure that you install Management Reporter under a domain account as opposed to a local system or network service account. Management Reporter gives you the option to install under those accounts, but using a domain account is the best-practice approach.

    After installing Management Reporter, you will want to make sure that Directory Browsing is enabled within the IIS server of your site, or you will have problems when you go to use Management Reporter. By default it is disabled in Server 2008 R2, and you will need to change the setting under the Actions pane in IIS Manager.

    Lastly, you will want to make sure that SQL Server is running under a domain account. I have had multiple situations where reports were stuck in the Queued status rather than the Processing status of Management Reporter. After reviewing resolution 5 of KB 2298248, it was determined that running SQL Server under a domain account is the way to go.

  • BizTalk 2009 - The Community ODBC Adapter: Receive Location

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter and also about using the ODBC adapter to generate schemas. But what about creating a receive location? An ODBC receive location will periodically poll the configured database using the stored procedure or SQL string defined in your request schema. If you need to, begin by adding a new receive port to your BizTalk configuration. Then create a new receive location, select the ODBC adapter and click Address. You will now be shown the ODBC Community Adapter Transport Properties window. Select Connection String and you will be shown the Choose Data Source window. If you have already created the Test Database source when generating a schema from ODBC, it will be shown (if not, take a look at my previous post to see how this is done). You will then need to choose the SQL command that will be run by the receive port. In this case I have deployed the Test Mapping schemas that I created previously and selected the Request schema. You should now have populated the appropriate properties for the ODBC Community Adapter. Finally, set the standard receive location properties and your ODBC receive location is ready.

  • Imaginet Resources acquires Notion Solutions

    - by Aaron Kowall
    Huge news for my company, and for me especially: http://www.imaginets.com/news--events/imaginet_acquisition_notion.html With the acquisition we become a very significant player in the Microsoft ALM space. This increases our scale significantly, and also our knowledge base. We now have two Regional Directors and a pile of MS MVPs. The timing couldn't be more perfect, since the launch of Visual Studio 2010 and Team Foundation Server 2010 is TODAY! Oh, and we aren't done with announcements today... More later. Technorati Tags: VS 2010, TFS 2010, Notion, Imaginet

  • MaxTotalSizeInBytes - Blind spots in Usage file and Web Analytics Reports

    - by Gino Abraham
    Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2013/10/28/maxtotalsizeinbytes---blind-spots-in-usage-file-and-web-analytics.aspx
    http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/04/16/usage-file-and-web-analytics-reports-with-blind-spots.aspx

    In my previous post (Troubleshooting SharePoint 2010 Web Analytics), I referenced a problem that can occur when exceeding the daily partition size for the LoggingDB, which generates the ULS message "[Partition] has exceeded the max bytes". Below, I wanted to provide some additional info on this particular issue and help identify some options if it occurs. As an aside, this post only applies if you are missing portions of Usage data: think blind spots on intermittent days, or user activity regularly sparse in the afternoon/evening. If this fits your scenario, read on. But if Usage logs are outright missing, go check out my Troubleshooting post first.

    Background on the problem:
    The LoggingDB database has a default maximum size of ~6GB. However, SharePoint evenly splits this total size into fixed-size logical partitions, and the number of partitions is defined by the number of days to retain Usage data (by default 14 days). In this case, 14 partitions would be created to account for the 14 days of retention. If the retention were halved to 7 days, the LoggingDB would be split into 7 correspondingly larger partitions. In other words, the partition size is generally defined as [max size for DB] / [number of retention days].

    Going back to the default scenario, the "max size" for the LoggingDB is 6200000000 bytes (~6GB) and the retention period is 14 days. Using our formula, this would be [~6GB] / [14 days], which equates to 444858368 bytes (~425MB) per partition per day. Again, if the retention were halved to 7 days (which halves the number of partitions), the resulting partition size becomes [~6GB] / [7 days], or ~850MB per partition.

    From my experience, when the partition size for any given day is exceeded, the usage logging for the remainder of the day is essentially thrown away, because SharePoint won't allow any more to be written to that day's partition. The only clue that this is occurring (beyond truncated usage data) is an error such as the following in the ULS:

        04/08/2012 09:30:04.78    OWSTIMER.EXE (0x1E24)    0x2C98    SharePoint Foundation    Health    i0m6    High    Table RequestUsage_Partition12 has 444858368 bytes that has exceeded the max bytes 444858368

    It's also worth noting that the exact bytes reported (e.g. '444858368' above) may vary slightly among farms. For example, you may instead see 445226812, 439123456, or something else in the ballpark. The exact number itself doesn't matter; this error message indicates that the reported usage has exceeded the partition size for the given day.

    What it means:
    The error itself is easy to miss, which can lead to substantial gaps in the reporting data (your mileage may vary) if not identified. At this point, I can only advise periodically checking the ULS logs for this message. Down the road, I plan to explore whether [Developing a Custom Health Rule] could be leveraged to identify the issue (if you've ever built custom Health Rules, I'd be interested to hear about your experiences).

    Overcoming this issue also poses a challenge, with workaround options including:

    Lower the retention
    Because the partition size is generally defined as [max size] / [number of retention days], the first option is to lower the number of days to retain the data: the lower the retention, the lower the divisor and thus the bigger each partition. For example, halving the retention from 14 to 7 days would halve the number of partitions but double the partition size to ~850MB (e.g. [6200000000 bytes] / [7 days] = ~850MB partitions). Lowering it to 2 days would result in two ~3GB partitions, and so on.

    Recreate the LoggingDB with an increased size
    The property MaxTotalSizeInBytes is exposed by OM code for the SPUsageDefinition object and can be updated with the example PowerShell snippet below. However, updating this value has no immediate impact, because this size only applies when creating a LoggingDB. Therefore, you must create a new LoggingDB for the Usage Service Application. The gotcha: this effectively deletes all prior Usage data, because the Usage Service Application can only have a single LoggingDB.

    Here is an example snippet to update the "Page Requests" Usage Definition:

        $def = Get-SPUsageDefinition -Identity "page requests"
        $def.MaxTotalSizeInBytes = 12400000000
        $def.Update()

    Create a new logging database and attach it to the Usage Service Application using the following command:

        Get-SPUsageApplication | Set-SPUsageApplication -DatabaseServer <dbServer> -DatabaseName <newDBname>

    Updated (5/10/2012): Once the new database has been created, you can confirm the setting has truly taken by running the following SQL query (be sure to replace the database name in the query with the name provided in the PowerShell above):

        SELECT *
        FROM [WSS_UsageApplication].[dbo].[Configuration] WITH (NOLOCK)
        WHERE ConfigName LIKE 'Max Total Bytes - RequestUsage'

  • Quick and Dirty way to search for characters in a string

    - by mbcrump
    I saw this today on Stack Overflow; all credit goes to Jon Skeet. I have always used regex or loops to check for special characters. This is a more human-friendly way of doing it.

        using System;

        class Program
        {
            // The set of characters to look for.
            private static readonly char[] SpecialChars = "!@#$%^&*()".ToCharArray();

            static void Main(string[] args)
            {
                // Text to search.
                string text = "michael";

                int indexOf = text.IndexOfAny(SpecialChars);

                if (indexOf == -1)
                {
                    Console.WriteLine("no special char");
                }

                Console.ReadLine();
            }
        }
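    As a variation, if you want a simple boolean rather than an index, the same check can be written with LINQ; this is a suggested alternative on my part, not from Jon's answer:

        using System;
        using System.Linq;

        class Program
        {
            private static readonly char[] SpecialChars = "!@#$%^&*()".ToCharArray();

            static void Main()
            {
                string text = "michael";

                // Any() short-circuits as soon as a special character is found.
                bool hasSpecial = text.Any(SpecialChars.Contains);

                Console.WriteLine(hasSpecial ? "contains a special char" : "no special char");
            }
        }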

  • Windows Desktop Virtualization Gets Easier

    - by andrewbrust
    This past Thursday, Microsoft announced that Windows (7) Virtual PC (WVPC) and its XP Mode feature would no longer require hardware-assisted virtualization (HAV). That means any PC running Windows 7 Pro, or higher, can now run this software. And that’s a great thing because, as I noted in a post almost five months ago, determining whether a given PC you might be planning to buy actually offers HAV can be extremely difficult. That meant even dedicated, sophisticated PC users, with a budget for new hardware, might be blocked from using this technology. And that was just plain silly.

    One of the features offered by WVPC, and utilized heavily by XP Mode, is the concept of virtual applications: apps within a guest VM that can actually run within the host’s desktop environment. I find this feature so powerful that my February Redmond Review column entertained the notion of a future version of Windows that runs all applications in this manner.

    The elimination of the HAV requirement for XP Mode and WVPC was just one of many virtualization-related announcements Microsoft made on Thursday. And, interestingly, most of the others were also desktop-related, rather than server-related. This is a welcome change from the multi-year period in which Microsoft enhanced its server virtualization lineup (in Hyper-V) and let the desktop platform fester. Microsoft now seems to understand that desktop virtualization is in high demand and strengthens the Windows franchise. As I explained in the column, even cloud computing can have a desktop spin if desktop virtualization is part of the equation.

    One company that knows this well is Citrix, and a closer alliance between Microsoft and Citrix was one of the many announcements from Thursday. In fact, there’s a whole Web site dedicated to the alliance at http://www.citrixandmicrosoft.com/.

    I’d love to see virtual applications and entire virtual desktops offered as Azure-branded services. This could allow me to run, for example, the full Office client on a variety of desktops I might use, and for large organizations it could easily reduce the expense, burden and duration of the deployment cycle for new versions of Office. Business Intelligence providers, including my own firm, twentysix New York, would find great relief in enabling their customers to run the newest version of Excel, with the latest BI capabilities, instead of having to wait the requisite two to three years it takes for many Fortune 500 customers to upgrade.

    Microsoft should do more, and faster. WVPC still does not support 64-bit guest images, even on 64-bit hosts. That needs to be fixed. File access from the guest to the host needs to be improved (right now, it’s done through Terminal Services/Remote Desktop file sharing, and it’s slow), and VM load times need to be significantly reduced before virtualized apps can become the norm. (I suppose the advance of solid state drive technology will help there.)

    I do think these improvements will come, because Microsoft is focused on the virtual desktop now. And that’s a smart focus to have.

  • Responsive Design: Media query fix for IE10 on Windows Phone 8

    - by ihaynes
    Originally posted on: http://geekswithblogs.net/ihaynes/archive/2013/07/01/responsive-design-media-query-fix-for-ie10-on--windows.aspx

    The version of IE10 on Windows Phone 8 apparently has a bug which results in media queries not seeing the correct device width. This post from Devhammer explains all: http://devhammer.net/responsive-design-fix-for-windows-phone-8-device-adaptation

    I'd not noticed this on the WP8 emulator, which proves yet again that testing on real devices is essential.

  • MVC Portable Areas – Static Files as Embedded Resources

    - by Steve Michelotti
    This is the third post in a series related to build and deployment considerations as I’ve been exploring MVC Portable Areas:

    #1 – Using Web Application Project to build portable areas
    #2 – Conventions for deploying portable area static files
    #3 – Portable area static files as embedded resources

    In the last post, I walked through a convention for managing static files. In this post I’ll discuss another approach to managing static files (e.g., images, css, js, etc.). With this approach, you *also* compile the static files as embedded resources into the assembly, similar to the *.aspx pages. Once again, you can set this to happen automatically by simply modifying your *.csproj file to include the desired extensions, so you don’t have to remember every time you add a file:

        <Target Name="BeforeBuild">
          <ItemGroup>
            <EmbeddedResource Include="**\*.aspx;**\*.ascx;**\*.gif;**\*.css;**\*.js" />
          </ItemGroup>
        </Target>

    We now need a reliable way to serve up these static files that are embedded in the assembly. There are a couple of ways to do this, but one way is to simply create a Resource controller whose job is dedicated to doing this:

        public class ResourceController : Controller
        {
            public ActionResult Index(string resourceName)
            {
                var contentType = GetContentType(resourceName);
                var resourceStream = Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName);

                return this.File(resourceStream, contentType);
            }

            private static string GetContentType(string resourceName)
            {
                var extension = resourceName.Substring(resourceName.LastIndexOf('.')).ToLower();
                switch (extension)
                {
                    case ".gif":
                        return "image/gif";
                    case ".js":
                        return "text/javascript";
                    case ".css":
                        return "text/css";
                    default:
                        return "text/html";
                }
            }
        }

    In order to use this controller, we need to make sure we’ve registered the route in our portable area registration (the ResourceRoute mapping at the top of RegisterArea):

        public class WidgetAreaRegistration : PortableAreaRegistration
        {
            public override void RegisterArea(System.Web.Mvc.AreaRegistrationContext context, IApplicationBus bus)
            {
                context.MapRoute("ResourceRoute", "widget1/resource/{resourceName}",
                    new { controller = "Resource", action = "Index" });

                context.MapRoute("Widget1", "widget1/{controller}/{action}", new
                {
                    controller = "Home",
                    action = "Index"
                });

                RegisterTheViewsInTheEmbeddedViewEngine(GetType());
            }

            public override string AreaName
            {
                get { return "Widget1"; }
            }
        }

    In my previous post, we relied on a custom Url helper method to find the actual physical path to the static file, like this:

        <img src="<%: Url.AreaContent("/images/arrow.gif") %>" /> Hello World!

    However, since we are now embedding the files inside the assembly, we no longer have to worry about the physical path. We can change this line of code to:

        <img src="<%: Url.Resource("Widget1.images.arrow.gif") %>" /> Hello World!

    Note that I had to fully qualify the resource name (with namespace and physical location) since that is how .NET assemblies store embedded resources. I also created my own Url helper method called Resource, which looks like this:

        public static string Resource(this UrlHelper urlHelper, string resourceName)
        {
            var areaName = (string)urlHelper.RequestContext.RouteData.DataTokens["area"];
            return urlHelper.Action("Index", "Resource", new { resourceName = resourceName, area = areaName });
        }

    This method gives us the convenience of not having to know how to construct the URL, allowing us to simply refer to the resource name. The resulting html for the image tag is:

        <img src="/widget1/resource/Widget1.images.arrow.gif" />

    so we can always request any image from the browser directly. This is almost analogous to the WebResource.axd file, but for MVC. What is interesting, though, is that we can encapsulate each one of these so that each area can have its own set of resources, easily distinguished because the area name is the first segment of the route. This makes me wonder if something like this ResourceController should be baked into portable areas itself. I’m definitely interested if anyone has opinions on it or has taken alternative approaches.
