Search Results

Search found 4886 results on 196 pages for 'geeks on hugs'.


  • Are Google Maps Open?

    - by EmbeddedInsider
    Right now they are ‘free’, but it is clear what the path forward is:

        4.3 Advertising. The Service currently does not include advertising in the maps images. However, Google reserves the right to include advertising in the maps images provided to you through the Service, but will provide you with ninety (90) days notice prior to the commencement of advertising in the maps images. Such notice may be provided on relevant Google websites, including but not limited to the Google Geo Developers Blog and the Google Maps API Group (or such successor URLs that Google may designate from time to time). During that 90 day period, you may terminate your use of the Service, or provide notice of your refusal to accept advertising in the maps images in accordance with Google's policies and procedures for providing such notice (which Google may make available from time to time in its sole discretion).

    Lawrence Ricci
    www.EmbeddedInsider.com

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    - by Elton Stoneman
    This is the third in the IPASBR series; see also:

    - Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service
    - Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer

    As the patterns get further from the simple .NET full-trust consumer, all that changes is the communication protocol and the authentication mechanism. In Part 3 the scenario is that we still have a secure .NET environment consuming our service, so we can store shared keys securely, but the runtime environment is locked down so we can't use Microsoft.ServiceBus to get the nice WCF relay bindings. To support this we will expose a RESTful endpoint through the Azure Service Bus, and require the consumer to send a security token with each HTTP service request.

    Pattern applicability

    This is a good fit for scenarios where:

    - the runtime environment is secure enough to keep shared secrets
    - the consumer can execute custom code, including building HTTP requests with custom headers
    - the consumer cannot use the Azure SDK assemblies
    - the service may need to know who is consuming it
    - the service does not need to know who the end-user is

    Note there isn't actually a .NET requirement here. By exposing the service as a REST endpoint, anything that can talk HTTP can be a consumer. We'll authenticate through ACS, which also gives us REST endpoints, so the service is still accessed securely. Our real-world example would be a hosted cloud app, where we have enough room in the app's customisation to keep the shared secret somewhere safe and to hook in some HTTP calls. We will be flowing an identity through to the on-premise service now, but it will be the service identity given to the consuming app - the end user's identity isn't flowed through yet.

    In this post, we'll consume the service from Part 1 in ASP.NET using the WebHttpRelayBinding. The code for Part 3 (+ Part 1) is on GitHub here: IPASBR Part 3.

    Authenticating and authorizing with ACS

    We'll follow the previous examples and add a new service identity for the namespace in ACS, so we can separate permissions for different consumers (see the walkthrough in Part 1). I've named the identity partialTrustConsumer. We'll be authenticating against ACS with an explicit HTTP call, so we need a password credential rather than a symmetric key - for a nice secure option, generate a symmetric key, copy it to the clipboard, then change the type to password and paste in the key.

    We then need to do the same as in Part 2: add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus:

        Issuer: Access Control Service
        Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
        Input claim value: partialTrustConsumer
        Output claim type: net.windows.servicebus.action
        Output claim value: Send

    As with Part 2, this sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener, or manage the namespace.

    RESTfully exposing the on-premise service through Azure Service Bus Relay

    The Part 3 sample code is ready to go - just put your Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files. But to do it yourself is very simple. We already have a WebGet attribute in the service for making REST calls locally, so we are just going to add a new endpoint which uses the WebHttpRelayBinding to relay that service through Azure.
    It's as easy as adding this endpoint to Web.config for the service:

        <endpoint address="https://sixeyed-ipasbr.servicebus.windows.net/rest"
                  binding="webHttpRelayBinding"
                  contract="Sixeyed.Ipasbr.Services.IFormatService"
                  behaviorConfiguration="SharedSecret">
        </endpoint>

    - and adding the webHttp attribute in your endpoint behavior:

        <behavior name="SharedSecret">
          <webHttp/>
          <transportClientEndpointBehavior credentialType="SharedSecret">
            <clientCredentials>
              <sharedSecret issuerName="serviceProvider"
                            issuerSecret="gl0xaVmlebKKJUAnpripKhr8YnLf9Neaf6LR53N8uGs="/>
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>

    Where's my WSDL?

    The metadata story for REST is a bit less automated. In our local webHttp endpoint we've enabled WCF's built-in help, so if you navigate to:

        http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/help

    - you'll see the URI format for making a GET request to the service. The format is the same over Azure, so this is where you'll be connecting:

        https://[your-namespace].servicebus.windows.net/rest/reverse?string=abc123

    Build the service with the new endpoint, open that in a browser and you'll get an XML version of an HTTP status code - a 401 with an error message stating that you haven't provided an authorization header:

        <?xml version="1.0"?><Error><Code>401</Code><Detail>MissingToken: The request contains no authorization header..TrackingId:4cb53408-646b-4163-87b9-bc2b20cdfb75_5,TimeStamp:10/3/2012 8:34:07 PM</Detail></Error>

    By default, the setup of your Service Bus endpoint as a relying party in ACS expects a Simple Web Token to be presented with each service request, and in the browser we're not passing one, so we can't access the service. Note that this request doesn't get anywhere near your on-premise service; Service Bus only relays requests once they've got the necessary approval from ACS.

    Why didn't the consumer need to get ACS authorization in Part 2? It did, but it was all done behind the scenes in the NetTcpRelayBinding. By specifying our Shared Secret credentials in the consumer, the service call is preceded by a check on ACS to see that the identity provided is a) valid, and b) allowed access to our Service Bus endpoint. By making manual HTTP requests, we need to take care of that ACS check ourselves now. We do that with a simple WebClient call to the ACS endpoint of our service; passing the shared secret credentials, we will get back an SWT:

        var values = new System.Collections.Specialized.NameValueCollection();
        values.Add("wrap_name", "partialTrustConsumer"); // service identity name
        values.Add("wrap_password", "suCei7AzdXY9toVH+S47C4TVyXO/UUFzu0zZiSCp64Y="); // service identity password
        values.Add("wrap_scope", "http://sixeyed-ipasbr.servicebus.windows.net/"); // this is the realm of the RP in ACS

        var acsClient = new WebClient();
        var responseBytes = acsClient.UploadValues("https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/", "POST", values);
        rawToken = System.Text.Encoding.UTF8.GetString(responseBytes);

    With a little manipulation, we then attach the SWT to subsequent REST calls in the authorization header; the token contains the Send claim returned from ACS, so we will be authorized to send messages into Service Bus.
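    That "little manipulation" isn't shown at this point in the post, but a minimal sketch might look like this. It is an assumption based on the standard WRAP v0.9 response format rather than the article's own code: the raw response is form-encoded, so we pull out the wrap_access_token value, unescape it, and present it in the WRAP scheme (this fragment also needs System.Linq):

        // Extract the token from the form-encoded ACS response
        // (wrap_access_token=...&wrap_access_token_expires_in=...)
        var token = rawToken.Split('&')
            .Single(value => value.StartsWith("wrap_access_token=", StringComparison.OrdinalIgnoreCase))
            .Split('=')[1];

        // Service Bus expects the WRAP scheme in the Authorization header
        var serviceClient = new WebClient();
        serviceClient.Headers.Add("Authorization",
            string.Format("WRAP access_token=\"{0}\"", Uri.UnescapeDataString(token)));
        var reversed = serviceClient.DownloadString(
            "https://sixeyed-ipasbr.servicebus.windows.net/rest/reverse?string=abc123");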
    Running the sample

    Navigate to http://localhost:2028/Sixeyed.Ipasbr.WebHttpClient/Default.cshtml, enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure. Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware of who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • first install for windows eight.....da beta

    - by raysmithequip
    The W8 preview is now installed and I am enjoying it.  I remember the learning curve of my first unix machine back in the eighties; this ain't that. It is normal for me to do the first OS install with a keyboard and low end monitor...you never know what you'll encounter out in the field.  The OS took to it like a fish to water.  I used a low end Intel motherboard (a DP55W) I gathered on the cheap, an LGA1156 i5 from the used bin, a pair of 6 gig DDR3 sticks, a Rosewill 550 watt power supply, a cheap used twenty-buck sub-200 gig WD SATA drive, a half-working DVD burner and an Asus fanless Nvidia vid card - not a great one, but sub $50.00 on newey eggey...I did have to hunt the MS forums for a key and of course to activate the thing; if DOS had needed this outmoded ritual, we would still be on CP/M and Osborne would be a household name. Of course little do people know that this ritual was common as far back as the seventies on AT&T unix installs....not really, but it was possible. Back when I ran a BBS, I used to joke about what hell would have been wrought had DOS 3.2 machines been required to dial into my BBS to send Fido mail to MS and wait for an acknowledgement.  All in all the thing was pushing a seven on the MS Richter scale, not including the vid card; sadly that came in at just a tad over three....I wanted to evaluate it as a possible replacement for critical machines that in the past went down due to vid card fan failures....you have no idea what a customer thinks when you show them a failed vid card fan..."you mean that little plastic piece of junk caused all this!!??!!!"...yea man.  Some production machines don't need any sort of vid, so I will at least keep it on the maybe list for those; MTBF is a very important factor. Big box stores should put percentage-of-failures-within-24-months estimates on the outside of the carton for sure.  And a warning that the power supplies are already at their limit.  Let's face it, today even 550w can be iffy.

    A few neat eye candy improvements over the earlier Windows versions are nice; the Metro screen is nice, and anyone who has used a newer phone recently will intuitively drag their fingers across the screen....lot of good that was with no mouse or touch screen though.  Lucky me, I have been using Windows since day one; I still have a copy of Win 2.0 (and every other version) for no good reason.  Still, the old 'ix collection of disks is much larger. Recompiling any kernel is another silly ritual: same machine, different day, same recompile...argh. RH is my all time fav; Mandrake was always missing something, like it rewrote the init file or something; Novell is OK as long as you stay on the beaten path; and of course Ubuntu normally recompiles with the same errors consistently....makes life easy that way....no errors on Windows Eight, just a screen that did not match the installed hardware. Naturally I alt-tabbed right out of it, then hit the flag key to find the start menu....no start button. I miss the start button already. Keyboard cowboy funnin' and I was browsing the hard drive; nothing stunning there. I like that - means I can find stuff. Only I can't find what I want: the start button....the start menu is that first screen for touch tablets. No biggie for the user; that is where they will want to be, I can see that. Admins won't want to be there; it is easy enough to get to the control panel a bazillion other ways though, just not via the start button (see a pattern here?).

    Personally, from the keyboard I find it fun to hit the carets along the location bar at the top of the Explorer screen with tabs and arrows and choose SHOW ALL CONTROL PANEL ITEMS, or thereabouts. Bottom line, I love Seven and I'll love Eight even more!...very happy I did not have to follow the normal rule of thumb (a customer watching me build a system and asking questions once said "oh I get it, so every piece you put in there is basically a hundred bucks, right?")...ok, sure, pretty much, more or less, well, ya dude.  It will be WAY past October till I get a real touch screen, but I did pick up a pair of cheap Tatungs so I can try the NEW main start screen; I parse a lot of folders and have a vision of how a pair of touch screens will be easier than landing a rover on Mars.  Ok, fine, they are way smallish, and I don't expect multitouch to work, but we are talking a few percent of the price of a new 21 inch ViewSonic touch screen.  Will this OS be a game changer?  I don't know.  Bottom line: with all the pads and droids in the world, it is more of a catch-up move at first glance.  Not something MS is used to.  An app store?  I can see MS's motivation - the others have it.  I gather there will not be gadgets there; go ahead, google gadgets and take a gander at what MS did to the once-populated gadget page. There used to be hundreds of gadgets; they are already gone.  They replaced gadgets?  Sort of. I'll drop that; it's a bit of a sore point for me.  More of interest was what happened when I downloaded stuff off CodePlex and some other normal programs that I like, like Orbitron - top o' my list!!...cardware it is...anyways, click on the exe, get a screen - normal for Windows - this one indicated that I was not running a normal Windows program, with a button to exit the install. Naw, I hit details, and a hidden run-anyway option came into view....great, my path to normal Windows has detected a program tha.....yea ok, ACL is on, fine. Moving along, I got Orbitron installed in record time and was tracking the ISS on the newest Microsoft OS - beta of course - felt like the first time I set up BSD all those years ago...FUN!!...I suppose I gotta start to think about budgeting for the real OS when it comes out in October; by then I should have a Raspberry Pi and be done with Fedora remixed.  Of course that sounds like fun too!!  I would use this OS on a tablet or phone.  I don't like the idea of being herded to an app store - don't like that on anything; we are Americans and want real choices, not marketed hype, lest you are younger with OPM (other people's money).   This OS would be neat on a Zune, but I suspect the Zune is a goner. I am rooting for Microsoft; after all, their default password is not admin anymore, nor alpine - it's blank. Others force a password; my first fawn password was so long I could not even log into it with the password in front of me. Who the heck uses %$# anyways, and if I were writing a brute force attack, what the heck kind of impasse is that anyways at .00001 microseconds per code execution cycle (just an unqualified number, not a real clock speed)....AI is where it will be before too long, and MS is on that path. Perhaps soon someone will sit down and write an app for the Kinect that watches your eyes while you scan the new main start screen, clicking on the big E icon when you blink.....boy is that going to be fun!!!! Sure. Blink, dammit, blink, dammit...... OPM no doubt. I like Windows Eight; we are moving forwards. Better keep a close eye on Ubuntu, though.

    The real clinch comes when open source becomes paid source......don't blink; I already see plenty of very expensive 'ix apps, some even in app stores already. More to come.......

    Read the article

  • [Dear Recruiter] I developed in Mo'Fusion

    - by refuctored
    Foreword: Sometimes I really feel like technology recruiters have no experience with, or knowledge of, the field they are recruiting for.  A warning to those companies hiring technical recruiters -- ensure that the technical recruiters you hire to fill a position are actually technical.  Here's proof below, where I make up completely ridiculous technologies but still get interest from the recruiter for an interview.

    Letter to me:

    Hello - Your name came up as a possible match for a long term contract Cold Fusion Developer role I have in Bothell, WA.  This role requires you to be onsite in Bothell, WA. This is a tough role to fill so I was hoping you might have someone you can recommend? Unfortunately no telecommute. Thank you! Sincerly, Mindy Recruiter

    My response:

    Mindy -- Wow I'm super-excited that you took the time to contact me about this position!  Let me tell you, you won't be disappointed with my skill set!

    Firstly, I've been developing in ColdFusion since 1993, before it was owned by Adobe, when it was operating under the code name "Hot-Jack".  Recently I started developing under the Domain-View-Driven-Domain-Model (DVDDM), integrating client-side CF on Moobuntu.  Not only do I have a boatload of ColdFusion EXP, I also have a ton of experience in the open source community's lesser known derivative of CF, Mo'Fusion (MF).  I've also invested thousands of hours of my time learning esoteric programming languages.

    Look forward to working with you!

    George

    And her response:

    Hi George – just left you a message. Give me a call at your convenience.  The role does require someone to be onsite here.. are you able to relocate yourself?

    Mindy

    [Sigh]

    Read the article

  • Dallas First Regionals 2012 – For Inspiration and Recognition of Science and Technology

    - by T
    Wow!  That is all I have to say after the last 3 days. Three full, fun-filled days in a world that fed the geek, sparked the competitor, inspired the humanitarian, encouraged the inventor, and continuously warmed my heart.  As part of the Dallas First Regionals, I was awed by incredible students who teach as much as they learn, inventive and truly caring mentors who make mentoring look easy, and completely passionate and dedicated volunteers who bring meaning to giving all that you have and to making events fun and safe for all.  If you have any interest in innovation, robotics, or highly motivated students, I can't recommend anything more highly than visiting a First Robotics event.

    This is my third year with First, and I was honored to serve the Dallas First Regionals as both a Web Site evaluator and as the East Field Volunteer Coordinator.  This was also the first year my daughter volunteered with me.  My daughter and I both recognize how different the First program is from other team events.  The difference with First is that everyone is a first class citizen.  It is a difference we can feel through experiencing and observing interactions between executives, respected engineers, students, and event staff.  Even with a ferocious competition, you still find a lot of cooperation, and we never witnessed any belittling between teams or individuals.

    First Robotics coined the term "Gracious Professionalism". It's a way of doing things that encourages high-quality work, emphasizes the value of others, and respects individuals and the community.[1]  I was introduced to this term as the Volunteer Coordinator when I was preparing the volunteer instructional speech.  Through the next few days, I discovered it is truly core to everything that First does.  One of the ways First accomplishes Gracious Professionalism is by utilizing another term they came up with, "Coopertition™". At FIRST, Coopertition™ is displaying unqualified kindness and respect in the face of fierce competition.[1]  One of the things I never liked about sports was the need to "destroy" the other team.  First has really found a way to integrate Coopertition™ into the rules, so teams can be competitive, and part of that competition rewards cooperation.

    Oh, and did I mention it has ROBOTS!!!  This year it had basketball-playing, Kinect-connected, remote controlled robots!  There are no words for how exciting these games are.  You HAVE to check out this year's game, Rebound Rumble, on YouTube, or you can view live action on Dallas First Video (as of this posting, the recordings haven't posted yet but should be there soon).  There are also some images below.

    Whatever it is, these students get it and exemplify it, and these mentors ooze with it.  I am glad that First supports these events so we can all learn and be inspired by these exceptional students and mentors.  I know that no matter how much I give, it will never compare to what I gain volunteering at First.  Even if you don't have time to volunteer, you owe it to yourself to go check out one of these events.  See what the future holds, be inspired, be encouraged, take some knowledge, leave some knowledge and most of all, have FUN.  That is what First is all about and, thanks to First, I get it.

    [Slideshow: First Dallas Regionals 2012]

    1. USFirst http://www.usfirst.org/aboutus/gracious-professionalism

    Read the article

  • VS 11 Beta merge tool is awesome, except for resolving conflicts

    - by deadlydog
    If you've downloaded the new VS 11 Beta and done any merging, then you've probably seen the new diff and merge tools built into VS 11.  They are awesome, and by far a vast improvement over the ones included in VS 2010.  There is one problem with the merge tool though, and in my opinion it is huge.

    Basically, the problem with the new VS 11 Beta merge tool is that when you are resolving conflicts after performing a merge, you cannot tell what changes were made in each file where the code is conflicting.  Was the conflicting code added, deleted, or modified in the source and target branches?  I don't know (without explicitly opening up the history of both the source and target files), and the merge tool doesn't tell me.  In my opinion this is a huge fail on the part of the designers/developers of the merge tool, as it actually forces me to either spend an extra minute for every conflict to view the source and target file history, or to go back to using the merge tool in VS 2010 to properly assess which changes I should take.

    I submitted this as a bug to Microsoft, but they say that this is intentional by design. WHAT?!  So they purposely crippled their tool in order to make it pretty and keep the look consistent with the new diff tool?  That's like purposely putting a little hole in the bottom of your cup for design reasons to make it look cool.  Sure, the cup looks cool, but I'm not going to use it if it leaks all over the place and doesn't do the job that it is intended for.  Bah! But I digress.

    Because this bug is apparently a feature, they asked me to open up a "feature request" to have the problem fixed.  Please go vote up both my bug submission and the feature request so that this tool will actually be useful by the time the final VS 11 product is released.

    Read the article

  • Flow-Design Cheat Sheet – Part II, Translation

    - by Ralf Westphal
    In my previous post I summarized the notation for Flow-Design (FD) diagrams. Now is the time to show you how to translate those diagrams into code. Hopefully you'll feel how different this is from UML. UML leaves you alone with your sequence diagram or component diagram or activity diagram; it leaves it to you how to translate your elaborate design into code. Or maybe UML thinks it's so easy no further explanations are needed? I don't know. I just know that, as soon as people stop designing with UML and start coding, things end up very different from the design. And that's bad. That degrades graphical designs to just time wasted on paper (or in some designer). I even believe that's the reason why most programmers view textual source code as the only and single source of truth. Design and code usually do not match.

    FD is trying to change that. It wants to make true design a first class method in every developer's toolchest. For that, the first prerequisite is to be able to easily translate any design into code. Mechanically, without thinking. Even a compiler could do it :-) (More on that in some other article.)

    Translating to Methods

    The first translation I want to show you is for small designs. When you start using FD you should translate your diagrams like this: functional units become methods. That's it. An input-pin becomes a method parameter, an output-pin becomes a return value. The above is a part. But a board can be translated likewise and calls the nested FUs in order. In any case be sure to keep the board method clear of any and all business logic. It should not contain any control structures like if, switch, or a loop. Boards do just one thing: calling nested functional units in proper sequence. (A sketch of this translation follows below.)

    What about multiple input-pins? Try to avoid them. Replace them with a join returning a tuple. What about multiple output-pins? Try to avoid them. Or return a tuple. Or use out-parameters. But as I said, this simple translation is for simple designs only. Splits and joins are easily done with method translation.

    All pretty straightforward, isn't it? But what about wires, named pins, entry points, explicit dependencies? I suggest you don't use this kind of translation when your designs need these features. Translating to methods is for small scale designs, like you might do once you're working on the implementation of a part of a larger design. Or maybe for a code kata you're doing in your local coding dojo. Instead of doing TDD, try doing FD and translate your design into methods. You'll see that this way it's much easier to work collaboratively on designs, remember them more easily, keep them clean, and lessen the need for refactoring.

    Translating to Events

    [coming soon]
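    As a footnote to the methods translation above, here is a hypothetical C# sketch; the functional unit names are invented, not taken from the article's diagrams. Each FU becomes a method (input-pin becomes parameter, output-pin becomes return value), the board stays free of business logic, and a join collapses multiple input-pins into a tuple-returning method:

        // Parts: one method per functional unit
        static string Normalize(string text) { return text.Trim().ToLower(); }
        static string[] Tokenize(string text) { return text.Split(' '); }

        // Board: just calls the nested FUs in proper sequence - no if, switch, or loop
        static string[] ProcessText(string text)
        {
            var normalized = Normalize(text);
            return Tokenize(normalized);
        }

        // A join replacing two input-pins with a single method returning a tuple
        static Tuple<string, int> Join(string text, int count)
        {
            return Tuple.Create(text, count);
        }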

    Read the article

  • APress Deal of the Day 18/May/2014 - Pro ASP.NET Web API

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/18/apress-deal-of-the-day-18may2014---pro-asp.net-web.aspx

    Today’s $10 Deal of the Day from APress at http://www.apress.com/9781430247258 is Pro ASP.NET Web API. “With the new ASP.NET Web API framework, HTTP has become a first-class citizen of .NET. Pro ASP.NET Web API shows you how to put this new technology into practice to build flexible, extensible web services that run seamlessly on a range of operating systems and devices.”

    Read the article

  • Do you know when to send a done email in Scrum?

    - by Martin Hinshelwood
    At SSW we have always sent done emails to the owner/requestor to let them know that a task is done. Others who are dependent on those tasks are CC'ed so they know they can proceed. But how does that fit into Scrum?

    Update 14th April 2010: Rule added to Rules to better Scrum with TFS

    If you are working on a task: When you complete a Task that is part of a User Story, you need to send a done email to the Owner of that Story. You only need to add the Task #, Summary and a link to the item in WIWA. Remember that all your tasks should be under 4 hours, so spending lots of time on a Done Email for a Task would be counterproductive. Add more information if required; for example, you may have completed the task a different way than previously discussed. Make sure that every User Story has an Owner, as per the rules.

    If you are the owner of a story: When you complete a story, you should send a comprehensive done email, as per the rules, when the story has been completed. Make sure you add a list of all of the Tasks that were completed as part of the story and the Done criteria that you met. If your done criteria say:

    - Built Successfully
    - 30% Code Coverage
    - All tests passed

    then add an illustration to show this.

    Figure: Show that you have met your Done criteria where possible.

    This is all designed to help you, the Scrum Team members (Product Owner, ScrumMaster and Team), validate the quality of the work that has been completed. Remember that you are not DONE until your team says you are done.

    Technorati Tags: SSW,Scrum,SSW Rules

    Read the article

  • A lot has happened since last post!!!!!!!!!

    - by Ratman21
    And I mean a lot! First off, I had two interviews (one was selling insurance, the other installing cable). I have more hope for the cable one. I am getting more emails from my job search engines (and having trouble going through them all and applying for those jobs I know I can do). It seems the more I apply to, the more job emails pop up in my inbox. But at the same time I am fighting feelings of worthlessness (18 months and no job). It is putting a strain on my marriage (we had a blowout over a broken drinking glass since I last posted).

    But, at the same time, I am fighting mad (a figure of speech, really) about not having a job. Look, just because I am over 55 and have gray hair, it does not mean my brain is dead, or that I can no longer troubleshoot a router or circuit or LAN issue, or that I can't do "IT" work at all. And I could prove this if someone would give me a job. Come on, try me for 90 days at minimum wage. I know you will end up keeping me around (hopefully at normal pay). Is anyone hearing me? Come on, take up the challenge!

    Read the article

  • WCF Operations and Multidimensional Arrays

    - by JoshReuben
    You can't pass multidimensional arrays across the wire using WCF - you need to pass jagged arrays. Here are 2 extension methods that will let you convert prior to serialization and convert back after deserialization (they require System.Linq; the inner copy loop is bounded by each row's own length, so ragged input doesn't throw):

        public static T[,] ToMultiD<T>(this T[][] jArray)
        {
            // dimensions: number of rows, and the length of the longest row
            int i = jArray.Count();
            int j = jArray.Select(x => x.Count()).Aggregate(0, (current, c) => (current > c) ? current : c);

            var mArray = new T[i, j];
            for (int ii = 0; ii < i; ii++)
            {
                // copy only as many elements as this row actually has;
                // shorter rows leave default(T) in the remaining cells
                for (int jj = 0; jj < jArray[ii].Length; jj++)
                {
                    mArray[ii, jj] = jArray[ii][jj];
                }
            }
            return mArray;
        }

        public static T[][] ToJagged<T>(this T[,] mArray)
        {
            var cols = mArray.GetLength(0);
            var rows = mArray.GetLength(1);
            var jArray = new T[cols][];
            for (int i = 0; i < cols; i++)
            {
                jArray[i] = new T[rows];
                for (int j = 0; j < rows; j++)
                {
                    jArray[i][j] = mArray[i, j];
                }
            }
            return jArray;
        }

    enjoy!
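    A quick usage sketch (hypothetical data; the service contract is whatever your WCF operation uses): convert to jagged before the call goes over the wire, then back to 2-D on the receiving side:

        double[,] matrix = { { 1.0, 2.0 }, { 3.0, 4.0 } };

        double[][] wireSafe = matrix.ToJagged();      // safe to put in a WCF data contract
        // ...serialize, send across the wire, deserialize...
        double[,] roundTripped = wireSafe.ToMultiD(); // restore the 2-D shape on arrival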

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site, and high time that something was posted.  I have a list of topics that I will be working through and posting.  Some, I am sure, will have been posted by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me.  My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me…

    My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions.  I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

    Enough about me… On to a real problem…

    SSIS Connection Timeouts versus Command Timeouts

    Last week, I was working on automating the processing for a large Analysis Services cube.  I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube.  I had the package working great: tested, and ready for deployment.  It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60 month window.

    My client uses Tivoli for running all their production jobs, not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job.  On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure.

    After a number more failed attempts, and getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds.  This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.  At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate "CommandTimeout" custom property on the data source object that may need to be adjusted for longer running queries.  I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds.
    The CommandTimeout property can also be edited in the SSIS Properties sheet.

    In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds.  I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute.  Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

    The lesson learned for me was two-fold:

    - Always compare query execution times between development and production environments, and don't assume that production will always be faster.  With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.
    - SSIS connection timeout settings do not affect command timeouts.  Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was.  To be fair though, in the 5+ years that I have been working with SSIS, I can recall only one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.
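    For what it's worth, the same two knobs exist in plain ADO.NET, which makes the distinction easy to demonstrate outside the SSIS designer. This is a minimal sketch, not the SSIS setting itself: the connection string and query are placeholders, and it uses SqlClient rather than the Teradata provider from the story:

        using System.Data.SqlClient;

        // Connect Timeout: how long Open() waits to establish the session
        var conn = new SqlConnection(
            "Server=prod-dw;Database=warehouse;Integrated Security=SSPI;Connect Timeout=180");
        conn.Open();

        // CommandTimeout: how long execution waits for results to start coming
        // back - it defaults to 30 seconds, which matches the 30-second
        // disconnects described above
        var cmd = new SqlCommand("SELECT 1 /* long-running partition query */", conn);
        cmd.CommandTimeout = 600; // seconds; 0 means wait indefinitely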

    Read the article

  • Disabling Navigation Flicks in WPF

    - by Brian Genisio's House Of Bilz
    I am currently working on a multi-touch application using WPF.  One thing that has been irritating me during this development is an automatic navigation forward/back command that is bound to forward and backward flicks.  Many of my touch-based interactions were being thwarted by gestures picked up by WPF as navigation.  I just wanted to disable this behavior.  My programmatic back/forward calls are not affected by this change, which is nice.  Here is how I did it.  In my main window, I added the following command bindings:

        <NavigationWindow.CommandBindings>
          <CommandBinding Command="NavigationCommands.BrowseBack" Executed="DoNothing" />
          <CommandBinding Command="NavigationCommands.BrowseForward" Executed="DoNothing" />
        </NavigationWindow.CommandBindings>

    Then, the DoNothing method in the code-behind does nothing:

        private void DoNothing(object sender, ExecutedRoutedEventArgs e)
        {
        }

    There may be a better way to do this, but I haven't found one.
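    One alternative sketch, offered as an untested assumption rather than the author's method: hook CanExecute instead of Executed in the same CommandBinding elements (e.g. CanExecute="BlockNavigation") and report the commands as not executable, so the flick gesture finds nothing enabled to invoke:

        private void BlockNavigation(object sender, CanExecuteRoutedEventArgs e)
        {
            e.CanExecute = false; // the command appears disabled to the gesture
            e.Handled = true;     // stop WPF from querying any other handlers
        }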

    Read the article

  • Make Your Menu Item Highlighted

    - by Shaun
    When I was working on the TalentOn project (Promotion in MSDN Chinese) I was asked to implement a feature that makes the top menu items highlighted when the currently viewed page is in that section. I think this is a common scenario in web application development.

    Simple Example

    When thinking about a solution for highlighted menu items, the biggest problem is how to define the sections (menu items) and the pages that belong to them, rather than the highlighting itself. With the ASP.NET MVC framework we can use the controller - action infrastructure to achieve it. Each controller would normally have a related menu item on the master page. The menu item would be highlighted if any of the views under its controller are being shown. Some specific menu items would be highlighted only if a particular action was invoked - for example the home page, the about page, etc. The check rule can be specified on demand. For example, I can define that the LogOn and Register actions of the Account controller should make the Account menu item highlighted, while ChangePassword should make the Profile menu item highlighted.

    I'm going to use an HtmlHelper to render the highlight-able menu item. The key point is that I need to pass in the predicate that checks whether the current view belongs to this menu item, which determines whether this menu item should be highlighted or not. Hence I need a delegate as its parameter. The simplest code would be like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        namespace ShaunXu.Blogs.HighlighMenuItem
        {
            public static class HighlightMenuItemHelper
            {
                public static MvcHtmlString HighlightMenuItem(this HtmlHelper helper,
                    string text, string controllerName, string actionName, object routeData, object htmlAttributes,
                    string highlightText, object highlightHtmlAttributes,
                    Func<HtmlHelper, bool> highlightPredicate)
                {
                    var shouldHighlight = highlightPredicate.Invoke(helper);
                    if (shouldHighlight)
                    {
                        return helper.ActionLink(
                            string.IsNullOrWhiteSpace(highlightText) ? text : highlightText,
                            actionName, controllerName, routeData,
                            highlightHtmlAttributes == null ? htmlAttributes : highlightHtmlAttributes);
                    }
                    else
                    {
                        return helper.ActionLink(text, actionName, controllerName, routeData, htmlAttributes);
                    }
                }
            }
        }

    There are 3 groups of parameters. The first group is the same as the built-in ActionLink method parameters: the link text, controller name, action name, etc., so that I can render a valid link for the menu item. The second group covers the highlighted link text and Html attributes; I use them to render the highlighted menu item. The third group, which contains one parameter, is the predicate that tells me whether this menu item should be highlighted or not, based on the user's definition.

    I then changed the master page of the sample MVC application. I let the Home and About menus be highlighted only when the Index and About actions are invoked, and I added a new menu named Account which should be highlighted for all actions/views under the Account controller. So my master would be like this:
        <div id="menucontainer">
            <ul id="menu">
                <li><%: Html.HighlightMenuItem(
                        "Home", "Home", "Index", null, null,
                        "[Home]", null,
                        helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Home"
                               && helper.ViewContext.RouteData.Values["action"].ToString() == "Index") %></li>

                <li><%: Html.HighlightMenuItem(
                        "About", "Home", "About", null, null,
                        "[About]", null,
                        helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Home"
                               && helper.ViewContext.RouteData.Values["action"].ToString() == "About") %></li>

                <li><%: Html.HighlightMenuItem(
                        "Account", "Account", "LogOn", null, null,
                        "[Account]", null,
                        helper => helper.ViewContext.RouteData.Values["controller"].ToString() == "Account") %></li>
            </ul>
        </div>

    Note: You need to add an import directive for the namespace "ShaunXu.Blogs.HighlighMenuItem" to make the extension method created above available.

    So let's see the result. When the home page is shown, the Home menu is highlighted, since at this moment controller = Home and action = Index. If I click the About menu, you can see it turns highlighted, as now the action is About. And if I navigate to the register page, the Account menu is highlighted, since that is how it should be when any action under the Account controller is invoked.

    Fluent Language

    Up to now this is a full example of the highlighted menu item, but it's not perfect yet. Since the most common scenarios are "highlight when the action is invoked" and "highlight when any action under this controller is invoked", we can create 2 shortcut methods for them so that normally the developer has no need to specify the delegate. Another place we can improve is to make the method more user-friendly - or I should say developer-friendly. As you can see, when we want to add a highlighted menu item we need to specify 8 parameters, and we need to remember what they mean. In fact we can make the method "fluent", so that the developer gets hints from the Visual Studio IntelliSense while using it. Below is the full code for it:
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        namespace Ethos.Xrm.HR
        {
            #region Helper

            public static class HighlightActionMenuHelper
            {
                public static IHighlightActionMenuProviderAfterCreated HighlightActionMenu(this HtmlHelper helper)
                {
                    return new HighlightActionMenuProvider(helper);
                }
            }

            #endregion

            #region Interfaces

            public interface IHighlightActionMenuProviderAfterCreated
            {
                IHighlightActionMenuProviderAfterOn On(string actionName, string controllerName);
            }

            public interface IHighlightActionMenuProviderAfterOn
            {
                IHighlightActionMenuProviderAfterWith With(string text, object routeData, object htmlAttributes);
            }

            public interface IHighlightActionMenuProviderAfterWith
            {
                IHighlightActionMenuProviderAfterHighlightWhen HighlightWhen(Func<HtmlHelper, bool> predicate);
                IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerMatch();
                IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerAndActionMatch();
            }

            public interface IHighlightActionMenuProviderAfterHighlightWhen
            {
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes, string highlightText);
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes);
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass, string highlightText);
                IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass);
            }

            public interface IHighlightActionMenuProviderAfterApplyHighlightStyle
            {
                MvcHtmlString ToActionLink();
            }

            #endregion

            public class HighlightActionMenuProvider :
                IHighlightActionMenuProviderAfterCreated,
                IHighlightActionMenuProviderAfterOn, IHighlightActionMenuProviderAfterWith,
                IHighlightActionMenuProviderAfterHighlightWhen, IHighlightActionMenuProviderAfterApplyHighlightStyle
            {
                private HtmlHelper _helper;

                private string _controllerName;
                private string _actionName;
                private string _text;
                private object _routeData;
                private object _htmlAttributes;

                private Func<HtmlHelper, bool> _highlightPredicate;

                private string _highlightText;
                private object _highlightHtmlAttributes;

                public HighlightActionMenuProvider(HtmlHelper helper)
                {
                    _helper = helper;
                }

                public IHighlightActionMenuProviderAfterOn On(string actionName, string controllerName)
                {
                    _actionName = actionName;
                    _controllerName = controllerName;
                    return this;
                }

                public IHighlightActionMenuProviderAfterWith With(string text, object routeData, object htmlAttributes)
                {
                    _text = text;
                    _routeData = routeData;
                    _htmlAttributes = htmlAttributes;
                    return this;
                }

                public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhen(Func<HtmlHelper, bool> predicate)
                {
                    _highlightPredicate = predicate;
                    return this;
                }

                public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerMatch()
                {
                    return HighlightWhen((helper) =>
                    {
                        return helper.ViewContext.RouteData.Values["controller"].ToString().ToLower() == _controllerName.ToLower();
                    });
                }

                public IHighlightActionMenuProviderAfterHighlightWhen HighlightWhenControllerAndActionMatch()
                {
                    return HighlightWhen((helper) =>
                    {
                        return helper.ViewContext.RouteData.Values["controller"].ToString().ToLower() == _controllerName.ToLower() &&
                               helper.ViewContext.RouteData.Values["action"].ToString().ToLower() == _actionName.ToLower();
                    });
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes, string highlightText)
                {
                    _highlightText = highlightText;
                    _highlightHtmlAttributes = highlightHtmlAttributes;
                    return this;
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(object highlightHtmlAttributes)
                {
                    return ApplyHighlighStyle(highlightHtmlAttributes, _text);
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass, string highlightText)
                {
                    return ApplyHighlighStyle(new { @class = cssClass }, highlightText);
                }

                public IHighlightActionMenuProviderAfterApplyHighlightStyle ApplyHighlighStyle(string cssClass)
                {
                    return ApplyHighlighStyle(new { @class = cssClass }, _text);
                }

                public MvcHtmlString ToActionLink()
                {
                    if (_highlightPredicate.Invoke(_helper))
                    {
                        // should be highlighted
                        return _helper.ActionLink(_highlightText, _actionName, _controllerName, _routeData, _highlightHtmlAttributes);
                    }
                    else
                    {
                        // should not be highlighted
                        return _helper.ActionLink(_text, _actionName, _controllerName, _routeData, _htmlAttributes);
                    }
                }
            }
        }

    So in the master page, when I need a highlighted menu item I can "tell" the helper how it should behave, just like this:

        <li>
            <%: Html.HighlightActionMenu()
                    .On("Index", "Home")
                    .With(SiteMasterStrings.Home, null, null)
                    .HighlightWhenControllerMatch()
                    .ApplyHighlighStyle(new { style = "background:url(../../Content/Images/topmenu_bg.gif) repeat-x;text-decoration:none;color:#feffff;" })
                    .ToActionLink() %>
        </li>

    While I'm typing the code, IntelliSense advises me that I need a highlighted action menu, on the Index action of the Home controller, with "Home" as its link text and no additional route data or Html attributes; it should be highlighted when the controller is "Home"; if it's highlighted, the style should be like this; and finally, render it. This is something we call a "Fluent Language". If you have been using Moq, you will see it's very developer-friendly, self-documenting and easy to read.

    Summary

    In this post I demonstrated how to implement a highlighted menu item in ASP.NET MVC by using its controller - action infrastructure. We can see that ASP.NET MVC helps us to organize our web application better. I also said a little bit more about the "Fluent Language" style and showed how it can make our code better and easier to use.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • New SQL Azure Development Accelerator Core promotional offer announced

    - by Eric Nelson
    This is (almost) a straight copy and paste, but represents an important announcement worthy of a little more “exposure” :-)

    Starting August 1, 2010, we will release a new SQL Azure Development Accelerator Core promotional offer.  This new offer will give you the flexibility to purchase commitment quantities of SQL Azure Business Edition databases independent of other Windows Azure platform services at a deeply discounted monthly price.  The offer is valid only for a six month term.  You may purchase in 10 GB increments the amount of our Business Edition relational database that you require (each Business Edition database is capable of storing up to 50 GB).  The offer price will be $74.95 per 10 GB per month.  This promotional offer represents 25% off of our normal consumption rates.  Monthly Business Edition relational database usage exceeding the purchased commitment amount and usage for other Windows Azure platform services for this offer will be charged at our normal consumption rates.  Please click here for full details of our new SQL Azure Development Accelerator Core offer.

    Related Links:
    - Details of 5GB and 50GB databases have been released
    - http://ukazure.ning.com UK community site
    - Getting started with the Windows Azure Platform

    Read the article

  • Batch file to Delete Old Virtual Directories.

    - by Michael Freidgeim
    On some servers we have many old virtual directories created for previous versions of our application. The IIS user interface allows you to delete only one at a time. Fortunately we can use IIS scripts as described in "How to manage Web sites and Web virtual directories by using command-line scripts in IIS 6.0". I've created a batch file, DeleteOldVDirs.cmd:

        rem http://support.microsoft.com/kb/816568
        rem syntax: iisvdir /delete WebSite [/Virtual Path]Name [/s Computer [/u [Domain\]User /p Password]]

        rem list all directories, then run the batch of deletes
        iisvdir /query "Default Web Site"
        echo "Enter Ctrl-C if you want to stop deleting"
        pause
        iisvdir /delete "Default Web Site/VDirName1"
        iisvdir /delete "Default Web Site/VDirName2"
        ...

    If the name of the web site or virtual directory contains spaces (e.g. "Default Web Site"), don't forget to use double quotes. Note that the batch doesn't delete the physical directories from the file system. You need to delete them using Windows Explorer, but it does support multiple selection!

    Read the article

  • Creating the Business Card Request InfoPath Form

    - by JKenderdine
    Business Card Request Demo Files

    Back in January I spoke at SharePoint Saturday Virginia Beach about InfoPath forms and Web Part deployment.  Below is some of the information and details regarding the form I created for the session.  There are many blogs and Microsoft articles on how to create a basic form, so I won't repeat that information here.  This blog will just explain a few of the options I chose when creating the solutions for SPS Virginia Beach.  The above link contains the zipped package files of the two InfoPath forms (no-code solution and coded solution), the list template for the Location list I used, and the PowerPoint deck.  If you plan to use these templates, you will need to update the forms to work within your own environment (change data connections, code links, etc.).  Also, you must have the SharePoint Enterprise version, with InfoPath Services configured, in order to use the browser-enabled forms.

    So what are the requirements for this template?

    Business Card Request Form Template Design Plan:
    - Gather user information and requirements for the card
    - Pull in as much user information as possible; use data from the user profile web services as a data source
    - Show and hide fields as necessary for requirements
    - Create multiple views - one for those submitting the form and another view for the executive assistants placing the orders
    - Browser based form integrated into a SharePoint team site
    - Submitted directly to a form library

    The base form was created using the blank template.  The table and rows were added using the Insert tab and selecting Custom Table.  The use of tables is a great way to make sure everything lines up.  You do have to split the tables from time to time.  If you've ever split cells and then tried to re-align one, only to find that you impacted the others, you know why.  Here is what the base form looks like in InfoPath.

    Show and hide fields as necessary for requirements

    You will notice I also used Sections within the form.  These show or hide depending on options selected or whether or not fields are blank.  This is a great way to prevent your users from feeling overwhelmed with a large form (though that wouldn't apply to this one).  Although not used in this form, you can also use various views with a tab interface.  I'll show that in another post.

    Gather user information and requirements for the card; pull in as much user information as possible

    Utilizing rules, you can load data when the form initiates (Data tab, Form Load).  Anything you can automate is always appreciated by the user, as that is data they don't have to enter - for example, loading their user id or other user information on load.

    Always keep in mind, though, how much data you load and the method for loading that data (through rules, code, etc.).  They have an impact on form performance.  The form will take longer to load if you bring in a ton of data from external sources.  Laura Rogers has a great blog post on using the User Information List to load user information.  If the user has logged into SharePoint, then this can be used quite effectively and without a huge performance hit.  What I have found is that using the User Profile service via code-behind or the web service "GetUserProfileByName" (as above) can take more time to load the user data.  Just food for thought.

    You must add the data connection in order for the above rules to work.
    You can connect to the data connection through the Data tab, Data Connections, or select the Manage Data Connections link which appears under the main data source.  The data connections can be SharePoint lists or libraries, SQL data tables, XML files, etc.

    Create multiple views - one for those submitting the form and another view for the executive assistants placing the orders

    You can also create multiple views for the users to enhance their experience.  Once they've entered the information and submitted their request for business cards, they don't really need to see the main data input screen any more; they just need to view what they entered.  From the Page Design tab, select New View and give the view a name.  To review the existing views, click the down arrow under View.  The ReviewView shows just what the user needs and nothing more.

    Once you have everything configured, the form should be tested within a test SharePoint environment before final deployment to production.  This validates that you don't have any rules or code that could impact the server negatively.

    Submitted directly to a form library

    You will need to know the form library that you will be submitting to when publishing the template.  Configure the Submit data connection to connect to this library.  There is already one configured in the sample, but it will need to be updated to your environment prior to publishing.  The Design template is different from the Published template.  While both have the .XSN extension, the published template contains all the "package" information for the form.  The published form is what is loaded into Central Admin, not the design template.

    Browser based form integrated into a SharePoint team site

    In Central Admin, under General Settings, select Manage Form Templates.  Upload the published form template and activate it to a site collection.  Now it is available as a content type to select in the form library.  Some documentation on publishing form templates: Technet - Manage administrator approved form templates.

    And that's all our base requirements.  Hope this helps to give a good start.

    Read the article

  • Saint Louis Days of .NET 2012

    - by James Michael Hare
    Hey all, just a quick note to let you know I'll be one of the speakers at the St. Louis Days of .NET this year. I'll be giving a revamped version of my Little Wonders talk (I'm going to add some new ones to keep it fresh) -- and hopefully other presentations as well; the session selection process is ongoing. St. Louis Days of .NET is a wonderful conference in the midwest and a bargain to boot (only $175 if you register before July 1st). Hope to see you there! For more information, visit: http://www.stlouisdayofdotnet.com/2012

    Read the article

  • Blog comment spammers &hellip; help!

    - by Steve Clements
    Hey, I need some advice/help. I've been writing on this blog for a few years now; I don't blog a great deal in comparison to many of my peers, and my blog doesn't get a massive number of hits, BUT I seem to get more than my fair share of comment spammers! I have image verification on the comments and I have put an Akismet API key into the Geekswithblogs settings, but I still get a bunch of spam comments. They could almost pass for real, until you see the link going off to some dating site or casino crap. What does everyone do? Delete them every time? Moderate comments? I would rather not moderate comments if possible, but is that the only way to stop the crap? Thanks. Technorati Tags: spammers,comments,help

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Send Port

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter, using the ODBC adapter to generate schemas, and later the creation of a receive port using the ODBC adapter. But what about creating a send port? Add a new Send Port, select the ODBC Adapter, and click Configure. Clicking Connection String will open the DataSource window (an example of the resulting connection string is shown below); choose one of your system data sources and press OK. This will update the Transport properties - select OK. All that remains is to set the standard send port properties, and your ODBC send port is ready.
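    For reference, the connection string generated for a system DSN typically follows the standard ODBC form shown below. The DSN name and credentials here are illustrative assumptions, not values from the original walkthrough:

        DSN=MyBizTalkSource;UID=btsServiceAccount;PWD=secret;

    The adapter's DataSource window builds this for you when you pick a system data source, so you rarely need to type it by hand.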

    Read the article

  • Reflecting on 2010 and Looking into 2011

    - by Sam Abraham
    In early 2010, I blogged and shared my excitement as I was about to embark on a new journey, relocating to South Florida. As I settled down and adjusted to my new life, I was presented with an opportunity to get actively involved and volunteer in the local Florida .Net and Project Management communities. I have since devoted a significant portion of my time to community initiatives: coordinating the West Palm Beach .Net User Group, volunteering as a member of the INETA Speaker's Bureau, and traveling to attend and speak at .Net code camps and user groups throughout the states of Florida and New York.

    I have also taken on various volunteer roles at the South Florida Chapter of the Project Management Institute, starting as a core team member on the chapter's mentoring initiative and ending the year as Project Manager of the chapter's mentoring program and as Director of Electronic Communications on the chapter's IT team. I am also serving a one-year term (2010-2011) as secretary and founding board member of Florida's first official chapter of the International Association for Software Architects (IASA). A big thank you is due to those who afforded me the opportunity and privilege to take part in these initiatives, and to those who provided guidance and encouragement when I needed them the most.

    Looking ahead into 2011, I hope to continue my community involvement and volunteer activities. I will start by dedicating the first five weekends of the new year to teaching a free, comprehensive Microsoft PowerPoint class at church. My goal will be to start from scratch and slowly cover the various PowerPoint features that can be leveraged to create captivating presentations. Starting in February, I will be resuming my user group and code camp speaking engagements at our South Florida .Net Code Camp and the West Palm Beach .Net User Group. I look forward to continuing to meet, chat, and share with our technical community members, and to another active year in community service.

    All the best,
    --Sam Abraham

    Read the article

  • MVVM Properties with Resharper

    - by George Evjen
    I read this early this morning, and it is simple, since we have all probably put together a code snippet at some point. With the projects that we do at ArchitectNow, we write a lot of new custom views and view models, which results in having to write repetitive property code. We changed the context of the code a bit to suit our infrastructure, but the idea is to have these properties created quickly (a sketch of the kind of repetitive property we mean follows below). Thanks to Sparky Dasrath for reminding us how easy this is to do: sdasrath.blogspot.com/2011/02/20110221-resharper-c-snippet-for-mvvm.html
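    For context, this is a minimal sketch of the repetitive view-model property pattern that such a snippet expands for you. The ViewModelBase class and its OnPropertyChanged helper are illustrative assumptions (most MVVM infrastructures provide an equivalent), not code from the linked post:

        using System.ComponentModel;

        // Hypothetical base class; most MVVM frameworks ship an equivalent.
        public abstract class ViewModelBase : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                {
                    handler(this, new PropertyChangedEventArgs(propertyName));
                }
            }
        }

        public class CustomerViewModel : ViewModelBase
        {
            private string _customerName;

            // The boilerplate a snippet can generate in a few keystrokes:
            // backing field, getter, and a setter that raises PropertyChanged.
            public string CustomerName
            {
                get { return _customerName; }
                set
                {
                    if (_customerName == value) return;
                    _customerName = value;
                    OnPropertyChanged("CustomerName");
                }
            }
        }

    Multiply that by a dozen properties per view model and the appeal of generating them quickly becomes obvious.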

    Read the article

  • Retrieving Custom Attributes Using Reflection

    - by Scott Dorman
    The .NET Framework allows you to easily add metadata to your classes by using attributes. These attributes can be ones that the .NET Framework already provides, of which there are over 300, or you can create your own. Using reflection, the ways to retrieve the custom attributes of a type are:

    System.Reflection.MemberInfo

        public abstract object[] GetCustomAttributes(bool inherit);
        public abstract object[] GetCustomAttributes(Type attributeType, bool inherit);
        public abstract bool IsDefined(Type attributeType, bool inherit);

    System.Attribute

        public static Attribute[] GetCustomAttributes(MemberInfo member, bool inherit);
        public static bool IsDefined(MemberInfo element, Type attributeType, bool inherit);

    Take the following simple class hierarchy:

        public abstract class BaseClass
        {
            private bool result;

            [DefaultValue(false)]
            public virtual bool SimpleProperty
            {
                get { return this.result; }
                set { this.result = value; }
            }
        }

        public class DerivedClass : BaseClass
        {
            public override bool SimpleProperty
            {
                get { return true; }
                set { base.SimpleProperty = value; }
            }
        }

    Given a PropertyInfo object (which is derived from MemberInfo, and represents a property in reflection), you might expect that these methods would return the same result. Unfortunately, that isn't the case.

    The MemberInfo methods strictly reflect the metadata definitions, ignoring the inherit parameter and not searching the inheritance chain when used with a PropertyInfo, EventInfo, or ParameterInfo object. They also return all custom attribute instances, including those that don't inherit from System.Attribute.

    The Attribute methods are closer to the implied behavior of the language (and probably closer to what you would naturally expect). They do respect the inherit parameter for PropertyInfo, EventInfo, and ParameterInfo objects and search the implied inheritance chain defined by the associated methods (in this case, the property accessors). These methods also only return custom attributes that inherit from System.Attribute.

    This is a fairly subtle difference that can produce very unexpected results if you aren't careful. For example, to retrieve the custom attributes defined on SimpleProperty, you could use code similar to this:

        PropertyInfo info = typeof(DerivedClass).GetProperty("SimpleProperty");
        var attributeList1 = info.GetCustomAttributes(typeof(DefaultValueAttribute), true);
        var attributeList2 = Attribute.GetCustomAttributes(info, typeof(DefaultValueAttribute), true);

    The attributeList1 array will be empty, while the attributeList2 array will contain the attribute instance, as expected (a runnable end-to-end sketch follows below).

    Technorati Tags: Reflection,Custom Attributes,PropertyInfo
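    To see the difference end to end, here is a small self-contained console sketch assembled from the snippets above. The Program class and the expected counts in the comments are my own illustration of the behavior the post describes:

        using System;
        using System.ComponentModel;
        using System.Reflection;

        public abstract class BaseClass
        {
            private bool result;

            [DefaultValue(false)]
            public virtual bool SimpleProperty
            {
                get { return this.result; }
                set { this.result = value; }
            }
        }

        public class DerivedClass : BaseClass
        {
            public override bool SimpleProperty
            {
                get { return true; }
                set { base.SimpleProperty = value; }
            }
        }

        public static class Program
        {
            public static void Main()
            {
                PropertyInfo info = typeof(DerivedClass).GetProperty("SimpleProperty");

                // MemberInfo method: ignores inherit for properties, so the
                // attribute declared on the base property is not found.
                object[] viaMemberInfo =
                    info.GetCustomAttributes(typeof(DefaultValueAttribute), true);

                // Attribute helper: walks the chain via the property accessors,
                // so the inherited attribute is found.
                Attribute[] viaAttribute =
                    Attribute.GetCustomAttributes(info, typeof(DefaultValueAttribute), true);

                Console.WriteLine(viaMemberInfo.Length); // prints 0
                Console.WriteLine(viaAttribute.Length);  // prints 1
            }
        }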

    Read the article

  • If not now, then when?

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/10/25/if-not-now-then-when.aspx

    The time has been flying by this year. It seems like only yesterday that I mentioned the gorillagator, a simple construct of confusion to try to draw attention to my message. In reality, that message was sent over a month ago. During that time, the hours slipped to days and days to weeks. Many exciting things have happened to me; I'm sure many exciting things have happened to you. I'm also sure that many terrifying things have happened to children and their families.

    62 children enter treatment at a Children's Miracle Network Hospital every minute. That's nearly 60,000 children since I sent the last email. To put that number in perspective, that is more than the population of Greenland. If we expand that to the past year, there have been nearly 550,000 children treated. That is almost the population of Huntsville, Decatur, and all their suburbs combined.

    Over the past 4 years, I have raised a little more than $3,000 for Children's Hospital of Alabama. As a result, I received a call from the organizers of Extra Life thanking me for my dedicated work and informing me that I was the top supporter for Children's Hospital of Alabama ... with my measly three grand. We can do much better than that.

    It may sound like I'm trying to have fun by playing games for 24 hours. It is more than that. It is me using my time and body as a catalyst. It is me putting my passion to work for a cause. It is me turning my love into something tangible. I have been campaigning and fighting to give these children a chance for years. I have been asking you to help me support these children and families. I've been putting in countless hours of talking to people, impassioned emails, and carefully constructed tweets. I have been fighting with cutting-edge, and sometimes expensive, technology to try to provide live streams of my marathons. I yearly put my body through 24 (and, this year, 25) hours of no sleep. I do this to represent the countless hours these families sit awake at their children's side.

    All I ask is a few minutes on a website and a few dollars. Those few minutes and few dollars go a long way toward helping people that are experiencing circumstances that only occur in our nightmares. I also ask that you take one extra step and forward this plea to those that you know. I can only reach a small fraction of a percentage of the people that may be able to help. Together, we can reach the world.

    I raise money for Children's Hospital of Alabama. As this message branches out, people may wish to support a hospital closer to their area. I have included a link to the list of people that have dedicated their time and have received no donations. Find someone on the list supporting your local hospital and give them a donation. Let them know that their time and effort are appreciated.

    Together, we can do something great. Together, we can make a difference. Together, we all stand tall. Thank you.

    You can get more information at http://www.extra-life.org and http://childrensmiraclenetworkhospitals.org/
    My donation page is http://www.extra-life.org/participant/cgardner
    The list of participants without donations is http://www.extra-life.org/index.cfm?fuseaction=donorDrive.eventParticipantList&page=629&eventID=512

    Read the article

  • Notes from a short presentation on NodeJs

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/05/30/notes-from-a-short-presentation-on-nodejs.aspx

    I volunteered to give a short 30-minute presentation at a work lunch and learn on NodeJs. With my limited experience, I see Node as a great tool for build process improvement, scaffolding with Yeoman, and running tests with Karma. I haven't looked into using it as a full server or development stack - I guess I'm too stuck on IIS and Visual Studio :-). Here are my notes; they aren't very well formatted, but I wanted to share them anyway.

    What is it?

    "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices."

    Why should you be interested?

    - It's another popular tool that can help you get the job done
    - You can use the command prompt!
    - It can be run at build or release time to automate tasks

    What are some uses?

    - https://www.npmjs.org/ - NuGet for Node packages
    - http://bower.io/ - NuGet for UI JavaScript libraries (jQuery, Bootstrap, Angular, etc.)
    - http://yeoman.io/ - "Our workflow is comprised of three tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool) and bower (for package management)." Yeoman asks which components you want. An alternative: http://joakimbeng.eu01.aws.af.cm/slush-replacing-yeoman-with-gulp/
    - https://www.npmjs.org/package/generator-cg-angular - PhantomJS, LESS, etc.

    Git is needed for bower (http://git-scm.com/) - run the installer in Windows before you can use bower. Select "Run Git from the Windows Command Prompt" in the installer; it requires a reboot. See http://stackoverflow.com/questions/20069297/bower-git-not-in-the-path-error

        npm install -g yo
        npm install -g generator-cg-angular
        mkdir myapp
        cd myapp
        yo cg-angular
        npm install -g bower
        npm install -g grunt-cli
        yo bower
        grunt serve
        grunt test
        grunt build

    There are many generators (generator-angular is another one), though I like the NuGet HotTowel-Angular from John Papa myself. IIS Node was needed for Express (prompted from WebMatrix).

    More uses:

    - A Karma bat file to start up Karma - see below
    - Image compression - https://www.npmjs.org/search?q=optimize+images and https://github.com/heldr/node-smushit - do it from the command line
    - LESS compiling
    - JS and CSS combining and minification at build with Gulp for RequireJS apps
    - A quick, lightweight HTTP server - "Express"
    - A build pipeline with Grunt or Gulp: http://www.johnpapa.net/gulp-and-grunt-at-anglebrackets/ - Gulp is newer and improved over Grunt, and supposed to be easier to use, but Grunt is more established. https://github.com/johnpapa/ng-demos/tree/master/grunt-gulp
    - https://github.com/assetgraph/assetgraph-builder - does a lot of the minimizing, combining, image optimization, etc. using Node. Looks interesting.

    More links:

    - http://nodejs.org
    - http://nodeschool.io/
    - http://sub.watchmecode.net/getting-started-with-nodejs-installing-and-writing-your-first-code/
    - https://stormpath.com/blog/build-a-killer-node-dot-js-client-for-your-rest-plus-json-api/
    - https://codio.com/
    - http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx

    Run unit tests with Karma from MSBuild - karma-start.bat:

        @echo off
        cd %~dp0\..
        REM 604800 is to make sure we only update once every 7 days
        call npm install --cache-min 604800 -g grunt-cli
        call npm install --cache-min 604800
        call npm install --cache-min 604800 -g karma-cli
        karma start UnitTests\karma.conf.js
        REM karma start UnitTests\karma.conf.js --single-run
        REM see karma-start.bat and karma.conf.js
        REM jsHint comes from NuGet
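    Since the notes mention running Karma from MSBuild, one way to wire that up is an Exec task that calls the batch file after the build. This is a hedged sketch, not from the original notes - the target name, the AfterTargets hook, and the Scripts folder location are all illustrative assumptions:

        <!-- Hypothetical fragment for a .csproj or .targets file -->
        <Target Name="RunKarmaTests" AfterTargets="Build">
          <Exec Command="karma-start.bat"
                WorkingDirectory="$(MSBuildProjectDirectory)\Scripts" />
        </Target>

    With the --single-run variant enabled in the bat file (see the commented line above), a test failure makes Karma exit non-zero, which in turn fails the build.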

    Read the article

< Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >