Search Results

Search found 11208 results on 449 pages for 'developer conference'.


  • MVC Portable Area Modules *Without* MasterPages

    - by Steve Michelotti
    Portable Areas from MvcContrib provide a great way to build modular and composite applications on top of MVC. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. I’ve blogged about Portable Areas in the past including this post here which talks about embedding resources and you can read more of an intro to Portable Areas here. As great as Portable Areas are, the question that seems to come up the most is: what about MasterPages? MasterPages seems to be the one thing that doesn’t work elegantly with portable areas because you specify the MasterPage in the @Page directive and it won’t use the same mechanism of the view engine so you can’t just embed them as resources. This means that you end up referencing a MasterPage that exists in the host application but not in your portable area. If you name the ContentPlaceHolderId’s correctly, it will work – but it all seems a little fragile. Ultimately, what I want is to be able to build a portable area as a module which has no knowledge of the host application. I want to be able to invoke the module by a full route on the user’s browser and it gets invoked and “automatically appears” inside the application’s visual chrome just like a MasterPage. So how could we accomplish this with portable areas? With this question in mind, I looked around at what other people are doing to address similar problems. Specifically, I immediately looked at how the Orchard team is handling this and I found it very compelling. Basically Orchard has its own custom layout/theme framework (utilizing a custom view engine) that allows you to build your module without any regard to the host. You simply decorate your controller with the [Themed] attribute and it will render with the outer chrome around it: 1: [Themed] 2: public class HomeController : Controller Here is the slide from the Orchard talk at this year MIX conference which shows how it conceptually works:   It’s pretty cool stuff.  So I figure, it must not be too difficult to incorporate this into the portable areas view engine as an optional piece of functionality. In fact, I’ll even simplify it a little – rather than have 1) Document.aspx, 2) Layout.ascx, and 3) <view>.ascx (as shown in the picture above); I’ll just have the outer page be “Chrome.aspx” and then the specific view in question. The Chrome.aspx not only takes the place of the MasterPage, but now since we’re no longer constrained by the MasterPage infrastructure, we have the choice of the Chrome.aspx living in the host or inside the portable areas as another embedded resource! Disclaimer: credit where credit is due – much of the code from this post is me re-purposing the Orchard code to suit my needs. To avoid confusion with Orchard, I’m going to refer to my implementation (which will be based on theirs) as a Chrome rather than a Theme. 
    The first step I'll take is to create a ChromedAttribute which adds a flag to the current HttpContext to indicate that the controller is designated as Chromed, like this: 1: [Chromed] 2: public class HomeController : Controller The attribute itself is an MVC ActionFilter attribute: 1: public class ChromedAttribute : ActionFilterAttribute 2: { 3: public override void OnActionExecuting(ActionExecutingContext filterContext) 4: { 5: var chromedAttribute = GetChromedAttribute(filterContext.ActionDescriptor); 6: if (chromedAttribute != null) 7: { 8: filterContext.HttpContext.Items[typeof(ChromedAttribute)] = null; 9: } 10: } 11:   12: public static bool IsApplied(RequestContext context) 13: { 14: return context.HttpContext.Items.Contains(typeof(ChromedAttribute)); 15: } 16:   17: private static ChromedAttribute GetChromedAttribute(ActionDescriptor descriptor) 18: { 19: return descriptor.GetCustomAttributes(typeof(ChromedAttribute), true) 20: .Concat(descriptor.ControllerDescriptor.GetCustomAttributes(typeof(ChromedAttribute), true)) 21: .OfType<ChromedAttribute>() 22: .FirstOrDefault(); 23: } 24: } With that in place, we only have to override the FindView() method of the custom view engine with these 6 lines of code: 1: public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache) 2: { 3: if (ChromedAttribute.IsApplied(controllerContext.RequestContext)) 4: { 5: var bodyView = ViewEngines.Engines.FindPartialView(controllerContext, viewName); 6: var documentView = ViewEngines.Engines.FindPartialView(controllerContext, "Chrome"); 7: var chromeView = new ChromeView(bodyView, documentView); 8: return new ViewEngineResult(chromeView, this); 9: } 10:   11: // Just execute normally without applying Chromed View Engine 12: return base.FindView(controllerContext, viewName, masterName, useCache); 13: } If the view engine finds the [Chromed] attribute, it will invoke its own process – otherwise, it'll just defer to the normal Web Forms view engine (with MasterPages). The ChromeView's primary job is to independently set the BodyContent on the view context so that it can be rendered at the appropriate place: 1: public class ChromeView : IView 2: { 3: private ViewEngineResult bodyView; 4: private ViewEngineResult documentView; 5:   6: public ChromeView(ViewEngineResult bodyView, ViewEngineResult documentView) 7: { 8: this.bodyView = bodyView; 9: this.documentView = documentView; 10: } 11:   12: public void Render(ViewContext viewContext, System.IO.TextWriter writer) 13: { 14: ChromeViewContext chromeViewContext = ChromeViewContext.From(viewContext); 15:   16: // First render the Body view to the BodyContent 17: using (var bodyViewWriter = new StringWriter()) 18: { 19: var bodyViewContext = new ViewContext(viewContext, bodyView.View, viewContext.ViewData, viewContext.TempData, bodyViewWriter); 20: this.bodyView.View.Render(bodyViewContext, bodyViewWriter); 21: chromeViewContext.BodyContent = bodyViewWriter.ToString(); 22: } 23: // Now render the Document view 24: this.documentView.View.Render(viewContext, writer); 25: } 26: } The ChromeViewContext (code excluded here) mainly just has a string property for the "BodyContent" – but it also makes sure to put itself in the HttpContext so it's available.
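    Since the ChromeViewContext class itself isn't shown, here is a minimal sketch of what it could look like - treat this as an assumption reconstructed purely from how the class is used above (a BodyContent property, a static From() lookup, and caching itself in HttpContext.Items), not the exact implementation:

        using System.Web.Mvc;

        // Assumed sketch, not the original implementation: a BodyContent string plus a From()
        // helper that caches the instance in HttpContext.Items so ChromeView and the
        // RenderBody() helper both resolve the same object for the current request.
        public class ChromeViewContext
        {
            // Rendered output of the inner (body) view, written out later by Html.RenderBody().
            public string BodyContent { get; set; }

            public static ChromeViewContext From(ViewContext viewContext)
            {
                var items = viewContext.HttpContext.Items;
                var current = items[typeof(ChromeViewContext)] as ChromeViewContext;
                if (current == null)
                {
                    current = new ChromeViewContext();
                    items[typeof(ChromeViewContext)] = current;
                }
                return current;
            }
        }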
    Finally, we created a little extension method so the module's view can be rendered in the appropriate place: 1: public static void RenderBody(this HtmlHelper htmlHelper) 2: { 3: ChromeViewContext chromeViewContext = ChromeViewContext.From(htmlHelper.ViewContext); 4: htmlHelper.ViewContext.Writer.Write(chromeViewContext.BodyContent); 5: } At this point, the other thing left is to decide how we want to implement the Chrome.aspx page. One approach is to copy/paste the HTML from the typical Site.Master and change the main content placeholder to use the HTML helper above – this way, there are no MasterPages anywhere. Alternatively, we could even have Chrome.aspx utilize the MasterPage if we wanted (e.g., in the case where some pages are Chromed and some pages want to use the traditional MasterPage): 1: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %> 2: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> 3: <% Html.RenderBody(); %> 4: </asp:Content> At this point, it's all academic. I can create a controller like this: 1: [Chromed] 2: public class WidgetController : Controller 3: { 4: public ActionResult Index() 5: { 6: return View(); 7: } 8: } Then I'll just create Index.ascx (a partial view) and put in the text "Inside my widget". Now when I run the app, I can request the full route (notice the controller name of "widget" in the address bar below) and the HTML from my Index.ascx will just appear where it is supposed to.   This means no more warnings for missing MasterPages and no more need for your module to have knowledge of the host's MasterPage placeholders. You have the option of using the Chrome.aspx in the host or providing your own by embedding it as an embedded resource itself. I'm curious to know what people think of this approach. The code above was done with my own local copy of MvcContrib so it's not currently something you can download. At this point, these are just my initial thoughts – just incorporating some ideas from Orchard into non-Orchard apps to enable building modular/composite apps more easily. Additionally, on the flip side, I still believe that Portable Areas have potential as the module packaging story for Orchard itself.   What do you think?

    Read the article

  • PASS: Election Changes for 2011

    - by Bill Graziano
    Last year after the election, the PASS Board created an Election Review Committee.  This group was charged with reviewing our election procedures and making suggestions to improve the process.  You can read about the formation of the group and review some of the intermediate work on the site – especially in the forums. I was one of the members of the group along with Joe Webb (Chair), Lori Edwards, Brian Kelley, Wendy Pastrick, Andy Warren and Allen White.  This group worked from October to April on our election process.  Along the way we: Interviewed interested parties including former NomCom members, Board candidates and anyone else that came forward. Held a session at the Summit to allow interested parties to discuss the issues Had numerous conference calls and worked through the various topics I can’t thank these people enough for the work they did.  They invested a tremendous number of hours thinking, talking and writing about our elections.  I’m proud to say I was a member of this group and thoroughly enjoyed working with everyone (even if I did finally get tired of all the calls.) The ERC delivered their recommendations to the PASS Board prior to our May Board meeting.  We reviewed those and made a few modifications.  I took their recommendations and rewrote them as procedures while incorporating those changes.  Their original recommendations as well as our final document are posted at the ERC documents page.  Please take a second and read them BEFORE we start the elections.  If you have any questions please post them in the forums on the ERC site. (My final document includes a change log at the end that I decided to leave in.  If you want to know which areas to pay special attention to that’s a good start.) Many of those recommendations were already posted in the forums or in the blogs of individual ERC members.  Hopefully nothing in the ERC document is too surprising. In this post I’m going to walk through some of the key changes and talk about what I remember from both ERC and Board discussions.  I’ll pay a little extra attention to things the Board changed from the ERC.  I’d also encourage any of the Board or ERC members to blog their thoughts on this. The Nominating Committee will continue to exist.  Personally, I was curious to see what the non-Board ERC members would think about the NomCom.  There was broad agreement that a group to vet candidates had value to the organization. The NomCom will be composed of five members.  Two will be Board members and three will be from the membership at large.  The only requirement for the three community members is that you’ve volunteered in some way (and volunteering is defined very broadly).  We expect potential at-large NomCom members to participate in a forum on the PASS site to answer questions from the other PASS members. We’re going to hold an election to determine the three community members.  It will be closer to voting for Summit sessions than voting for Board members.  That means there won’t be multiple dedicated emails.  If you’re at all paying attention it will be easy to participate.  Personally I wanted it easy for those that cared to participate but not overwhelm those that didn’t care.  I think this strikes a good balance. There’s also a clause that in order to be considered a winner in this NomCom election, you must receive 10 votes.  This is something I suggested.  I have no idea how popular the NomCom election is going to be.  
    I just wanted a fallback in case hardly anyone participated and some random person got in with only one or two votes.  Any open slots will be filled by the NomCom chair (usually the PASS Immediate Past President).  My assumption is that they would probably take the next highest vote getters unless they were throwing flames in the forums or clearly unqualified.  As a final check, the Board still approves the final NomCom. The NomCom is going to rank candidates instead of rating them.  This has interesting implications.  This was championed by another ERC member and I'm hoping they write something about it.  This will really force the NomCom to make decisions between candidates.  You can't just rate everyone a 3 and be done with it.  It may also make candidates appear further apart than they actually are.  I'm looking forward to talking with the NomCom after this election and getting their feedback on this. The PASS Board added an option to remove a candidate with a unanimous vote of the NomCom.  This was primarily put in place to handle people that lied on their application, had a criminal background, or were in some other unusual situation that we discovered. We list an explicit goal of three candidates per open slot. We also wanted an easy way to find the NomCom candidate rankings from the ballot.  Hopefully this will satisfy those that want a broad candidate pool and those that want the NomCom to identify the most qualified candidates. The primary spokesperson for the NomCom is the committee chair.  After the issues around the election last year we didn't have a good communication plan in place.  We should have, and that was a failure on the part of the Board.  If there is criticism of the election this year I hope that falls squarely on the Board.  The community members of the NomCom shouldn't be fielding complaints over the election process.  That said, the NomCom is ranking candidates and we are forcing them to rank some lower than others.  I'm sure you'll each find someone that you think should have been ranked differently.  I also want to highlight one other change to the process that we started last year and that isn't included in these documents.  I think the candidate forums on the PASS site were tremendously helpful last year in helping people to find out more about candidates.  They give our members a way to ask hard questions of the candidates and publicly see their answers. This year we have two important groups to fill.  The first is the NomCom.  We need three people from our membership to step up and fill this role.  It won't be easy.  You will have to make subjective rankings of your fellow community members.  Your actions will be important in deciding who the future leaders of PASS will be.  There's a 50/50 chance that one of the people you interview will be the President of PASS someday.  This is not a responsibility to be taken lightly. The second is the slate of candidates.  If you've ever thought about running for the Board this is the year.  We've never had nine candidates on the ballot before.  Your chances of making it through the NomCom are higher than in any previous year.  Unfortunately the more of you that run, the more of you that will lose in the election.  And hopefully that competition will mean more community involvement and better Board members for PASS. Is this the end of changes to the election process?  It isn't.  Every year that I've been on the Board the election process has changed.  Some years there have been small changes and some years there have been large changes.
After this election we’ll look at how the process worked and decide what steps to take – just like we do every year.

    Read the article

  • Expectations + Rewards = Innovation

    - by D'Arcy Lussier
    “Innovation” is a heavy word. We regard those that embrace it as “Innovators”. We describe organizations as being “Innovative”. We hold those associated with the word in high regard, even though its dictionary definition is very simple: Introducing something new. What our culture has done is wrapped Innovation in white robes and a gold crown. Innovation is rarely just introducing something new. Innovations and innovators are typically associated with other terms: groundbreaking, genius, industry-changing, creative, leading. Being a true innovator and creating innovations are a big deal, and something companies try to strive for…or at least say they strive for. There’s huge value in being recognized as an innovator in an industry, since the idea is that innovation equates to increased profitability. IBM ran an ad a few years back that showed what their view of innovation is: “The point of innovation is to make actual money.” If the money aspect makes you feel uneasy, consider it another way: the point of innovation is to <insert payoff here>. Companies that innovate will be more successful. Non-profits that innovate can better serve their target clients. Governments that innovate can better provide services to their citizens. True innovation is not easy to come by though. As with anything in business, how well an organization will innovate is reliant on the employees it retains, the expectations placed on those employees, and the rewards available to them. In a previous blog post I talked about one formula: Right Employees + Happy Employees = Productive Employees I want to introduce a new one, that builds upon the previous one: Expectations + Rewards = Innovation  The level of innovation your organization will realize is directly associated with the expectations you place on your staff and the rewards you make available to them. Expectations We may feel uncomfortable with the idea of placing expectations on our staff, mainly because expectation has somewhat of a negative or cold connotation to it: “I expect you to act this way or else!” The problem is in the or-else part…we focus on the negative aspects of failing to meet expectations instead of looking at the positive side. “I expect you to act this way because it will produce <insert benefit here>”. Expectations should not be set to punish but instead be set to ensure quality. At a recent conference I spoke with some Microsoft employees who told me that you have five years from starting with the company to reach a “Senior” level. If you don’t, then you’re let go. The expectation Microsoft placed on their staff is that they should be working towards improving themselves, taking more responsibility, and thus ensure that there is a constant level of quality in the workforce. Rewards Let me be clear: a paycheck is not a reward. A paycheck is simply the employer’s responsibility in the employee/employer relationship. A paycheck will never be the key motivator to drive innovation. Offering employees something over and above their required compensation can spur them to greater performance and achievement. Working in the food service industry, this tactic was used again and again: whoever has the highest sales over lunch will receive a free lunch/gift certificate/entry into a draw/etc. There was something to strive for, to try beyond the baseline of what our serving jobs were. It was through this that innovative sales techniques would be tried and honed, with key servers being top sellers time and time again. 
    At a code camp I spoke at, I was amazed to see that all the employees from one company received $100 Visa gift cards as a thank you for taking time to speak. Again, offering something over and above that can give that extra push for employees. Rewards work. But what about the fairness angle? In the restaurant example I gave, there were servers that would never win the competition. They just weren't good enough at selling and never seemed to get better. So should those that did work at performing better and produce more sales for the restaurant not get rewarded because those who weren't working at performing better might get upset? Of course not! Organizations succeed because of their top performers and those that strive to join their ranks. The Expectation/Reward Graph While the Expectations + Rewards = Innovation formula may seem like a simple mathematics formula, there's much more going on under the hood. In fact there are four different outcomes that could occur based on what you put in as values for Expectations and Rewards. Consider the graph below and the descriptions that follow: Disgruntled – High Expectation, Low Reward I worked at a company where the mantra was "Company First, Because We Pay You". Even today I still hear stories of how this sentiment continues to be perpetuated: they provide you a paycheck and a means to live, therefore you should always put them as your top priority. Of course, this is a huge imbalance in the expectation/reward equation. Why would anyone willingly meet high expectations of availability, workload, deadlines, etc. when there is no reward other than a paycheck to show for it? Remember: paychecks are not rewards! Instead, you see employees become disgruntled, which not only affects the level of production but also the level of quality within an organization. It also means that you see higher turnover. Complacent – Low Expectation, Low Reward Complacency is a systemic problem that typically exists throughout all levels of an organization. With no real expectations or rewards, nobody needs to excel. In fact, those that do try to innovate, improve, or introduce new things into the organization might be shunned or pushed out by the rest of the staff who are just doing things the same way they've always done them. The bigger issue for the organization with low/low values is that at best they'll never grow beyond their current size (and may actually shrink), and at worst will cease to exist. Entitled – Low Expectation, High Reward It's one thing to say you have the best people and reward them as such, but it's another thing to actually have the best people and reward them as such. Organizations with Entitled employees are the former: their organization provides them with all types of comforts, benefits, and perks. But there's no requirement before the rewards are doled out, and there's no short-list of who receives the rewards. Everyone in the company is treated the same and is given an equal share of the spoils. Entitlement is actually almost identical to Complacency, with one notable difference: just try to introduce higher expectations into an entitled organization! Entitled employees have been spoiled for so long that they can't fathom having rewards taken from them, or having to achieve specific levels of performance before attaining them. Those running the organization also buy in to the Entitled sentiment, feeling that they must maintain the same level of comforts to appease their staff…even though the quality of the employee pool may be suspect.
    Innovative – High Expectation, High Reward Finally we have the Innovative organization which places high expectations but also provides high rewards. This organization gets it: if you truly want the best employees you need to apply equal doses of pressure and praise. Realize that I'm not suggesting crazy overtime or unrealistic working conditions. I do not agree with the "Glengarry Glen Ross" method of encouragement. But as anyone who follows sports can tell you, the teams that win are the ones where the coaches push their players to be their best; to achieve new levels of performance that they didn't know they could reach. And the result for the players is more money, fame, and opportunity. It's in this environment that organizations can focus on innovation – true innovation that builds the business and allows everyone involved to truly benefit. In Closing Organizations love to use the word "Innovation" and its derivatives, but very few actually do innovate. For many, the term has just become another marketing buzzword to lump in with all the other business terms that get overused. But for those organizations that truly get the value of innovation, they will be the ones surging forward while other companies simply fade into the background. And they will be the organizations that expect more from their employees, and give them their just rewards.

    Read the article

  • Emtel Knowledge Series - Q2/2014

    From Cyber Island to Smart Mauritius Cyber Island? Smart Mauritius? - What is Emtel talking about? "With the majority of the population living in urban environments today, the concept of "Smart Cities" has become an urgent necessity. "Smart Cities" refer to an urban transformation which, by using latest ICT technologies makes cities more efficient. Many Governments are setting out ambitious plans to build the cities of the future based on massive connectivity, high bandwidth communications, intelligent sensors and analysis of huge volumes of data. Various researches have shown four key enablers for smart city success - Government leadership, suitable technology infrastructure, solid public-private partnerships and engaged citizens. It is around these enabling factors that telecoms companies can play a vital role in assisting governments to deliver on the smart city vision." The Emtel Knowledge Series runs alongside Emtel's 25th anniversary celebrations throughout the year, and the master of ceremonies, Kim Andersen, mentioned that there will be more upcoming events on a quarterly basis. As a representative of the Mauritius Software Craftsmanship Community (MSCC), I had absolutely no hesitation about joining in again. Following my visit to the first Emtel Knowledge Series workshop back in February this year, it was great to have another opportunity to meet and exchange ideas with technology experts. But quite frankly, what is it with those buzzwords... As far as I remember, and as it was mentioned, "Cyber Island" is an old initiative from around 2005/2006 which was refreshed in 2010. It implies the empowerment of Information & Communication Technologies (ICT) as an essential factor of growth by the government here in Mauritius. Actually, the first promotional period of Cyber Island brought me here, but that's another story. The venue and its own problems Like last time, the event was organised and held at the Conference Hall at Cyber Tower I in Ebene. As I've been working there for some years, I know about the frustrating situation of finding a proper parking space. So, does Smart Island include better solutions for finding parking spaces? Maybe - let's see whether I will be able to answer that question at the end of the article. Anyway, after circling the tower almost two times, I finally got a decent space to put the car without actually risking a ticket or damage. International speakers and their experience Once again, Emtel did a great job of getting international expertise onto the stage to share their experience and vision on this kind of undertaking. Personally, I really appreciated the fact that they were speakers of global reach who could share first-hand knowledge. Johan Gott spoke about the fundamental change that the Swedish government ignited in order to move their society and workers' environment away from heavy industry towards a knowledge-based approach. Additionally, we spoke about the effort to transform New York City into a greener and more efficient Smart City. Given modern technology, he also advised that any kind of available Big Data should be opened to the general public - this openness would provide a playground for anyone to garner new ideas and most probably solid solutions that no one else had thought of before.
    Emtel Knowledge Series on moving from Cyber Island to Smart Mauritius. Later during the afternoon, that exact statement regarding openness to and transparency of government-owned Big Data was emphasised again by the Danish speaker Kim Andersen and his former colleague Mika Jantunen from Finland. Mika continued to underline the important role of the government in providing a solid foundation for a knowledge-based society and mentioned that Finnish citizens have a constitutional right to broadband connectivity. Besides free higher (tertiary) education, Finland has already produced a good number of innovations, among them: being the first country to grant voting rights to women, free higher education, a constitutional right to broadband connectivity, Nokia, Linux, Angry Birds, the sauna, and others...  General access to the internet via broadband and/or mobile connectivity is surely a key factor towards Smart Cities, or better said Smart Mauritius, given the area dimensions and size of population. CTO Paul Valette gave the audience a brief overview of the essential role that Emtel will have in moving Mauritius forward towards a knowledge-based and innovation-driven environment for its citizens. What I have seen looks really promising, and with recently published information that Mauritius has 127% mobile penetration - meaning more than one mobile phone, smartphone or tablet per person - it will be crucial to have the right infrastructure for these connected devices. How would it be possible to achieve a knowledge-based society? YouTube to the rescue! Seriously, gaining more knowledge will require fast access to educational course material, as explained by Dr Kaviraj Sukon, General Director of the Open University of Mauritius. According to him, a good number of high-profile universities in the world have opened their course libraries to the general public, among them edX, Coursera and the Open University. Nowadays, you're actually able to study for and earn a BSc or even MSc certification at your own pace - no need to attend classes on campus. It was really impressive to see the number of available hours - more than enough for a life-long learning experience! Networking in the name of the MSCC As briefly mentioned above, I was looking to combine two approaches for this workshop. Of course, getting the latest information and updates on available Emtel services, especially for my business here on the west coast of the island, but also meeting and greeting new people for the MSCC. And I think it was very positive on both sides. Let me quickly describe some of the key aspects of the day: Met with Arnaud Meslier and Kellie, both from Microsoft, to swap the latest information on IT events - and I got an invite to the Microsoft Windows Phone 8.1 Dev Camp. Got in touch with Arvin Lockee of Emtel to check our options for meeting with the data team, and to seize the opportunity for a tour of the Emtel Data Centre. Had a great chat with Avinash Meetoo of Knowledge 7, Kim Andersen and Mika Jantunen about the state of teaching and learning in general, and specifically in the private sector here in Mauritius. Additionally, a number of other interesting chats... Once again, I'm catching up on a couple of business cards in order to provide more background information about the MSCC, and to create better awareness of the MSCC within local IT businesses. There is more to come soon!  Resume of the day The number of attendees at this event doubled or even tripled this time.
    The whole organisation was improved massively, and the combination of presentations and summarising panel discussions was better than at the previous workshop back in February. Overall, once again a well-organised workshop, and I'm already looking forward to joining the next one in Q3. Update: At the end of July we finally managed to visit the Emtel Data Centre in Arsenal. It was an interesting opportunity for some of our MSCC members.

    Read the article

  • DevConnections Spring 2010 Speaker Evals and Tips

    As a conference speaker, I always look forward to hearing from attendees whether they felt my sessions were valuable and worth their time.  It's always gratifying to get a high score, but of course it's the (preferably constructive) criticism that's key to continued improvement.  I'm by no means the best technical presenter around, and I'm always looking for ways to improve. I've recently spoken at a few events, including TechEd and an Ohio event called Stir Trek.  DevConnections was actually back in April, but they're just getting their final evals out to speakers.  TechEd, of course, does online evals so immediately after your talks you can see what people think.  I'll try and post my TechEd evals in the next week or so. I gave 3 talks at DevConnections Spring 2010 / VS2010 Launch which I discussed in this previous blog post.  In this follow-up, I'm just going to share some eval info and my thoughts on it, albeit a couple of months later. Pragmatic ASP.NET Tips, Tricks, and Tools Evals Turned In: 27 Overall Eval: 3.74 Average Score: 3.47 89% found the technical level Just Right.  7.4% thought it was too basic (3.6% did not respond).  Since nobody thought the content was Too complex, I could perhaps have added some more complex material, but having about 90% say it's Just Right is pretty good. 92% said at least 50% of the material was new to them.  36% said 75% or more was new.  That's also pretty good, I think. 77.8% can use the information immediately; 15% can use it within 2-6 months (7.2% no response). Overall 78% rated the session Excellent, 18% Good, 4% Fair. All comments (9): Steve did a great job. Excellent session! It was good. I'm now super excited to attend Steve's other sessions later today.  Very useful. One of the best speakers here.  Bring him back to future conferences please. Continue to have this session with new and old stuff.  I always find something I did not know about. Excellent!  This was the best session I've seen all week. Did not increase font on all pages - could not see. For Steve to have had more sessions. Note to self - make the fonts bigger across the board.  Otherwise, this is all good for my ego. :)  This is always a very popular session and one I really enjoy giving.  Tips and Tricks talks are pretty easy because you don't have to go in depth with any particular thing, and they're almost always with existing technology so you're not dealing with betas, lack of documentation, and other issues.  It's an easy session to do well, in my experience, and one which I think attendees definitely appreciate.   What's New in ASP.NET MVC 2 Evals Turned In: 23 Overall Eval: 3.77 Average Score: 3.47 (wow, I can't believe I scored better on this talk than the tips and tricks talk, which I've given many times and was more excited about) 96% found the technical level Just Right.  90% found 50% or more of the material to be new.  43% can use the info immediately, and another 43% can use it within 2-6 months - I guess that speaks to adoption rates of MVC 2 among my attendees. Overall 74% said the session was Excellent, 22% Good.  4% No Response. All Comments (6): Great job, thank you. Great speaker! Really good, a little lost in the code at some points, but great information. Speaker needs to repeat questions from audience for everyone to hear. Exceeded my expectations. Great speaker, very informative. I really do try to religiously repeat questions from the audience for everyone to hear, but obviously I didn't do it 100% of the time.  Note to self - remember to repeat questions. That and making fonts big are really basic speaker best practices, which just goes to prove that fundamentals are always something that can be perfected.   SOLIDify Your ASP.NET MVC 2 Application Evals Turned In: 8 (!) Overall Eval: 3.63 Average Score: 3.47 As I recall this was one of the last talks of the day / show, which might account for the low number of evals turned in.  I don't recall speaking to an empty room for this talk, although it certainly wasn't as crowded as the tips and tricks talk. 100% found the technical level Just Right.  100% found at least half the material new.  62.5% can use it at once and 37.5% within 2-6 months.  62.5% rated the session Excellent overall; 37.5% Good.  I'm thinking there were 5 evals with all 4s checked and 3 with all 3s checked (4 = Excellent, 3 = Good). All Comments (3): This covered many topics I've read about recently, and it helped reinforce them. It was a nice overview of the SOLID principle, but I thought there might be specifics for MVC2.  I am glad there is not. Move a little slower. Ok, so another fundamental: don't go too fast.  Looks like I got one fundamental tip from the comments of each talk. My Take-Aways Remember the fundamentals.  It's worth going through a checklist prior to presenting to make sure these things are fresh in your mind.  Increase all font sizes.  Repeat all questions from audience members without microphones (this is also a great way to stall for time, btw).  Resist the urge to move too quickly, especially if you're nervous or short of time.  Writing this up in a blog post also further reinforces these fundamentals for me, which is one of the main reasons why I do it: I retain things better when I write them, and even more so when I write them for public consumption since I have to really think about what I'm saying.  And maybe a few of you find this interesting or helpful, which is a bonus. Did you know that DotNetSlackers also publishes .NET articles written by well-known .NET authors? We already have over 80 articles in several categories, including Silverlight. Take a look.

    Read the article

  • Agilist, Heal Thyself!

    - by Dylan Smith
    I’ve been meaning to blog about a great experience I had earlier in the year at Prairie Dev Con Calgary.  Myself and Steve Rogalsky did a session that we called “Agilist, Heal Thyself!”.  We used a format that was new to me, but that Steve had seen used at another conference.  What we did was start by asking the audience to give us a list of challenges they had had when adopting agile.  We wrote them all down, then had everybody vote on the most interesting ones.  Then we split into two groups, and each group was assigned one of the agile challenges.  We had 20 minutes to discuss the challenge, and suggest solutions or approaches to improve things.  At the end of the 20 minutes, each of the groups gave a brief summary of their discussion and learning's, then we mixed up the groups and repeated with another 2 challenges. The 2 groups I was part of had some really interesting discussions, and suggestions: Unfinished Stories at the end of Sprints The first agile challenge we tackled, was something that every single Scrum team I have worked with has struggled with.  What happens when you get to the end of a Sprint, and there are some stories that are only partially completed.  The team in question was getting very de-moralized as they felt that every Sprint was a failure as they never had a set of fully completed stories. How do you avoid this? and/or what do you do when it happens? There were 2 pieces of advice that were well received: 1. Try to bring stories to completion before starting new ones.  This is advice I give all my Scrum teams.  If you have a 3-week sprint, what happens all too often is you get to the end of week 2, and a lot of stories are almost done; but almost none are completely done.  This is a Bad Thing.  I encourage the teams I work with to only start a new story as a very last resort.  If you finish your task look at the stories in progress and see if there’s anything you can do to help before moving onto a new story.  In the daily standup, put a focus on seeing what stories got completed yesterday, if a few days go by with none getting completed, be sure this fact is visible to the team and do something about it.  Something I’ve been doing recently is introducing WIP (Work In Progress) limits while using Scrum.  My current team has 2-week sprints, and we usually have about a dozen or stories in a sprint.  We instituted a WIP limit of 4 stories.  If 4 stories have been started but not finished then nobody is allowed to start new stories.  This made it obvious very quickly that our QA tasks were our bottleneck (we have 4 devs, but only 1.5 testers).  The WIP limit forced the developers to start to pickup QA tasks before moving onto the next dev tasks, and we ended our sprints with many more stories completely finished than we did before introducing WIP limits. 2. Rather than using time-boxed sprints, why not just do away with them altogether and go to a continuous flow type approach like KanBan.  Limit WIP to keep things under control, but don’t have a fixed time box at the end of which all tasks are supposed to be done.  This eliminates the problem almost entirely.  At some points in the project (releases) you need to be able to burn down all the half finished stories to get a stable release build, but this probably occurs less often than every sprint, and there are alternative approaches to achieve it using branching strategies rather than forcing your team to try to get to Zero WIP every 2-weeks (e.g. 
when you are ready for a release, create a new branch for any new stories, but finish all existing stories in the current branch and release it). Trying to Introduce Agile into a team with previous Bad Agile Experiences One of the agile adoption challenges somebody described, was he was in a leadership role on a team he had recently joined – lets call him Dave.  This team was currently very waterfall in their ALM process, but they were about to start on a new green-field project.  Dave wanted to use this new project as an opportunity to do things the “right way”, using an Agile methodology like Scrum, adopting TDD, automated builds, proper branching strategies, etc.  The problem he was facing is everybody else on the team had previously gone through an “Agile Adoption” that was a horrible failure.  Dave blamed this failure on the consultant brought in previously to lead this agile transition, but regardless of the reason, the team had very negative feelings towards agile, and was very resistant to trying it out again.  Dave possibly had the authority to try to force the team to adopt Agile practices, but we all know that doesn’t work very well.  What was Dave to do? Ultimately, the best advice was to question *why* did Dave want to adopt all these various practices. Rather than trying to convince his team that these were the “right way” to run a dev project, and trying to do a Big Bang approach to introducing change.  He would be better served by identifying problems the team currently faces, have a discussion with the team to get everybody to agree that specific problems existed, then have an open discussion about ways to address those problems.  This way Dave could incrementally introduce agile practices, and he doesn’t even need to identify them as “agile” practices if he doesn’t want to.  For example, when we discussed with Dave, he said probably the teams biggest problem was long periods without feedback from users, then finding out too late that the software is not going to meet their needs.  Rather than Dave jumping right to introducing Scrum and all it entails, it would be easier to get buy-in from team if he framed it as a discussion of existing problems, and brainstorming possible solutions.  And possibly most importantly, don’t try to do massive changes all at once with a team that has not bought-into those changes.  Taking an incremental approach has a greater chance of success. I see something similar in my day job all the time too.  Clients who for one reason or another claim to not be fans of agile (or not ready for agile yet).  But then they go on to ask me to help them get shorter feedback cycles, quicker delivery cycles, iterative development processes, etc.  It’s kind of funny at times, sometimes you just need to phrase the suggestions in terms they are using and avoid the word “agile”. PS – I haven’t blogged all that much over the past couple of years, but in an attempt to motivate myself, a few of us have accepted a blogger challenge.  There’s 6 of us who have all put some money into a pool, and the agreement is that we each need to blog at least once every 2-weeks.  The first 2-week period that we miss we’re eliminated.  Last person standing gets the money.  So expect at least one blog post every couple of weeks for the near future (I hope!).  
And check out the blogs of the other 5 people in this blogger challenge: Steve Rogalsky: http://winnipegagilist.blogspot.ca Aaron Kowall: http://www.geekswithblogs.net/caffeinatedgeek Tyler Doerkson: http://blog.tylerdoerksen.com David Alpert: http://www.spinthemoose.com Dave White: http://www.agileramblings.com (note: site not available yet.  should be shortly or he owes me some money!)

    Read the article

  • Advice for a distracted, unhappy, recently graduated programmer? [closed]

    - by Re-Invent
    I graduated 4 months ago. I had offers from a few good places to work at. At the same time I wanted to stick to building a small software business of my own, still have some ideas with good potential, some half done projects frozen in my github. But due to social pressures, I chose a job, the pay is great, but I am half-passionate about it. A small team of smart folks building useful product, working out contracts across the world. I've started finding it extremely boring. Boring to the extent that I skip 2-3 days a week together not doing work. Neither do I spend that time progressing any of my own projects. Yes, I feel stupid at the way I'm wasting time, but I don't understand exactly why is it happening. It's as if all the excitement has been drained. What can I do about it? Long version: School - I was in third standard. Only students, 6th grade had access to computer labs. I once peeked into the lab from the little door opening. No hard-disks, MS DOS on 5 1/2 inch floppies. I asked a senior student to play some sound in BASIC. He used PLAY to compose a tune. Boy! I was so excited, I was jumping from within. Back home, asked my brother to teach me some programming. We bought a book "MODERN All About GW-BASIC for Schools & Colleges". The book had everything, right from printing, to taking input, file i/o, game programming, machine level support, etc. I was in 6th standard, wrote my first game - a wheel of fortune, rotated the wheel by manipulating 16 color palette's definition. Got internet soon, got hooked to QuickBasic programming community. Made some more games "007 in Danger", "Car Crush 2" for submission to allbasiccode archives. I was extremely excited about all this. My interests now swayed into "hacking" (computer security). Taught myself some perl, found it annoying, learnt PHP and a bit of SQL. Also taught myself Visual Basic one of the winters and wrote a pacman clone with Direct X. By the time I was in 10th standard, I created some evil tools using visual basic, php and mysql and eventually landed myself into an unpaid side-job at a government facility, building evil tools for them. It was a dream come true for crackers of that time. And so was I, still very excited. Things changed soon, last two years of school were not so great as I was balancing preps for college, work at govt. and studies for school at same time. College - College was opposite of all I had wished it to be. I imagined it to be a place where I'd spend my 4 years building something awesome. It was rather an epitome of rote learning, attendance, rules, busy schedules, ban on personal laptops, hardly any hackers surrounding you and shit like that. We had to take permissions to even introduce some cultural/creative activities in our annual schedule. The labs won't be open on weekends because the lab employees had to have their leaves. Yes, a horrible place for someone like me. I still managed to pull out a project with a friend over 2 months. Showed it to people high in the academia hierarchy. They were immensely impressed, we proposed to allow personal computers for students. They made up half-assed reasons and didn't agree. We felt frustrated. And so on, I still managed to teach myself new languages, do new projects of my own, do an intern at the same govt. facility, start a small business for sometime, give a talk at a conference I'm passionate about, win game-dev and hacking contest at most respected colleges, solve good deal of programming contest problems, etc. 
At the same time I was not content with all these restrictions, great emphasis on rote learning, and sheer wastage of time due to college. I never felt I was overdoing, but now I feel I burnt myself out. During my last days at college, I did an intern at a bigco. While I spent my time building prototypes for certain LBS, the other interns around me, even a good friend, was just skipping time. I thought maybe, in a few weeks he would put in some serious efforts at work assigned to him, but all he did was to find creative ways to skip work, hide his face from manager, engage people in talks if they try to question his progress, etc. I tried a few time to get him on track, but it seems all he wanted was to "not to work hard at all and still reap the fruits". I don't know how others take such people, but I find their vicinity very very poisonous to one's own motivation and productivity. Over that, the place where I come from, HRs don't give much value to what have you done past 4 years. So towards the end of out intern, we all were offered work at the bigco, but the slacker, even after not writing more than 200 lines of code was made a much better offer. I felt enraged instantly - "Is this how the corp world treats someone who does fruitful, if not extra-ordinary work form them for past 6 months?". Yes, I did try to negotiate and debate. The bigcos seem blind due to departmentalization of responsibilities and many layers of management. I decided not to be in touch with any characters of that depressing play. Probably the busy time I had at college, ignoring friends, ignoring fun and squeezing every bit of free time for myself is also responsible. Probably this is what has drained all my willingness to work for anyone. I find my day job boring, at the same time I with to maintain it for financial reasons. I feel a bit burnt out, unsatisfied and at the same time an urge to quit working for someone else and start finishing my frozen side-projects (which may be profitable). Though I haven't got much to support myself with food, office, internet bills, etc in savings. I still have my day job, but I don't find it very interesting, even though the pay is higher than the slacker, I don't find money to be a great motivator here. I keep comparing myself to my past version. I wonder how to get rid of this and reboot myself back to the way I was in school days - excited about it, tinkering, building, learning new things daily, and NOT BORED?

    Read the article

  • Data Formatters temporarily unavailable

    - by iphone newbie
    I'm currently working on an iPhone app. This app has a login screen, also a signup screen. After the user has successfully signed up, I dismiss the signup view, then the app automatically logs in using the created account. After which, the login view is dismissed, showing the main view. I'm trying to modify this by immediately dismissing the login view, since I already have the account details of the user when the signup is successful. Basically, the ideal flow is: after the user successfully signs up, I save the username and password in a singleton class, then dismiss the signup view. When I get to the parent view (which is the login screen), I have a variable that checks if there was a successful signup. If that variable is true, I want to immediately dismiss the login view. However, I come across this error message: Data Formatters temporarily unavailable, will re-try after a 'continue'. (Unknown error loading shared library "/Developer/usr/lib/libXcodeDebuggerSupport.dylib") I'm not really sure why this happens. I have no problems dismissing the login view when I go through the actual login procedure - which of course also dismisses the login view if the user inputs a correct username and password. I'm not exactly sure, but I'm starting to think that the iPhone cannot handle dismissing 2 view controllers almost at the same time. Is it possible that I'm dismissing the login view too quickly? Is that a factor? Is there anyway for me to be able to dismiss 2 view controllers almost simultaneously without coming across this error message?

    Read the article

  • Cannot import the following keyfile: blah.pfx. The keyfile may be password protected.

    - by JasonD
    We just upgraded our Visual Studio 2008 projects to VS2010. All of our assemblies were strong signed using a Verisign code signing certificate. Since the upgrade we continuously get the following error: Cannot import the following key file: companyname.pfx. The key file may be password protected. To correct this, try to import the certificate again or manually install the certificate to the Strong Name CSP with the following key container name: VS_KEY_3E185446540E7F7A This happens on some developer machines and not others. Some methods used to fix this that worked some of the time include: re-installing the key file from Windows Explorer (right click on the PFX file and click Install) installing VS2010 on a fresh machine for the first time prompts you for the password the first time you open the project, and then it works. On machines upgraded from VS2008, you don't get this option. I've tried using the SN.EXE utility to register the key with the Strong Name CSP as the error message suggests, but whenever I run the tool with any options using the version that came with VS2010, SN.EXE just lists its command line arguments instead of doing anything. This happens regardless of what arguments I supply. Does anyone know WHY this is happening, and have clear steps to fix it? I'm about to give up on Click Once installs and Microsoft Code Signing. Thanks for any help!

    Read the article

  • HintPath vs ReferencePath in Visual Studio

    - by toasteroven
    What exactly is the difference between the HintPath in a .csproj file and the ReferencePath in a .csproj.user file? We're trying to commit to a convention where dependency DLLs are in a "releases" svn repo and all projects point to a particular release. Since different developers have different folder structures, relative references won't work, so we came up with a scheme to use an environment variable pointing to the particular developer's releases folder to create an absolute reference. So after a reference is added, we manually edit the project file to change the reference to an absolute path using the environment variable. I've noticed that this can be done with both the HintPath and the ReferencePath, but the only difference I could find between them is that HintPath is resolved at build-time and ReferencePath when the project is loaded into the IDE. I'm not really sure what the ramifications of that are though. I have noticed that VS sometimes rewrites the .csproj.user and I have to rewrite the ReferencePath, but I'm not sure what triggers that. I've heard that it's best not to check in the .csproj.user file since it's user-specific, so I'd like to aim for that, but I've also heard that the HintPath-specified DLL isn't "guaranteed" to be loaded if the same DLL is e.g. located in the project's output directory. Any thoughts on this?

    Read the article

  • Cannot convert lambda expression to type 'string' because it is not a delegate type

    - by RememberME
    I have the following code written by another developer on 2 pages of my site. This used to work just fine, but now it is giving the error "Cannot convert lambda expression to type 'string' because it is not a delegate type" on the Delete line with Ajax.ThemeRollerActionLink. I don't go into this section of the site often, and we recently upgraded from MVC 1.0 to 2.0. I'm guessing that's probably when it stopped working. I've looked up this error and the recommended fix seems to be to add "using System.Linq". However, the page already has <%@ Import Namespace="System.Linq" %> <% Html.Grid(Model).Columns(col => { col.For(c => "<a href='" + Url.Action("Edit", new { userName = c }) + "' class=\"fg-button fg-button-icon-solo ui-state-default ui-corner-all\"><span class=\"ui-icon ui-icon-pencil\"></span></a>").Named("Edit").DoNotEncode(); col.For(c => Ajax.ThemeRollerActionLink("fg-button fg-button-icon-solo ui-state-default ui-corner-all", "ui-icon ui-icon-close", "Delete", new { userName = c }, new AjaxOptions { Confirm = "Delete User?", HttpMethod = "Delete", InsertionMode = InsertionMode.Replace, UpdateTargetId = "gridcontainer", OnSuccess = "successDeleteAssignment", OnFailure = "failureDeleteAssignment" })).Named("Delete").DoNotEncode(); col.For(c => c).Named("User"); }).Attributes(id => "userlist").Render(); %>
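    For readers hitting the same message in their own code, here is a minimal, self-contained C# illustration of how this error commonly arises - it is not the poster's MvcContrib code, and the type and namespace names (Acme, Grid, Column) are made up for the example. The point is that when the extension method that accepts a lambda lives in a namespace that is not imported, the compiler can only try the string-taking overload and reports the lambda-conversion error:

        using System;
        using Acme.GridExtensions; // removing this using directive makes the call in Main() fail with:
                                   // "Cannot convert lambda expression to type 'string' because it is not a delegate type"

        namespace Acme
        {
            public class Grid
            {
                // Instance overload that takes a plain string.
                public void Column(string name)
                {
                    Console.WriteLine("column: " + name);
                }
            }
        }

        namespace Acme.GridExtensions
        {
            public static class GridColumnExtensions
            {
                // Extension overload that takes a delegate; it is only visible to callers
                // that import the Acme.GridExtensions namespace.
                public static void Column(this Acme.Grid grid, Func<string, string> format)
                {
                    Console.WriteLine("column: " + format("user"));
                }
            }
        }

        public static class Program
        {
            public static void Main()
            {
                var grid = new Acme.Grid();

                // With the using directive present, this binds to the delegate-taking extension
                // method and prints "column: USER". Without it, only Column(string) is in scope,
                // so the compiler reports the lambda-conversion error quoted above.
                grid.Column(name => name.ToUpper());
            }
        }

    So when this error appears after an upgrade, the first things to check are the namespace imports on the page (or in the <pages><namespaces> section of web.config) and whether the referenced MvcContrib assembly still exposes the extension method with the same signature.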

    Read the article

  • Using Perl WWW::Facebook::API to Publish To Facebook Newsfeed

    - by Russell C.
    We use Facebook Connect on our site in conjunction with the WWW::Facebook::API CPAN module to publish to our users' newsfeeds when requested by the user. So far we've been able to successfully update the user's status using the following code:

        use WWW::Facebook::API;

        my $facebook = WWW::Facebook::API->new(
            desktop         => 0,
            api_key         => $fb_api_key,
            secret          => $fb_secret,
            session_key     => $query->cookie($fb_api_key.'_session_key'),
            session_expires => $query->cookie($fb_api_key.'_expires'),
            session_uid     => $query->cookie($fb_api_key.'_user')
        );

        my $response = $facebook->stream->publish(
            message => qq|Test status message|,
        );

    However, when we try to update the code above to publish newsfeed stories that include attachments and action links, as specified in the Facebook API documentation for Stream.Publish, nothing works - we have tried about 100 different ways without any success. According to the CPAN documentation, all we should have to do is update our code to something like the following, passing the attachment and action links appropriately, but it doesn't seem to work:

        my $response = $facebook->stream->publish(
            message      => qq|Test status message|,
            attachment   => $json,
            action_links => [@links],
        );

    For example, we are passing the above arguments as follows:

        $json = qq|{
            'name': 'i\'m bursting with joy',
            'href': 'http://bit.ly/187gO1',
            'caption': '{*actor*} rated the lolcat 5 stars',
            'description': 'a funny looking cat',
            'properties': {
                'category': { 'text': 'humor', 'href': 'http://bit.ly/KYbaN'},
                'ratings': '5 stars'
            },
            'media': [{
                'type': 'image',
                'src': 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg',
                'href': 'http://bit.ly/187gO1'
            }]
        }|;

        @links = ["{'text':'Link 1', 'href':'http://www.link1.com'}",
                  "{'text':'Link 2', 'href':'http://www.link2.com'}"];

    Neither the above nor any of the other representations we tried seems to work. I'm hoping another Perl developer out there has this working and can explain how to build the attachment and action_links values in Perl for posting to the Facebook news feed through WWW::Facebook::API. Thanks in advance for your help!

    Read the article

  • Android beginner: Touch events in android gridview

    - by jja
    I am using the following code to do things with a GridView (slightly modified from http://developer.android.com/resources/tutorials/views/hello-gridview.html). I want to replace the OnClickListener and the onClick() method with their "touch" equivalents, i.e. an OnTouchListener and onTouch(), so that when I touch an element in the GridView its image changes, and a double touch on the same element takes it back to the original state. How do I do this? I can't get my code to do it. The click listener works to some extent, but the touch version doesn't. Please help.

        public class ImageAdapter extends BaseAdapter {
            private Context mContext;

            public ImageAdapter(Context c) {
                mContext = c;
            }

            public int getCount() {
                return mThumbIds.length;
            }

            public Object getItem(int position) {
                return null;
            }

            public long getItemId(int position) {
                return 0;
            }

            // create a new ImageView for each item referenced by the Adapter
            public View getView(int position, View convertView, ViewGroup parent) {
                ImageView imageView;
                if (convertView == null) {
                    // if it's not recycled, initialize some attributes
                    imageView = new ImageView(mContext);
                    imageView.setLayoutParams(new GridView.LayoutParams(85, 85));
                    imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
                    imageView.setPadding(8, 8, 8, 8);
                    imageView.setOnClickListener(new View.OnClickListener() {
                        @Override
                        public void onClick(View view) {
                            if (position == 0) {
                                // do this
                            } else {
                                // do this
                            }
                        }
                    });
                } else {
                    imageView = (ImageView) convertView;
                }
                imageView.setImageResource(mThumbIds[position]);
                return imageView;
            }

            // references to our images
            private Integer[] mThumbIds = {
                R.drawable.sample_2, R.drawable.sample_3, R.drawable.sample_4, R.drawable.sample_5,
                R.drawable.sample_6, R.drawable.sample_7, R.drawable.sample_0, R.drawable.sample_1,
                R.drawable.sample_2, R.drawable.sample_3, R.drawable.sample_4, R.drawable.sample_5,
                R.drawable.sample_6, R.drawable.sample_7, R.drawable.sample_0, R.drawable.sample_1,
                R.drawable.sample_2, R.drawable.sample_3, R.drawable.sample_4, R.drawable.sample_5,
                R.drawable.sample_6, R.drawable.sample_7
            };
        }

    Read the article

  • Code sign error with Xcode 3.2

    - by quux
    I had a fully working build environment before upgrading to iPhone OS 3.1 and Xcode 3.2. Now when I try to do a build, I get the following: Code Sign error: Provisioning profile 'FooApp test' specifies the Application Identifier 'no.fooapp.iphoneapp' which doesn't match the current setting 'TGECMYZ3VK.no.fooapp.iphoneapp'. The problem is that Xcode somehow manages to think that the "FooApp Test" provisioning profile specifies the Application Identifier "no.fooapp.iphoneapp", but this is not the case. In the Organizer (and in the iPhone developer portal website) the app identifier is correctly seen as 'TGECMYZ3VK.no.fooapp.iphoneapp'. Also, when setting the provisioning profile in the build options at the project level, Xcode correctly identifies the app identifier, but when I go to the target, I'm unable to select any valid provisioning profile. What could be causing this problem? Update: I've tried to create a new provisioning profile, but still no luck. I also tried simply changing the app identifier in Info.plist to just "no.fooapp.iphoneapp". The build succeeds, but now I get an error from the Organizer: The executable was signed with invalid entitlements. The entitlements specified in your application's Code Signing Entitlements file do not match those specified in your provisioning profile. (0xE8008016). This seems reasonable, as the provisioning profile still has the "TGECMYZ3VK.no.fooapp.iphoneapp" application identifier. I also double checked that all certificates are valid in the Keychain. So my question is how I can get Xcode to see the correct application identifier? UPDATE: As noted below, what seems to fix the problem is deleting all provisioning profiles, certificates, etc., making new certificates/profiles and installing them again. If anyone has any other solutions, they would be welcome. :)

    Read the article

  • Java and AppStore receipt verification

    - by user1672461
    I am trying to verify a payment receipt on the server side, but I am getting {"status":21002, "exception":"java.lang.IllegalArgumentException"} in return. Here is the code:

        private final static String _sandboxUriStr = "https://sandbox.itunes.apple.com/verifyReceipt";

        public static void processPayment(final String receipt) throws SystemException {
            final BASE64Encoder encoder = new BASE64Encoder();
            final String receiptData = encoder.encode(receipt.getBytes());
            final String jsonData = "{\"receipt-data\" : \"" + receiptData + "\"}";
            System.out.println(receipt);
            System.out.println(jsonData);
            try {
                final URL url = new URL(_productionUriStr);
                final HttpURLConnection conn = (HttpsURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                final OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream());
                wr.write(jsonData);
                wr.flush();
                // Get the response
                final BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                String line;
                while ((line = rd.readLine()) != null) {
                    System.out.println(line);
                }
                wr.close();
                rd.close();
            } catch (IOException e) {
                throw new SystemException("Error when trying to send request to '%s', %s", _sandboxUriStr, e.getMessage());
            }
        }

    My receipt looks like this: {\n\t"signature" = "[exactly_1320_characters]";\n\t"purchase-info" = "[exactly_868_characters]";\n\t"environment" = "Sandbox";\n\t"pod" = "100";\n\t"signing-status" = "0";\n} The receipt data with a BASE64-encoded receipt looks like this: {"receipt-data" : "[Block_of_chars_76x40+44=3084_chars_total]"} Does someone have an idea, or sample code, for how to get from the receipt string to the reply JSON mentioned here: [http://developer.apple.com/library/ios/#documentation/NetworkingInternet/Conceptual/StoreKitGuide/VerifyingStoreReceipts/VerifyingStoreReceipts.html#//apple_ref/doc/uid/TP40008267-CH104-SW1]? Thank you

    Read the article

  • The SaveAs method is configured to require a rooted path, and the path <blah> is not rooted.

    - by Jen
    OK, I've seen a few people with this issue - but I'm already using a file path, not a relative path. My code works fine on my PC, but when another developer goes to upload an image they get this error. I thought it was a security permission thing on the folder - but the system account has full access to the folder (though I get confused about how to test which account the application is running under). Also, running locally doesn't usually give you too many security issues :) A code snippet:

        Guid fileIdentifier = Guid.NewGuid();
        CurrentUserInfo currentUser = CMSContext.CurrentUser;
        string identifier = currentUser.UserName.Replace(" ", "") + "_" + fileIdentifier.ToString();
        string fileName1 = System.IO.Path.GetFileName(fileUpload.PostedFile.FileName);
        string Name = System.IO.Path.GetFileNameWithoutExtension(fileName1);
        string renamedFile = fileName1.Replace(Name, identifier);
        string path = ConfigurationManager.AppSettings["MemberPhotoRepository"];
        String filenameToWriteTo = path + currentUser.UserName.Replace(" ", "") + fileName1;
        fileUpload.PostedFile.SaveAs(filenameToWriteTo);

    Where my config settings value is: Again this works fine on my PC! (And I checked the other person has the folder on their PC). Any suggestions would be appreciated - thanks :)
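    A minimal sketch of one way to make the path rooted before calling SaveAs, reusing the variables from the snippet above (Path.IsPathRooted, Server.MapPath and Path.Combine are standard .NET calls; the rest is illustrative and assumes the same "MemberPhotoRepository" app setting):

        string configuredPath = System.Configuration.ConfigurationManager.AppSettings["MemberPhotoRepository"];

        // If the setting is relative or virtual on some machines (e.g. "~/MemberPhotos/"),
        // map it to a physical folder first; SaveAs requires a rooted path.
        string physicalRoot = System.IO.Path.IsPathRooted(configuredPath)
            ? configuredPath
            : System.Web.HttpContext.Current.Server.MapPath(configuredPath);

        string filenameToWriteTo = System.IO.Path.Combine(physicalRoot, currentUser.UserName.Replace(" ", "") + fileName1);
        fileUpload.PostedFile.SaveAs(filenameToWriteTo);

    That would also explain the machine-to-machine difference: a fully qualified value in the config works everywhere, while a relative one only appears to work where the folder layout happens to line up.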

    Read the article

  • Greasemonkey @require jQuery not working "Component not available"

    - by Greg K
    I've seen the other question on here about loading jQuery in a Greasemonkey script. Having tried that method, with this require statement inside my ==UserScript== tags:

        // @require http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js

    I still get the following error message in Firefox's error console:

        Error: Component is not available
        Source File: file:///Users/greg/Library/Application%20Support/Firefox/Profiles/xo9xhovo.default/gm_scripts/myscript/jquerymin.js
        Line: 36

    This stops my Greasemonkey code from running. I made sure I included the @require for jQuery and saved my js file before installing it, as required files are only loaded on installation. Code:

        // ==UserScript==
        // @name My Script
        // @namespace http://www.google.com
        // @description My test script
        // @include http://www.google.com
        // @require http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js
        // ==/UserScript==

        GM_log("Hello");

    I have Greasemonkey 0.8.20091209.4 installed on Firefox 3.5.7 on my MacBook Pro, Leopard (10.5.8). I've cleared my cache (except cookies) and have disabled all other plugins except Flashblock 1.5.11.2, Web Developer 1.1.8 and Adblock Plus 1.1.3. My config.xml with my Greasemonkey script installed:

        <UserScriptConfig>
          <Script filename="myscript.user.js" name="My Script" namespace="http://www.google.com" description="My test script" enabled="true" basedir="myscript">
            <Include>http://www.google.com</Include>
            <Require filename="jquerymin.js"/>
          </Script>

    I can see jquerymin.js sitting in the gm_scripts/myscript/ directory. Additionally, is it common for this error to occur in the console when installing a Greasemonkey script?

        Error: not well-formed
        Source File: file:///Users/Greg/Documents/myscript.user.js
        Line: 1, Column: 1
        Source Code: // ==UserScript==

    Read the article

  • Exception from HRESULT: 0x80020009 (DISP_E_EXCEPTION)) in SharePoint

    - by BeraCim
    Hi all: After googling for many hours for a solution to the above SharePoint exception, I have come to SO for help on this one... I believe the cause of the exception is the following code:

        try
        {
            using (SPSite site = new SPSite(siteId, spUserToken))
            {
                using (SPWeb web = site.OpenWeb(webId))
                {
                    createNewSite(web);
                }
            }
        }

    createNewSite(web) changes the name and URL of "web" using AllowUnsafeUpdates, so by the time it comes out of the method it has been changed. My few months' worth of SharePoint development experience suggests that this is the cause of the exception. "web" is not used after that, so I can comfortably null it myself. The problem is... it didn't work:

        try
        {
            using (SPSite site = new SPSite(siteId, spUserToken))
            {
                SPWeb web = null;
                using (web = site.OpenWeb(webId))
                {
                    createNewSite(web);
                    if (web != null)
                    {
                        web = null;
                    }
                }
            }
        }

    I believe the original developer used the using statement to keep SPWeb objects from leaking. Aside from that, I think it is okay for me to break this pattern solely for the purpose of getting rid of that dreaded exception. So the question: what can I do to the above code to potentially fix this exception? Thanks.

    Read the article

  • iPhone: How to write plist array of dictionary object

    - by Alessandro
    Hello, I'm a young Italian developer for the iPhone. I have a plist file (named "Frase") with this structure:

        Root                Array
          Item 0            Dictionary
            Frase           String
            Preferito       Bool
          Item 1            Dictionary
            Frase           String
            Preferito       Bool
          Item 2            Dictionary
            Frase           String
            Preferito       Bool
          Item 3            Dictionary
            Frase           String
            Preferito       Bool
          etc.

    It is an array that contains many dictionary elements, all the same, each consisting of "Frase" (String) and "Preferito" (Bool). The variable "indirizzo" increases or decreases with clicks of the Next or Back button. The application interface: http://img691.imageshack.us/img691/357/schermata20100418a20331.png When I click the AddPreferito button, the item "Preferito" must become YES, and the array must then be updated with the new dictionary. The code:

        - (void)addpreferito:(id)sender {
            NSString *plistPath = [[NSBundle mainBundle] pathForResource:@"Frase" ofType:@"plist"];
            NSMutableArray *frase = [[NSMutableArray alloc] initWithContentsOfFile:plistPath];
            NSMutableDictionary *dictionary = [frase objectAtIndex:indirizzo];
            [dictionary setValue:YES forKey:@"Preferito"];
            [frase replaceObjectAtIndex:indirizzo withObject:dictionary];
            [frase writeToFile:plistPath atomically:YES];
        }

    Why doesn't it work? Thanks, thanks, thanks!

    Read the article

  • What are some good/reputable/widely-used libraries written in VB.NET?

    - by Dan Tao
    Generally speaking, when VB.NET and C# are compared, there is a lot of strong support for C#, accompanied by some bashing of VB.NET until a respected developer comes along and acts as The Voice Of Reason, pointing out that while VB prior to VB.NET had its fair share of issues, VB.NET is really a very strong, fully OOP language that is, feature-wise, right about on par with C# (with the exception of certain things like a full-bodied lambda syntax [pre-VB10] or the yield keyword, as many C# faithfuls are quick to point out). I myself, having written plenty of code in both VB.NET and C#, fall squarely in the "I prefer C#, but don't consider VB.NET any less of a language" camp. However, one thing I have noticed is that when it comes to respected and/or widely-used libraries for .NET, everything is written in C#. Or at least that's been my impression. This strikes me as a little strange because, aside from the abovementioned sprinkling of nice features (in particular the yield keyword), I tend to view the VB.NET/C# divide as primarily a matter of personal taste. Obviously, plenty of developers prefer C#. But I personally know some developers (good ones) who prefer VB.NET, which would lead me to suspect that surely some libraries (good ones) would be written in VB.NET. They must be out there, and I just haven't found them. What are some good libraries that've been written in VB.NET? The best would be open source, as that would allow interested developers to take a look at some good VB.NET code and see how effective the language can be when used properly. But I'd be interested to know about any libraries at all, particularly reputable ones.

    Read the article

  • iPhone page curl effect

    - by dragon
    I am using this code for a page curl effect. It works fine in the simulator and on the device, but setType:@"pageCurl" is not a documented Apple API, and this caused the app to be rejected by the iPhone Developer Program during the App Store review process:

        animation = [CATransition animation];
        [animation setDelegate:self];
        [animation setDuration:1.0f];
        animation.startProgress = 0.5;
        animation.endProgress = 1;
        [animation setTimingFunction:UIViewAnimationCurveEaseInOut];
        [animation setType:@"pageCurl"];
        [animation setSubtype:@"fromRight"];
        [animation setRemovedOnCompletion:NO];
        [animation setFillMode:@"extended"];
        [animation setRemovedOnCompletion:NO];
        [[imageView layer] addAnimation:animation forKey:@"pageFlipAnimation"];

    So I changed it and am now using this instead:

        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:1];
        [UIView setAnimationDelegate:self];
        [UIView setAnimationBeginsFromCurrentState:YES];
        [UIView setAnimationCurve:UIViewAnimationCurveLinear];
        [UIView setAnimationWillStartSelector:@selector(transitionWillStart:finished:context:)];
        [UIView setAnimationDidStopSelector:@selector(transitionDidStop:finished:context:)];
        // other animation properties
        [UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:imageView cache:YES];
        // set view properties
        [UIView commitAnimations];

    In the code above I want to stop the page curl effect midway, like the Maps application does on the iPod, but I can't. Is there any fix for this, or is there any Apple-documented method for a page curl effect on the iPod touch? I've been searching a lot but haven't found an answer. Can anyone help me? Thanks in advance.

    Read the article

  • CSS window height problem with dynamically loaded CSS

    - by Michael Mao
    Hi all: Please go here and use username "admin" and password "endlesscomic" (without the wrapping quotes) to see the web app demo. Basically what I am trying to do is incrementally integrate my work into this web app, say nightly, for the client to check the progress. He would also like to see, at the very beginning, a mockup of the page layout; I am trying to use the 960 grid system to achieve this. So far, so good - except for one issue: when "mockup.css" is loaded dynamically by jQuery, it "extends" the window towards the bottom, something I do not want to happen... As an inexperienced web developer, I don't know which part is wrong. Below is my js:

        /* master.js */
        $(document).ready(function() {
            $('#addDebugCss').click(function() {
                alertMessage('adding debug css...');
                addCssToHead('./css/debug.css');
                $('.grid-insider').css('opacity', '0.5'); // reset mockup background transparency
            });
            $('#addMockupCss').click(function() {
                alertMessage('adding mockup css...');
                addCssToHead('./css/mockup.css');
                $('.grid-insider').css('opacity', '1'); // set semi-transparent background for mockup
            });
            $('#resetCss').click(function() {
                alertMessage('rolling back to normal');
                rollbackCss(new Array("./css/mockup.css", "./css/debug.css"));
            });
        });

        function alertMessage(msg) // TODO find a better modal prompt
        {
            alert(msg);
        }

        function addCssToHead(path_to_css)
        {
            $('<link rel="stylesheet" type="text/css" href="' + path_to_css + '" />').appendTo("head");
        }

        function rollbackCss(set)
        {
            for (var i in set) {
                $('link[href="' + set[i] + '"]').remove();
            }
        }

    Should something be added to the external mockup.css, or changed in my master.js? Thanks for any hints/suggestions in advance.

    Read the article

  • Difference between FQL query and Graph API object access

    - by jwynveen
    What's the difference between accessing user data with the Facebook Graph API (http://graph.facebook.com/btaylor) and using the Graph API to make an FQL query about the same user (https://api.facebook.com/method/fql.query?query=QUERY)? Also, does anyone know which of them the Facebook Developer Toolkit (for ASP.NET) uses? The reason I ask is that I'm trying to access the logged-in user's birthday after they begin a Facebook Connect session on my site, but when I use the toolkit it doesn't return it. However, if I make a manual call to the Graph API for that user object, it does return it. It's possible I have something wrong with my call from the toolkit. I think I may need to include the session key, but I'm not sure how to get it. Here's the code I'm using:

        _connectSession = new ConnectSession(APPLICATION_KEY, SECRET_KEY);
        try
        {
            if (!_connectSession.IsConnected())
            {
                // Not authenticated, proceed as usual.
                statusResponse = "Please sign-in with Facebook.";
            }
            else
            {
                // Authenticated, create API instance
                _facebookAPI = new Api(_connectSession);

                // Load user
                user user = _facebookAPI.Users.GetInfo();
                statusResponse = user.ToString();
                ViewData["fb_user"] = user;
            }
        }
        catch (Exception ex)
        {
            // An error happened, so disconnect session
            _connectSession.Logout();
            statusResponse = "Please sign-in with Facebook.";
        }
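    For comparison, a minimal sketch of the two "manual" ways to ask for the birthday outside the toolkit (hedged: the accessToken variable is illustrative, and both calls assume the user granted the user_birthday extended permission):

        // Graph API object access: a single HTTP GET returning JSON for one object
        string graphJson = new System.Net.WebClient().DownloadString(
            "https://graph.facebook.com/me?fields=birthday&access_token=" + accessToken);

        // FQL routed through the fql.query endpoint mentioned in the question
        string fqlJson = new System.Net.WebClient().DownloadString(
            "https://api.facebook.com/method/fql.query?format=json&access_token=" + accessToken +
            "&query=" + Uri.EscapeDataString("SELECT birthday_date FROM user WHERE uid = me()"));

    Both go through the same OAuth token; the practical difference is that the Graph call addresses a single object by URL while FQL lets you select specific fields across tables. If GetInfo() omits the birthday, the usual suspects are a missing user_birthday permission or a session key that never made it into the toolkit's session.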

    Read the article

  • Using oauth2_access_token to get connections in linkedIn

    - by Pedro
    I'm trying to get a user's connections from LinkedIn using their API, but when I try to retrieve the connections I get a 401 Unauthorized error. The official documentation says: "You must use an access token to make an authenticated call on behalf of a user. ... You can now use this access_token to make API calls on behalf of this user by appending 'oauth2_access_token=access_token' at the end of the API call that you wish to make." The API call that fails is:

        Error -- http://api.linkedin.com/v1/people/~/connections:(id,headline,first-name,last-name)?format=json&oauth2_access_token=access_token

    I have made the following call without any problems:

        OK -- https://api.linkedin.com/v1/people/~:(id,first-name,last-name,formatted-name,date-of-birth,industry,email-address,location,headline,picture-urls::(original))?format=json&oauth2_access_token=access_token

    The list of endpoints for the connections API is described at http://developer.linkedin.com/documents/connections-api and I just copied and pasted one endpoint from there. So the question is: what's the problem with the endpoint for getting the connections? What am I missing?

    EDIT: for the pre-auth URLs I'm using:

        https://www.linkedin.com/uas/oauth2/authorization?response_type=code&client_id=ConsumerKey&scope=r_fullprofile%20r_emailaddress%20r_network&state&state=NewGuid&redirect_uri=Encoded_Url
        https://www.linkedin.com/uas/oauth2/accessToken?grant_type=authorization_code&code=QueryString_Code&redirect_uri=EncodedCallback&client_id=ConsummerKey&client_secret=ConsumerSecret

    Please find attached the login screen requesting the permissions.

    EDIT2: Switched to https and it worked like a charm!
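    For reference, a minimal sketch of the call that finally worked per EDIT2 - the only change from the failing request is the https scheme (the accessToken variable is illustrative):

        string connectionsJson = new System.Net.WebClient().DownloadString(
            "https://api.linkedin.com/v1/people/~/connections:(id,headline,first-name,last-name)" +
            "?format=json&oauth2_access_token=" + accessToken);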

    Read the article

  • Is Appcelerator Titanium now banned on the iPhone?

    - by altuzar
    This question has been answered quite clearly for MonoTouch here: http://stackoverflow.com/questions/2604033/is-monotouch-now-banned-on-the-iphone But what about Appcelerator Titanium? The new ToS for Apple's iPhone OS 4 say: 3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited). Titanium uses JavaScript, but it is not executed by the iPhone OS WebKit engine directly. In their developer blog, Jeff Haynie says Titanium is in the clear, but I don't know if they are in denial: "It's our belief that we are fully in compliance with iPhone OS 4.0 ToS as we interpret them." I haven't found any official word from Apple, only opinions, and I'm quite confused. I'm not writing another line of code for my app until... you know.

    Read the article

< Previous Page | 374 375 376 377 378 379 380 381 382 383 384 385  | Next Page >