Search Results

Search found 14993 results on 600 pages for 'anonymous the great'.


  • Entity Framework 4.0: Optimal and horrible SQL

    - by DigiMortal
    Lately I had an Entity Framework 4.0 session where I introduced the new features of Entity Framework. During the session I found out together with the audience how Entity Framework 4.0 can generate optimized SQL. After the session I also showed the guys a horrible example of how awful the SQL generated by Entity Framework can be. In this posting I will cover both examples. Optimal SQL Before going to code, take a look at the following model. There is a class called Event and I will use this class in my query. Here is the LINQ To Entities query that uses a small anonymous type. var query = from e in _context.Events             select new { Id = e.Id, Title = e.Title }; Debug.WriteLine(((ObjectQuery)query).ToTraceString()); Running this code gives us the following SQL. SELECT      [Extent1].[event_id] AS [event_id],      [Extent1].[title] AS [title]  FROM [dbo].[events] AS [Extent1] This is really small – no additional fields in the SELECT clause. Nice, isn’t it? Horrible SQL Ayende Rahien’s blog shows us the darker side of Entity Framework 4.0 queries. You can find a comparison between NHibernate, LINQ To SQL and LINQ To Entities in the posting What happens behind the scenes: NHibernate, Linq to SQL, Entity Framework scenario analysis. In this posting I will show you the resulting query and let you think about how much better it could be done. Well, it is not something we want to see running on our servers. I hope that the EF team improves the generated SQL to an acceptable level before Visual Studio 2010 is released. There is also a moral to this example: you should always check out the queries that your O/R-mapper generates. Behind the curtains it may silently generate queries that perform badly, and in that case you need to optimize your data querying strategy. Conclusion Entity Framework 4.0 is a new product with a lot of new features and it is clear that not everything is 100% super in its first release. But it is still a great step forward and I hope that on 12.04.2010 we will have a new promising O/R-mapper available to use in our projects. If you want to read more about Entity Framework 4.0 and Visual Studio 2010 then please feel free to follow this link to the list of my Visual Studio 2010 and .NET Framework 4.0 postings.
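    To put the "always check the generated SQL" advice into practice on something less trivial than the query above, here is a minimal C# sketch in the same style. The Venue and Address navigation properties are hypothetical (they are not part of the post's Event model); the point is simply that once a LINQ To Entities query starts hopping over navigation properties, it is worth dumping the SQL with ToTraceString() before trusting it on a production server.

    using System.Data.Objects;   // ObjectQuery
    using System.Diagnostics;
    using System.Linq;

    // Hypothetical query over navigation properties - each hop typically adds a join.
    var query = from e in _context.Events
                where e.Venue.Address.City == "Tallinn"
                orderby e.Title
                select new { e.Id, e.Title, Venue = e.Venue.Name };

    // Cast to the non-generic ObjectQuery to inspect the SQL that EF 4.0 will send.
    Debug.WriteLine(((ObjectQuery)query).ToTraceString());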

    Read the article

  • Clean Code Developer & Certification in IT - MSCC 21.09.2013

    It was a very busy weekend this time, and quite hectic to organise the second meetup on a Saturday for the Mauritius Software Craftsmanship Community (MSCC), but it was absolutely fun. In the following, I'm writing a brief summary about the topics we spoke about and the new impulses I got. "What a meetup... I was positively impressed. At the beginning I thought that noone would actually show up but then by the time the room got filled. Lots of conversation, great dialogues and fantastic networking between fresh students, experienced students, experienced employees, and self-employed attendees. That's what community is all about!" The above quote was my first reaction shortly after the gathering. And despite being busy during the weekend and yesterday, I took my time to reflect a little bit on the things that happened and the statements made before writing it here on my blog. Additionally, I was also very curious about possible reactions and blogs from other attendees. Reactions from other craftsmen Let me quickly give you some links and quotes from others first... "Like Jochen posted on facebook, that was indeed a 5+ hours marathon (maybe 4 hours for me but still) … Wohoo! We’re indeed a bunch of crazy geeks who did not realise how time flew as we dived into the myriad discussions that sprouted. Yet in the end everyone was happy (:" -- Ish on MSCC meetup - The marathon (: "And the 4hours spent @ Talking drums bore its fruit..I was doing something I never did before....reading the borrowed book while walking....and though I was not that familiar with things mentionned in the book...I was skimming,scanning & flipping...reading titles...short paragraphs...and I skipped pages till I reached home." -- Yannick on Mauritius Software Craftsmanship 1st Meet-up "Hi Developers, Just wanted to share with you the meetups i attended last Saturday - [...] - The second meetup is the one hosted by Jochen Kirstätter, the MSCC, where the attendees were Craftsman, no woman, this time - all sharing the same passion of being a developer - even though it is on different platforms(Windows - Windows Phone - Linux - Adobe(yes a designer) - .Net) - but we manage to sit at the same table - sharing developer views and experience in the corporate world - also talking about good practice when coding( where Jochen initiated a discussion on Clean Coding ) i could not stay till the end - but from what i have heard - the longer you stay the more fun you have till 1600. Developers in the Facebook grouping i invite you to stay tuned about the various developer communities popping up - where you can come to share and learn good practices, develop the entrepreneurial spirit, and learn and share your passion about technologies" -- Arnaud on Facebook More feedback has been posted on the event directly. So, should I really write more? Wouldn't that spoil the impressions? Starting the day with a surprise Indeed, I was very pleased to stumble over the existence of Mobile Monday Mauritius on LinkedIn, an association about any kind of mobile app development, mobile gadgets and the latest smartphones on the market. Despite the Monday in their name they had scheduled their recent meeting on a Saturday between 10:00 and 12:00hrs. Wow, what a coincidence! Let's grab the bull by its horns and pay them an introductory visit. As they chose the Ebene Accelerator at the Orange Tower in Ebene it was a no-brainer to leave home a bit earlier and stop by. It was quite an experience and fun to talk to the geeks over there. 
Really looking forward to organising something together.... Arriving at the venue As the children got a bit uneasy at the MoMo gathering and I didn't want to disturb them too much, we arrived early at Bagatelle. Well, no problem, as we went for a decent breakfast at Food Lover's Market. Shortly afterwards we went to our venue location, Talking Drums, and prepared the room for the meeting. We only had to take a repro-painting off the wall in order to have a decent area for the projector. All went very smoothly and my two little ones were of great help. Just in time, our first craftsman Avinash arrived on the spot. And then the waiting started... Luckily, not for too long. Bit by bit more and more IT people came to join our meeting. Meanwhile, I used the time to give a brief introduction about the MSCC in general, what we are (hm, maybe I am) trying to achieve, and that the current phase is completely focused on creating more awareness that a community like the MSCC is active here in Mauritius. As soon as we reached a 'critical mass' of about ten people I asked everyone for a short introduction and bio, just in case... Conversation between participants started to kick in and we were actually doing more networking than focusing on our topics of the day. Quick updates on latest news and development around the MSCC Finally, Clean Code Developer No matter what the position is actually called, whether it is Software Engineer, Software Developer, Programmer, Architect, or Craftsman, anyone working in IT is facing almost the same obstacles. As for the process of writing software applications, there are recurring patterns and principles combined with some common exercises and best practices on how to resolve them. Initiated by the must-read book 'Clean Code' by Robert C. Martin (aka Uncle Bob), the concept of the Clean Code Developer (CCD) was born some years ago. CCD is much like traditional martial arts, where you create awareness of certain principles and learn how to apply practices to improve your style. The CCD initiative recommends indicating your level of knowledge and experience with coloured wrist bands - equivalent to the belt colours - for various reasons. Frankly speaking, I think that the biggest advantage here is provided by the obvious recognition of conceptual understanding. For example, take the situation of a team meeting... A member with a higher grade in CCD, say Green grade, sees that there are mainly Red grades to talk to, and adjusts her way of communication to their level of understanding. The choice of words might change as certain elements of CCD are not yet familiar to all team members. So instead of talking in an abstract way which only Green grades could follow, the whole scenario comes down to Red grade level. Different story, better results... Similar to learning martial arts, we only covered two grades during this occasion - black and red. Most interestingly, there was quite some positive feedback and lots of questions about the principles and practices of the red grade. And we gathered real-world examples from various craftsmen and discussed them. Following is the Clean Code Developer Red Grade with some annotations from our meetup: CCD Red Grade - Principles Don't Repeat Yourself - DRY Keep It Simple, Stupid (and Short) - KISS Beware of Optimisations! Favour Composition over Inheritance - FCoI Interestingly, most of the attendees had already heard those key words but couldn't really classify or categorize them. 
It's very similar to a situation in which you do not know the particular name for a thing and have to describe it to others... until someone tells you the actual name and suddenly all is very simple. CCD Red Grade - Practices Follow the Boy Scout Rule Root Cause Analysis - RCA Use a Version Control System Apply Simple Refactoring Pattern Reflect Daily Introduction to the principles and practices of Clean Code Developer - here: Red Grade As for the various ToDo's, we commonly agreed that the Boy Scout Rule clearly is not limited to software development or IT administration but applies to daily life in general. Same for the root cause analysis, btw. We really had good stories with surprising endings and conclusions. A quick check about who is using a version control system brought more drive into the conversation. Not only did we have people that aren't using any VCS at all, we also had the 'classic' approach of backup folders and naming conventions as well as the VCS 'junkie' that has to use multiple systems at a time. Just for the record: Git and GitHub seem to be the favourites of some of the attendees. Regarding the daily reflection at the end of the day we came up with an easy solution: wrap it up as a blog entry! Certifications in IT This is kind of a controversial topic in IT in general. Is it interesting to go for certifications or are they completely obsolete? What are the possibilities to get certified? What are the options we have in Mauritius? How would certificates compare to other educational tracks like Computer Science or Web Design? The ratio of craftsmen with certifications like MCP, MSTS, CCNA or LPI versus the ones without wasn't in favour of the first group, but there was high interest in the topic itself and some were really surprised to hear that exam preparation material is freely available online, including temporary voucher codes for either discounts or completely free exams. Furthermore, we discussed possible options for forming so-called study groups on specific certificates and organising more frequent meetups in order to learn together. Taking into consideration that we have sponsored access to the video course material of Pluralsight (and now PeepCode as well as TrainSignal), we might give it a try by the end of the year. Current favourites are LPIC Level 1 and one of the Microsoft exams 40-78x. Feedback and ideas for the MSCC The closing conversations and discussions about how the MSCC is currently doing, what the possibilities are and what's (hopefully) going to happen in the future were really fertile, and I made a couple of mental bullet points which I'm looking forward to tackling together with other craftsmen. Eventually, it might be a good option to elaborate on some issues during our weekly Code & Coffee sessions on a Wednesday morning. Active discussion on various IT topics like certifications (LPI, MCP, CCNA, etc) and sharing experience Finally, we made it till the end of the planned time. Well, actually the talk was still on and we continued even after 16:00hrs. Unfortunately, we (the children and I) had to leave for evening activities. My resume of the day... It was great to have 15 craftsmen in one room. There are hundreds of IT geeks out there in Mauritius, and as the Mauritius Software Craftsmanship Community we still have a lot of work to do to pass on the message to some more key players and companies. Currently, it seems that we are able to attract a good number of students in Computer Science... 
but we have a lot more to offer, even or especially for IT people on the job. I'm already looking forward to our next Saturday meetup in the near future. PS: Meetup pictures are courtesy of Nirvan Pagooah. Thanks for sharing...

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued consolidating and developing the InnoDB Full-Text Search feature. There is one recent improvement that is worth blogging about. It is a joint effort with the MySQL Optimizer team that simplifies the query plans of some common queries and dramatically shortens the query time. I will describe the issue, our solution and the end result with some performance numbers to demonstrate our efforts in continuing to enhance the Full-Text Search capability. The Issue: As we discussed in previous blogs, InnoDB implements the Full-Text index as inverted index auxiliary tables. The query, once parsed, is reinterpreted into several queries against the related auxiliary tables, and the results are then merged and consolidated to come up with the final result. So at the end of the query, we’ll have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL’s optimizer and query processing had initially been designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples: Case 1: Query result ordered by rank with only the top N results: mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1; In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve rankings for almost every row in the table, sort them and then come up with the top ranked result. This whole retrieve and sort is quite unnecessary given that InnoDB already has the answer. In a real life case, a user could have millions of rows, so in the old scheme it would retrieve millions of rows' rankings and sort them, even if our FTS already found that there are only 3 matched rows. Apparently, the million-ranking retrieval is done in vain. In the above case, it should just ask for the 3 matched rows' rankings; all other rows' rankings are 0. If it wants the top ranking, then it can just get the first record from our already sorted result. Case 2: SELECT COUNT(*) on matching records: mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE); In this case, the InnoDB search can find the matching rows quickly and will have all of them. However, before our change, in the old scheme, every row in the table was requested by MySQL one by one, just to check whether its ranking is larger than 0, and a count was computed afterwards. In fact, there is no need for MySQL to fetch all rows; InnoDB already has all the matching records. The only thing needed is to call an InnoDB API to retrieve the count. The difference can be huge. The following query output shows how big the difference can be: mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people')  +----------+ | count(*) | +----------+ | 666877 | +----------+ 1 row in set (16 min 17.37 sec) So the query took almost 16 minutes. Let’s see how long it takes InnoDB to come up with the result. 
In InnoDB, you can obtain extra diagnostic printout by turning on “innodb_ft_enable_diag_print”; this will print out extra query info in the error log: keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 2 secs: row(s) 666877: error: 10 ft_init() ft_init_ext() keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 3 secs: row(s) 666877: error: 10 The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. So a large amount of time was wasted on unneeded row fetching. The Solution: The solution is obvious. MySQL can skip some of its steps, optimize its plan and obtain useful information directly from InnoDB. Some of the savings from doing this include: 1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL query processing layer does not need to sort again to get the top matching results. 2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records. All those not in the result list have a ranking of 0 and do not need to be retrieved. And InnoDB has a count of the total matching records on hand; there is no need to recount. 3) Covered index scan. InnoDB results always contain the matching records' Document IDs and their rankings. So if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the record itself. 4) Narrow the search result early to reduce user table access. If the user wants to get the top N matching records, we do not need to fetch all matching records from the user table. We should be able to first select the top N matching Doc IDs, and then fetch only the corresponding records with these Doc IDs. Performance Results and comparison with MyISAM The result of this change is very obvious. I include six test results performed by Alexander Rubin just to demonstrate how fast the InnoDB query now becomes when compared with MyISAM Full-Text Search. These tests are based on English Wikipedia data of 5.4 million rows and an approximately 16G table. The test was performed on a machine with one dual-core CPU, an SSD drive, 8G of RAM and the InnoDB buffer pool set to 8 GB.
Table 1: SELECT with LIMIT CLAUSE
mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;
 | InnoDB | MyISAM | Times Faster
Time for the query | 1.63 sec | 3 min 26.31 sec | 127
You can see that for this particular query (retrieve top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.
Table 2: SELECT COUNT QUERY
mysql> select count(*) from si where match(si_title, si_text) against('family');
+----------+ | count(*) | +----------+ | 293955 | +----------+
 | InnoDB | MyISAM | Times Faster
Time for the query | 1.35 sec | 28 min 59.59 sec | 1289
In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour, which makes InnoDB about 1289 times faster! 
Table 3: SELECT ID with ORDER BY and LIMIT CLAUSE for selected terms
mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
Term | InnoDB (time to execute) | MyISAM (time to execute) | Times Faster
family | 0.5 sec | 5.05 sec | 10.1
family film | 0.95 sec | 25.39 sec | 26.7
Pizza restaurant orange county California | 0.93 sec | 32.03 sec | 34.4
President united states of America | 2.5 sec | 36.98 sec | 14.8
Table 4: SELECT title and text with ORDER BY and LIMIT CLAUSE for selected terms
mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
Term | InnoDB (time to execute) | MyISAM (time to execute) | Times Faster
family | 0.61 sec | 41.65 sec | 68.3
family film | 1.15 sec | 47.17 sec | 41.0
Pizza restaurant orange county california | 1.03 sec | 48.2 sec | 46.8
President united states of america | 2.49 sec | 44.61 sec | 17.9
Table 5: SELECT ID with ORDER BY and LIMIT CLAUSE for selected terms
mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
Term | InnoDB (time to execute) | MyISAM (time to execute) | Times Faster
family | 0.5 sec | 5.05 sec | 10.1
family film | 0.95 sec | 25.39 sec | 26.7
Pizza restaurant orange county califormia | 0.93 sec | 32.03 sec | 34.4
President united states of america | 2.5 sec | 36.98 sec | 14.8
Table 6: SELECT COUNT(*)
mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;
Term | InnoDB (time to execute) | MyISAM (time to execute) | Times Faster
family | 0.47 sec | 82 sec | 174.5
family film | 0.83 sec | 131 sec | 157.8
Pizza restaurant orange county califormia | 0.74 sec | 106 sec | 143.2
President united states of america | 1.96 sec | 220 sec | 112.2
Again, Tables 3 to 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin. It becomes obvious that InnoDB has a great advantage over MyISAM in handling large data searches. Summary: These results demonstrate the great performance we can achieve by making the MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases where InnoDB’s result info has not been fully taken advantage of, which means we still have great room to improve. We will continue to explore this area and get more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: “Name some benefits of object-oriented development.” Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: “it helps ease code reuse.” Of course, this is a gross oversimplification. Tools only ease reuse; it's developers who ultimately make code reusable or not, regardless of the language or methodology. But it did get me thinking… we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand the test of time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years. It’s not that I don’t like reuse – it’s just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions. Now don’t get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse! Furthermore, the code that is reusable will often fail to be reused if you don’t have the proper framework in place for effective reuse that includes standardized versioning, building, releasing, and documenting the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable or you will be exerting a large amount of development effort for very little bang for your buck. But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let’s look at an extension method. There are many times where I want to kick off a thread to handle a task, then when I want to rein that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:
    public static class ThreadExtensions
    {
        public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
        {
            bool isJoined = false;

            if (thread != null)
            {
                isJoined = thread.Join(timeToWait);

                if (!isJoined)
                {
                    thread.Abort();
                }
            }
            return isJoined;
        }
    }
    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.  
Some of them are standard OO principles, and some are kind-of home grown litmus tests: Single Responsibility Principle (SRP) – The only reason this extension method needs to change is if the Thread class itself changes (one responsibility). Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and in itself is very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes. Open-Closed Principle (OCP) – This class is inherently closed to change. Small and Stable Problem Domain – This method only cares about System.Threading.Thread. All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality. That’s not to say they must use every method, but they shouldn’t be using a method just to get half of its result. Cost of Reuse vs. Cost to Recreate – since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as “ready-to-go” and already unit tested (important!) and available through a standard release cycle (very important!). Okay, all seems good there, now let's look at an entity and DAO. I don’t know about you all, but there have been times I’ve been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it’s near ludicrous for anything large or volatile.
namespace Shared.Entities
{
    public class Account
    {
        public int Id { get; set; }

        public string Name { get; set; }

        public Address HomeAddress { get; set; }

        public int Age { get; set; }

        public DateTime LastUsed { get; set; }

        // etc, etc, etc...
    }
}

...

namespace Shared.DataAccess
{
    public class AccountDao
    {
        public Account FindAccount(int id)
        {
            // dao logic to query and return account
        }

        ...

    }
}
Now to be fair, I’m not saying there doesn’t exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let’s examine those same tests: Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of the account is, which can change over time and may have multiple influences depending on the number of systems an account can cover. Stable Dependencies Principle (SDP) – This method depends on the data model beneath itself, which also is largely dependent on the business definition of an account, which can be inherently unstable. Open-Closed Principle (OCP) – This class is not really closed for modification. Every time the account definition may change, you’d need to modify this class. Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources? All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for? Neither is really a great solution. 
Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate them as needed. It’s no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to NHibernate and Entity Framework for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc). As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn’t “smell” right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple, “Reuse is hard...” And that’s when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often times you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn’t. Think about the times you’ve worked in a company where, in design sessions, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, “yes, this is the best design, I can extend it easily!” only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult and many times ends up being a futile exercise and worse yet, sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability. So what do I think are reusable components? Generic Utility classes – these tend to be small classes that assist in a task and have no business context whatsoever. Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc). Simplification and Uniformity Frameworks – To some extent this is similar to an abstraction framework, but there may be one chosen provider and a development shop mandate to perform certain complex tasks in a certain way. Or, perhaps, to simplify and dumb-down a complex task for the average developer (such as implementing a particular development shop’s method of encryption). And what are less reusable? Application and Business Layers – these tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency. They may be reused across applications but are also very volatile. 
Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable. So what’s the big lesson? Reuse is hard. In fact it’s damn hard. And much of the time I’m not convinced we should focus too hard on it. If you’re designing a utility or framework, then by all means design it for reuse. But you must also really set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don’t waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.
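    As a quick usage illustration of the JoinOrAbort extension from the post above, here is a minimal, self-contained C# sketch; it assumes the ThreadExtensions class is in scope, and the worker body is just a stand-in for a real task.

    using System;
    using System.Threading;

    class Program
    {
        static void Main()
        {
            var worker = new Thread(() =>
            {
                // stand-in for a real long-running task
                Thread.Sleep(TimeSpan.FromSeconds(30));
            });
            worker.Start();

            // Wait up to five seconds; if the thread has not finished by then, it is aborted.
            bool finished = worker.JoinOrAbort(TimeSpan.FromSeconds(5));
            Console.WriteLine(finished ? "Worker finished in time." : "Worker was aborted.");
        }
    }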

    Read the article

  • Simple Mouse Move Event in F# with Winforms

    - by MarkPearl
    This evening I had the pleasure of reading one of ThomasP's blog posts on first class events. It was an excellent read, and I thought I would make a brief derivative of his post to explore some of the basics. In Thomas's post he has a form with an ellipse on it; when he clicks on the ellipse it pops up a message box with the button clicked… awesome. Something that got me in the post though was code similar to the one below… // React to Mouse Move events on the form let evtMessages = frm.MouseMove |> Event.map (fun mi -> mi.Location.ToString()) |> Event.map (sprintf "Hey, you clicked on the ellipse.\nUsing: %s") |> Event.add (MessageBox.Show >> ignore) The MessageBox is a function with a string passed into it. What if I wanted to rather change a mutable value holder instead, how would the syntax go for that? Immediately the thought of anonymous functions came to me. I've used them before to do something like this… let HelloPerson personName = "Hello " + personName |> fun(x) -> Console.WriteLine(x) So using the same approach I adapted the event code to, instead of showing a message box with a string passed into it, rather change the form's header. |> Event.map (sprintf "Your mouse position is %s") |> Event.add(fun(x) -> frm.Text <- x) Okay… it looks a bit weird with the -> x <- syntax, but it makes sense and works… The next thing I wanted to do was change Thomas's code sample from having an ellipse, and reacting to the position of the mouse and click, to rather trigger the event whenever the mouse moved. This simply involved removing some filtering code. Finally I wanted the code to work as an F# project without having to run it through F# Interactive. To achieve this I just needed to find out how to trigger the window event loop. This can be achieved with the code below… // Program eventloop while frm.Created do Application.DoEvents()   So let's look at the complete code sample… #light open System open System.Drawing open System.Windows.Forms // Create the main form let frm = new Form(ClientSize=Size(600,400)) // React to Mouse Move events on the form let evtMessages = frm.MouseMove |> Event.map (fun mi -> mi.Location.ToString()) |> Event.map (sprintf "Your mouse position is %s") |> Event.add(fun(x) -> frm.Text <- x) // Show the form frm.Show() // Program eventloop while frm.Created do Application.DoEvents()

    Read the article

  • LINQ: Enhancing Distinct With The SelectorEqualityComparer

    - by Paulo Morgado
    On my last post, I introduced the PredicateEqualityComparer and a Distinct extension method that receives a predicate to internally create a PredicateEqualityComparer to filter elements. Using the predicate greatly improves the readability, conciseness and expressiveness of the queries, but it can be even better. Most of the time, we don’t want to provide a comparison method but just to extract the comparison key for the elements. So, I developed a SelectorEqualityComparer that takes a method that extracts the key value for each element. Something like this: public class SelectorEqualityComparer<TSource, Tkey> : EqualityComparer<TSource> where Tkey : IEquatable<Tkey> { private Func<TSource, Tkey> selector; public SelectorEqualityComparer(Func<TSource, Tkey> selector) : base() { this.selector = selector; } public override bool Equals(TSource x, TSource y) { Tkey xKey = this.GetKey(x); Tkey yKey = this.GetKey(y); if (xKey != null) { return ((yKey != null) && xKey.Equals(yKey)); } return (yKey == null); } public override int GetHashCode(TSource obj) { Tkey key = this.GetKey(obj); return (key == null) ? 0 : key.GetHashCode(); } public override bool Equals(object obj) { SelectorEqualityComparer<TSource, Tkey> comparer = obj as SelectorEqualityComparer<TSource, Tkey>; return (comparer != null); } public override int GetHashCode() { return base.GetType().Name.GetHashCode(); } private Tkey GetKey(TSource obj) { return (obj == null) ? (Tkey)(object)null : this.selector(obj); } } Now I can write code like this: .Distinct(new SelectorEqualityComparer<Source, Key>(x => x.Field)) And, for improved readability, conciseness and expressiveness, and support for anonymous types, here is the corresponding Distinct extension method: public static IEnumerable<TSource> Distinct<TSource, TKey>(this IEnumerable<TSource> source, Func<TSource, TKey> selector) where TKey : IEquatable<TKey> { return source.Distinct(new SelectorEqualityComparer<TSource, TKey>(selector)); } And the query is now written like this: .Distinct(x => x.Field) For most usages, it’s simpler than using a predicate.
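    As a quick usage sketch (assuming the SelectorEqualityComparer and the Distinct overload above are in scope), the key-selector version works nicely with anonymous types, which was one of the goals:

    using System;
    using System.Linq;

    class Demo
    {
        static void Main()
        {
            var people = new[]
            {
                new { Name = "Ana", City = "Lisboa" },
                new { Name = "Rui", City = "Lisboa" },
                new { Name = "Marta", City = "Porto" }
            };

            // One element is kept per distinct City - the comparer only looks at the selected key,
            // so Rui is dropped because Lisboa has already been seen.
            foreach (var person in people.Distinct(p => p.City))
            {
                Console.WriteLine("{0} ({1})", person.Name, person.City);
            }
        }
    }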

    Read the article

  • BPM ADF Task forms. Checking whether the current user is in a BPM Swimlane

    - by Christopher Karl Chan
    Focus So this blog entry will focus on BPM swimlane roles and users from an ADF context. So we have an ADF Task Details Form and we are in the process of making it richer and more dynamic in functionality. A common requirement could be to dynamically show different areas based on the user logged into the workspace. Perhaps we even want to know what swimlane role the user belongs to. It is a little bit harder to achieve than one thinks, unless you know the trick. The Challenge The tricky part here is that the ADF Task Details Form is in fact part of a separate J2EE application to the main workspace. So if you try to use Java or Expression Language to get the logged in user you will only find anonymous and none of the BPM roles you would be expecting. So what to do? The Magic First add the BC4J Security library to your view project. Then restart JDeveloper. Now find the web.xml file in the view project of your ADF Task Details Application and look for the JpsFilter section. Then add in the following section. <init-param> <param-name>application.name</param-name> <param-value>OracleBPMProcessRolesApp</param-value> </init-param> This will link your application to that of the BPM workspace. Then in the dynamic part of your ADF form you can now check whether the user logged into the BPM Workspace belongs to a BPM swimlane in any BPM process. The best way to do this is by using Expression Language in the JSF page itself. Here I am simply changing the rendered flag to either true or false and thereby hiding or showing a section. Perhaps you are re-using the same form for a task in an approver swimlane and an ordinary user swimlane, and we only want the approver to see this field. So call the built-in function to check if the user is a member of the BPM swimlane role. The name of the role must be of the syntax BPMProject.RoleName <af:outputText value="This will only be rendered when the user is part of the BPM swimlane role" rendered="#{securityContext.userInRole['BPMProjectName.Rolename']}"/> Now you must redeploy your ADF Task Form project. Now (in the image above) the text will ONLY get rendered in the Task Details Form if the user logged into the workspace is a member of the swimlane Unsecure of the BPM project SimpleTask

    Read the article

  • Summary of the Solaris 11 webcast's livechat QnA session

    - by Karoly Vegh
    This is a followup post to the previous summary on the "What's new with Solaris 11 since the launch" webcast. That webcast has had a chatroom for a live Questions and Answers session running. I went through the archive of those and compiled a list of some of the (IMHO) most relevant and most frequently asked questions, I'd like to share. This is the first part, covering the QnA of Session I and II of the webcast, in a followup post we can have a look of the rest of the sessions if required - let me know in the comments. Also, should you have questions, as usual, feel free to ask those there, too.  ...and here come the answered questions:  When will Exadata be based on Solaris in place of Oracle Enterprise Linux?Exadata offers both Solaris 11 or Oracle Enterprise Linux.  The choice can be made at deployment time based on your OS needs.What are all other benefits and futures avilable in solaris 11 (cloud O.S.) compared to cloud based Red Hat Linux and Windows?suggest you check out our cloud white paper for a view of this. Also the OTN Solaris 11 page has some good articles. Here are the links:  http://www.oracle.com/technetwork/server-storage/solaris11/documentation/o11-106-sol11-cloud-501066.pdf http://www.oracle.com/technetwork/server-storage/solaris11/overview/index.htmlWill 11.1 have a more complete IPS respository for Oracle and FOSS software?Yes, we are adding additional packages to the various package repositories. Since Solaris 11 was launched, both the Oracle Solaris Studio tools as well as Oracle Solaris Cluster have been made available along with numerous new FOSS packages. We will continue to be adding additional Oracle products and open source packages in the future. Will Exadata be based on Sparc in place of intel-amd x86 in next future ?We can't publically discuss futures, but we actually have a SPARC version of Exadata today, it's called SuperCluster, this is such a powerfull multipurpose system that it actually have multiple personalities built into one system: Exadata, Exalogic, and it can be a general purpose platform if you want. Have I understood this right? Livepatching KSplice-style is coming to Solaris 11 too?We're looking at that for certain types of Solaris patches in the future.Will there be a security framework like SST/JASS for Solaris 11?We can't talk about the future projects on a public forum, but we recognize the need for SST/JASS and want to address this as soon as possible. On the other side there are a whole bunch of "best practices" that are now embedded into Solaris 11 by default, so out of the box Solaris 11 should already address part of what SST/JASS gave you. (For example we did a lot of work on improving the auditing performance so that we can now have it turned on by default). On x86 can install VirtualBox in a Zone and use that to host other OSes.Yes, this was one of the first things we made sure would work when we acquired VirtualBox when we were still Sun Microsystems. If I have a Solaris 11 Control Domain on a T-series, can I run a Solaris 10 Ldom with Solaris 8 branded containers?Yes, you can.Is Oracle Solaris free or do we need to purchase?Solaris is free, the entitlement to run it comes either with a Sun system (new or historical) or for 3rd party systems the entitlement comes with a support contract. Note that for production use you will be expected to get a support contract. If you don't want to use the Solaris system (Sun or 3rd party) for production use (i.e. 
development) you can get an OTN license on the Oracle Technical Network website. Will encryption and deduplication both work on a share?This should work at the same time. What approaches does Solaris use to monitor usage?There are many different tools in Solaris to monitor usage. The main ones are the "stats" (vmstat, mpstat, prstat, ...), the kstat interface, and DTrace (to get details you couldn't see before). And then there are layered tools that can interface with these tools (Ops Center, BMC, CA, Tivoli, ...) Apart little-endian, big-endian how is it easy to port Solaris applications on Sparc to x86 and vice-versa ?Very easy. Except for certain hardware specific applications (those that utilize hardware specific drivers), all of the same Oracle Solaris APIs exist for all architectures. Is IPS based patching aware of the fact that zones can reside on ZFS and move from one physical server to another ?IPS is definitely aware of zones and uses ZFS to support boot environments for non-global zones in the same way that's used for the global zone. With respect to moving a zone from one physical server to another, Solaris 11 supports to the same zone attach/deattach method that was introduced in Solaris 10. Is vnic support in Ldoms planned?This is currently being investigated for a future LDOM release. Is it possible with the new patching system to build a system later with the same patch level as a system built a few months earlier?Yes, you can choose/define exactly which version should go to the system and it will always put the same bits in place. The technical answer is that you choose the version of the "entire" package you want on the system and the rest flows from there. Is it in the plans to allow zones to add/remove zpools to running zones dynamically in future updates?Work in this area is currently under investigation. Any plans to realese Solaris 11 source code? i.e. opensolaris?We currently can't comment on publicly releasing the source code. If you need/want this access please let your Oracle account team know. What about VirtualBox and Solaris11 for virtualization?Solaris 11 works great with VirtualBox, as both a client and a host system. Will Oracle DB software eventually be supplied as IPS packages? When?We don't have a date yet but this is actively being worked on. What are the new artifacts in Oracle Solaris 11 than the previous versions?There are quite a few actually. The best start is to look at our "Evaluate Solaris 11" page, and there you also can find a Transition Guide. http://www.oracle.com/technetwork/server-storage/solaris11/overview/evaluate-1530234.html So, this seems just like RedHat's YUM environment?IPS offers certain features beyond those in YUM or other packaging systems. For example, IPS works with ZFS and Solaris Boot Environments to provide a safe environment for software lifecycle management so that changes can be reverted by switching to an older boot environment. With Zones on solaris 11, can I do paravirtualitation?The great thing about zones is you don't *need* paravirtualization. You're making the same direct kernel calls that you would outside of a zone.  It's an incredibly significant performance win over hypervisor-based virtualization. Are zones/containers officially supported to run Oracle Databases?  
EBIZ?Hi Calvin, the answer is yes, here is the support matrix for DB:  http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html I've found some nasty bugs in Solaris 11 (one of which today) that have been fixed in community forks (i.e., Illumos). Will Oracle ever restart collaboration with the community?We continue to work with the community, just not as open on all projects as we did before (For example IPS is an open project) and the source of more than half of the Solaris packages is posted on our opensource websites. I can't comment on what we will do in the future. And with regards to bugs please file them through the support organization and we will get them resolved. Is zpool vdev removal on-the fly now possible ?This issue is actively being investigated although we don't have a date for when this feature will be available. Is pgstat now the official replacement for corestat ?It's intended to provide similar functionality Where are the opensource website?For Oracle Solaris, visit http://www.oracle.com/technetwork/opensource/systems-solaris-1562786.html As a cloud-scale virtualization, is it going to be easier to move zones between machines? maybe even automatic in case of a hardware failure?Hi Gashaw, we already have customers that have implemented what they refer to as "flying zones" that they can move around very easily. They use Solaris Cluster to do this. What about VMware vMotion like feature?We have secure live migration with both Logical Domains on SPARC T series systems, and with Oracle VM on x86 systems. When running Solaris 10/11 on an enterprise server with a lot of zones, what are best practises commands to show the system is running fine? (has enough hardware resources). For example CPU / Memory / I/O / system load. What are the recommended values?For Solaris 11, look into the new zonestat(1M) command that provides a great deal of information about zone utilization. In addition, there is new work underway in providing additional observability in areas such as per-zone file system I/O. Java optimizations done with Solaris 11? For X86 platforms too? Where can I find more detail about this?There is lots of work that go into optimizing Java for Oracle Solaris 10 & 11 on both SPARC and x86. See http://www.oracle.com/technetwork/articles/servers-storage-dev/solarisforjavadevelop-168642.pdf What is meant by "ZFS Shadow Migration"?It's a way to migrate data from another file system to ZFS: http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-3.html Is flash archive available with S11?Flash archive is not.  There is a procedure for disaster recovery, and we're working on a modern archive-based deployment tool for a future update.  The disaster recovery tool is here: http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-091-sol-dis-recovery-489183.html  You can also use Distribution Constructor to build common golden images. Will solaris 11 be available on the ODA soon?The idea's under evaluation -- we'll share your interest with the team. What steps can be taken to ensure that breaches of security are identified quickly?There are a number of tools, including the "bart" tool and "pkg verify" to ensure that software has not been compromised.  Solaris Audit can also be used to detect unauthorized access.  You can also use Immutable Zones to protect against compromise.  There are a wide variety of security tools, and I've covered only a few. 
What is the relation from solaris to java 7 speed optimization?There is constant work done between the Oracle Solaris and Java teams on performance optimizations. See http://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html for examples. What is the difference in the Solaris 11 installation compared to solaris 10 ? where i can find the document describing basic repository concepts ?The best place to start is: http://www.oracle.com/technetwork/server-storage/solaris11/index.html Hope you found the post useful. For questions, input, requests for the second half of the QnA, please find the comment section below.  -- charlie  

    Read the article

  • Our winners- and some BBQ for everyone

    - by Steve Tunstall
    Congrats to our two winners for the first two comments on my last entry. Steve from Australia and John Lemon. Steve won since he was the first person over the International Date Line to see the post I made so late after a workday on Friday. So not only does he get to live in a country with the 2nd most beautiful women in the world, but now he gets some cool Oracle Swag, too. (Yes, I live on the beach in southern California, so you can guess where 1st place is for that other contest…Now if Steve happens to live in Manly, we may actually have a tie going…) OK, ok, for everyone else, you can be winners, too. How you ask? I will make you the envy of every guy and gal in your neighborhood or campsite. What follows is the way to smoke the best ribs you or anyone you know have ever tasted. Follow my instructions and give it a try. People at your party/cookout/campsite will tell you that they’re the best ribs they’ve ever had, and I will let you take all the credit. Yes, I fully realize this post is going to be longer than any post I’ve done yet. But let’s get serious here. Smoking meat is much more important, agreed? J In all honesty, this is a repeat of another blog I did, so I’m just copying and pasting. Step 1. Get some ribs. I actually really like Costco’s pack. They have both St. Louis and Baby Back. (They are the same ribs, but cut in half down the sides. St. Louis style is the ‘front’ of the ribs closest to the stomach, and ‘Baby back’ is the part of the ribs where is connects to the backbone). I like them both, so here you see I got one pack of each. About 4 racks to a pack. So these two packs for $25 each will feed about 16-20 of my guests. So around 3 bucks a person is a pretty good deal for the best ribs you’ll ever have. Step 2. Prep the ribs the night before you’re going to smoke. You need to trim them to fit your smoker racks, and also take off the membrane and add your rub. Then cover and set in fridge overnight. Here’s how to take off the membrane, which will not break down with heat and smoke like the rest of the meat, so must be removed. Use a butter knife to work in a ways between the membrane and the white bone. Just enough to make room for your finger. Try really hard not to poke through the membrane, you want to keep it whole. See how my gloved fingers can now start to lift up and pull off the membrane? This is what you are trying to do. It’s awesome when the whole thing can come off at once. This one is going great, maybe the best one I’ve ever done. Sometime, it falls apart and doesn't come off in one nice piece. I hate when that happens. Now, add your rub and pat it down once into the meat with your other hand. My rub is not secret. I got it from my mentor, a BBQ competitive chef who is currently ranked #1 in California and #3 in the nation on the BBQ circuit. He does full-day classes in southern California if anyone is interested in taking his class. Go to www.slapyodaddybbq.com to check him out. I tweaked his run recipe a tad and made my own. It’s one part Lawry’s, one part sugar, one part Montreal Steak Seasoning, one part garlic powder, one-half part red chili powder, one-half part paprika, and then 1/20th part cayenne. You can adjust that last ingredient, or leave it out. Real cheap stuff you can get at Costco. This lets you make enough rub to last about a year or two. Don’t make it all at once, make a shaker’s worth and use it up before you make more. Place it all in a bowl, mix well, and then add to a shaker like you see here. 
You can get a shaker with medium sized holes on it at any restaurant supply store or Smart & Final. The kind you see at pizza places for their red pepper flakes works best. Now cover and place in fridge overnight. Step 3. The next day. Ok, I’m ready to go. Get your stuff together. You will need your smoker, some good foil, a can of peach nectar, a bottle of Agave syrup, and a package of brown sugar. You will need this stuff later. I also use a clean spray bottle, and apple juice. Step 4. Make your fire, or turn on your electric smoker. In this example I’m using my portable charcoal smoker. I got this for only $40. I then modified it to be useful. Once modified, these guys actually work very well. Trust me, your food DOES NOT KNOW how expensive your smoker is. Someone who tells you that you need to spend a bunch of money on a smoker is an idiot. I also have an electric smoker that stays in my backyard. It’s cleaner and larger so I can smoke more food. But this little $40 one works great for going camping. Here is what my fire-bowl looks like. I leave a space in the middle open, and place cold charcoal and wood chucks in a circle going outwards. This makes it so when I dump the hot coals down the middle, they will slowly burn outwards, hitting different wood chucks at different times, allowing me to go 4-5 hours without having to even touch my fire. For ribs, I use apple and pecan wood. Pecan works for anything. Apple or any fruit wood is excellent for pork. So now I make my hot charcoal with a chimney only about half-full. I found a great use for that side-burner on my grill that I never use. It makes a fantastic chimney starter. You never use fluids of any kind, nor ever use that stupid charcoal that has lighter fluid built into it. Never, ever, ever. Step 5. Smoke. Add your ribs in the racks and stack them up in your smoker. I have a digital thermometer on a probe that I use to keep track of the temp in the smoker. I just lay the probe on the top rack and shut the lid. This cheap guy is a little harder to maintain the right temperature of around 225 F, so I do have to keep my eye on it more than my electric one or a more expensive charcoal one with the cool gadgets that regulate your temp for you. Every hour, spray apple juice all over your ribs using that spray bottle. After about 3 hours, you should have a very good crust (called the Bark) on your ribs. Once you have the Bark where you want it, carefully remove your ribs and place them in a tray. We are now ready for a very important part to make the flavor. Get a large piece of foil and place one rib section on it. Splash some of the peach nectar on it, and then a drizzle of the Agave syrup. Then, use your gloved hand to pack on some brown sugar. Do this on BOTH sides, and then completely wrap it up TIGHT in the foil. Do this for each rib section, and then place all the wrapped sections back into the smoker for another 4 to 6 hours. This is where the meat will get tender and flavorful. The first three hours is only to make the smoke bark. You don’t need smoke anymore, since the ribs are wrapped, you only need to keep the heat around 225 for the next 4-6 hours. Obviously you don’t spray anymore. Just time and slow heat. Be patient. It’s actually really hard to overdo it. You can let them go longer, and all that will happen is they will get even MORE tender!!! If you take them out too soon, they will be tough. How do you know? Take out one package (use long tongs) and open it up. 
If you grab a bone with your tongs and it just falls apart and breaks away from the rest of the meat, you are done!!! Enjoy!!! Step 6. Eat. It pulls apart like this when it’s done. By the way, smoking tri-tip is way easier. Just rub it with the same rub, and put in your smoker for about 2.5 hours at 250 F. That’s it. Low-maintenance. It comes out like this, with a fantastic smoke ring and amazing flavor. Thanks, and I will put up another good tip, about the ZFSSA, around the end of November. Steve 

    Read the article

  • Credentials Not Passed From SharePoint WebPart to WCF Service

    - by Jacob L. Adams
    I have spent several hours trying to resolve this problem, so I wanted to share my findings in case someone else might have the same problem. I had a web part which was calling out to a WCF service on another server to get some data. The code I had was essentially using System.ServiceModel; using System.ServiceModel.Channels; ... var binding = new CustomBinding( new HttpTransportBindingElement { AuthenticationScheme = System.Net.AuthenticationSchemes.Negotiate } ); var endpoint = new EndpointAddress(new Uri("http://someotherserver/someotherservice.svc")); var someOtherService = new SomeOtherServiceClient(binding, endpoint); string result = someOtherService.SomeServiceMethod(); This code would run fine on my local instance of SharePoint 2010 (Windows 7 64-bit). However, when I would deploy it to the testing environment, I would get a yellow screen of death with the following message: The HTTP request is unauthorized with client authentication scheme 'Negotiate'. The authentication header received from the server was 'Negotiate,NTLM'. I then went through the usual checklist of Windows Authentication problems: Check WCF bindings to make sure authentication is set correctly Check IIS to make sure Windows Authentication is enabled and anonymous authentication was disabled. Check to make sure the SharePoint server trusted the server hosting the WCF service Verify that the account that the IIS application pool is running under has access to the other server I then spent a lot of time digging into really obscure IIS, machine.config, and trust settings (as well as lots of time on Google and StackOverflow). Eventually I stumbled upon a blog post by Todd Bleeker describing how to run code under the application pool identity. Wait, what? The code is not already running under the application pool identity? Another quick Google search led me to an MSDN page that implies that SharePoint indeed does not run under the app pool credentials by default. Instead SPSecurity.RunWithElevatedPrivileges is needed to run code under the app pool identity. Therefore, changing my code to the following worked seamlessly: using System.ServiceModel; using System.ServiceModel.Channels; using Microsoft.SharePoint; ... var binding = new CustomBinding( new HttpTransportBindingElement { AuthenticationScheme = System.Net.AuthenticationSchemes.Negotiate } ); var endpoint = new EndpointAddress(new Uri("http://someotherserver/someotherservice.svc")); var someOtherService = new SomeOtherServiceClient(binding, endpoint); string result; SPSecurity.RunWithElevatedPrivileges(()=> { result = someOtherService.SomeServiceMethod(); });
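A quick way to confirm the diagnosis above (this is my own hedged sketch, not part of the original post) is to log which Windows identity the web part code actually runs under, before and after elevation. Outside the elevated block you would normally see the site visitor (or the anonymous token once the double hop to the remote WCF service is attempted); inside it you should see the IIS application pool account.

    using System.Diagnostics;
    using System.Security.Principal;
    using Microsoft.SharePoint;

    public static class IdentityProbe
    {
        // Writes which Windows identity the web part code runs under, before and after elevation.
        public static void DumpIdentities()
        {
            // Usually the authenticated site visitor.
            Debug.WriteLine("Before elevation: " + WindowsIdentity.GetCurrent().Name);

            SPSecurity.RunWithElevatedPrivileges(() =>
            {
                // Should now report the IIS application pool account.
                Debug.WriteLine("After elevation: " + WindowsIdentity.GetCurrent().Name);
            });
        }
    }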

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-21

    - by Bob Rhubart
    The Real Architects of Los Angeles: OTN Architect Day in LA - Oct 25 No gossip. No drama. No hair pulling. Just a full day of technical sessions and peer interaction focused on using Oracle technologies in today's cloud and SOA architectures. The event is free, but seating is limited, so register now. Thursday October 25, 2012. 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Why IT is a profession in 'flux' | ZDNet I usually don't post two items from the same person in one day, but this post from ZDNet blogger Joe McKendrick deals with some critical issues affecting those in IT. As McKendrick puts it: "IT professionals are under considerable pressure to deliver more value to the business, versus being good at coding and testing and deploying and integrating." Cloud, automation drive new growth in SOA governance market | ZDNet "SOA governance tools and processes learned over the past decade are now underpinning cloud projects as they scale across enterprises," reports Joe McKendrick. But there remains a lack of understanding about SOA Governance. For a broader discussion of the importance of IT governance check out the latest OTN ArchBeat Podcast: By Any Other Name: Governance and Architecture How I Simplified the Installation of Oracle Database on Oracle Linux 6 Michele Casey's update of Ginny Henningsen's original article. This version shows how to simplify the installation of Oracle Database 11g on Oracle Linux 6 by installing the oracle-rdbms-server-11gR2-preinstall RPM package. Fault Handling Slides and Q&A | Ronald van Luttikhuizen Oracle ACE Director Ronald van Luttikhuizen shares the slides and a Q&A transcript from a presentation he and fellow ACE Director Guido Schmutz gave at the recent Oracle OpenWorld and JavaOne preview event organized by AMIS Technology. BPM ADF Task forms. Checking whether the current user is in a BPM Swimlane | Christopher Karl Chan "The tricky part here is that the ADF Task Details Form is in fact part of a separate J2EE application to the main workspace," says Oracle Fusion Middleware A-Team member Christopher Karl Chan. "So if you try to use Java or Expression Language to get the logged in user you will only find anonymous and none of the BPM Roles you will be expecting. So what to do?" Don't guess—read the post and find out. Thought for the Day "The speed of change makes you wonder what will become of architecture." — Tadao Ando Source: BrainyQuote

    Read the article

  • Using INotifyPropertyChanged in background threads

    - by digitaldias
    Following up on a previous blog post where I exemplify databinding to objects, a reader was having some trouble with getting the UI to update. Here's the rough UI: The idea is, when pressing Start, a background worker process starts ticking at the specified interval, then proceeds to increment the databound Elapsed value. The problem is that event propagation is limited to the current thread, meaning, you fire an event in one thread, then other threads of the same application will not catch it. The Code behind So, somewhere in my ViewModel, I have a corresponding method Start that initiates a background worker, for example: public void Start( ) { BackgroundWorker backgroundWorker = new BackgroundWorker( ); backgroundWorker.DoWork += IncrementTimerValue; backgroundWorker.RunWorkerAsync( ); } protected void IncrementTimerValue( object sender, DoWorkEventArgs e ) { do { if( this.ElapsedMs == 100 ) this.ElapsedMs = 0; else this.ElapsedMs++; }while( true ); } Assuming that there is a property: public int ElapsedMs { get { return _elapsedMs; } set { if( _elapsedMs == value ) return; _elapsedMs = value; NotifyThatPropertyChanged( "ElapsedMs" ); } } The above code will not work. If you step into this code in debug, you will find that INotifyPropertyChanged is called, but it does so in a different thread, and thus the UI never catches it, and does not update. One solution Knowing that the background thread updates the ElapsedMs member gives me a chance to activate the BackgroundWorker class' progress reporting mechanism to simply alert the main thread that something has happened, and that it is probably a good idea to refresh the ElapsedMs binding. public void Start( ) { BackgroundWorker backgroundWorker = new BackgroundWorker( ); backgroundWorker.DoWork += IncrementTimerValue; // Listen for progress report events backgroundWorker.WorkerReportsProgress = true; // Tell the UI that ElapsedMs needs to update backgroundWorker.ProgressChanged += ( sender, e ) => { NotifyThatPropertyChanged( "ElapsedMs" ); }; backgroundWorker.RunWorkerAsync( ); } protected void IncrementTimerValue( object sender, DoWorkEventArgs e ) { do { if( this.ElapsedMs == 100 ) this.ElapsedMs = 0; else this.ElapsedMs++; // report any progress ( sender as BackgroundWorker ).ReportProgress( 0 ); }while( true ); } What happens above now is that I've used the BackgroundWorker cross thread mechanism to alert me when it is ok for the UI to update its ElapsedMs field. Because the property itself is being updated in a different thread, I'm removing the NotifyThatPropertyChanged call from its Set method, and moving that responsibility to the anonymous method that I created in the Start method. This is one way of solving the issue of having a background thread update your UI. I would be happy to hear of other cross-threading mechanisms for working in an MVP/MVC/MVVM pattern.
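One alternative cross-threading mechanism (my own hedged sketch, not from the original post) is to capture the UI thread's SynchronizationContext when the view model is constructed and post every change notification through it. The property setter then stays free of threading code, and the same view model works regardless of which thread mutates it. The sketch below assumes the view model is constructed on the UI thread.

    using System.ComponentModel;
    using System.Threading;

    public class TimerViewModel : INotifyPropertyChanged
    {
        // Captured at construction time; assumed to be the UI thread's context.
        private readonly SynchronizationContext _uiContext = SynchronizationContext.Current;
        private int _elapsedMs;

        public event PropertyChangedEventHandler PropertyChanged;

        public int ElapsedMs
        {
            get { return _elapsedMs; }
            set
            {
                if (_elapsedMs == value) return;
                _elapsedMs = value;
                NotifyThatPropertyChanged("ElapsedMs");
            }
        }

        protected void NotifyThatPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler == null) return;

            var args = new PropertyChangedEventArgs(propertyName);

            // Raise directly if we are already on the UI thread, otherwise marshal the call.
            if (SynchronizationContext.Current == _uiContext)
                handler(this, args);
            else
                _uiContext.Post(_ => handler(this, args), null);
        }
    }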

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure.  Below is a short-summary of just a few of them: New Admin Portal and Command Line Tools Today’s release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way.  It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which like the portal call the REST Management APIs) to make it even easier to script and automate your administration tasks.  We are offering both a Powershell (for Windows) and Bash (for Mac and Linux) set of tools to download.  Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license. Virtual Machines Windows Azure now supports the ability to deploy and run durable VMs in the cloud.  You can easily create these VMs using a new Image Gallery built-into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them.  Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions).  Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances).  This provides you with the flexibility to run pretty much any workload within Windows Azure.   The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them.  Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives).  You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we’ve released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure.  We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment.  All you need to do is download the VHD file and boot it up locally, no import/export steps required. Web Sites Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.  
You can create a new web site in Azure and have it ready to deploy to in under 10 seconds: The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time: You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy.  We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering.  The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web-site's dashboard: Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to.  Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code.  This provides a very powerful DevOps workflow experience.   Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources).  This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a "reserved mode" that isolates them so that you are the only customer within a virtual machine: And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales: Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast, deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need. Cloud Services and Distributed Caching Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments.  Previously we referred to this capability as "hosted services" – with this week's release we are now referring to this capability as "cloud services".  We are also enabling a bunch of new features with them. Distributed Cache One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and set up a low-latency, in-memory distributed cache within your applications.  This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). 
In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways: 1) Using a co-located approach.  In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache.  Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role.  The big benefit with the "co-located" option is that it is free (you don't have to pay anything to enable it) and it allows you to use what might have been otherwise unused memory within your application VMs. 2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching.  These will also be joined into one large distributed cache ring that other roles within your application can access.  You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application: New SDKs and Tooling Support We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities.  Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system. Much, Much More The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today's release.  These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign-up to try it for free at http://windowsazure.com.  You can also watch a live keynote I'm giving at 1pm June 7th (later today) where I'll walk through all of the new features.  We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes.  We are really excited to see the great applications you build with them. Hope this helps, Scott
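To make the co-located cache description above concrete, here is a rough usage sketch (mine, not from the post; the cache client configuration, the key naming and the Product/LoadProductFromDatabase pieces are assumed) using the AppFabric-style caching API, where any role instance can read what another instance has put in:

    using System;
    using Microsoft.ApplicationServer.Caching;

    public class ProductLookup
    {
        // GetDefaultCache() resolves the cache client settings from the role's configuration.
        private static readonly DataCache Cache = new DataCacheFactory().GetDefaultCache();

        public Product GetProduct(string productId)
        {
            // Any role instance may already have populated the distributed cache.
            var cached = Cache.Get("product:" + productId) as Product;
            if (cached != null)
                return cached;

            Product product = LoadProductFromDatabase(productId);
            Cache.Put("product:" + productId, product, TimeSpan.FromMinutes(10));
            return product;
        }

        private Product LoadProductFromDatabase(string productId)
        {
            // Placeholder for real data access.
            return new Product { Id = productId };
        }
    }

    [Serializable]
    public class Product
    {
        public string Id { get; set; }
    }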

    Read the article

  • Obtaining positional information in the IEnumerable Select extension method

    - by Kyle Burns
    This blog entry is intended to provide a narrow and brief look into a way to use the Select extension method that I had until recently overlooked. Every developer who is using IEnumerable extension methods to work with data has been exposed to the Select extension method, because it is a pretty critical piece of almost every query over a collection of objects.  The method is defined on type IEnumerable and takes as its argument a function that accepts an item from the collection and returns an object which will be an item within the returned collection.  This allows you to perform transformations on the source collection.  A somewhat contrived example would be the following code that transforms a collection of strings into a collection of anonymous objects: var media = new[] {"book", "cd", "tape"}; var transformed = media.Select( item => new { Media = item } ); This code transforms the array of strings into a collection of objects which each have a string property called Media. If every developer using the LINQ extension methods already knows this, why am I blogging about it?  I'm blogging about it because the method has another overload that I hadn't seen before I needed it a few weeks back and I thought I would share a little about it with whoever happens upon my blog.  In the other overload, the function defined in the first overload as: Func<TSource, TResult> is instead defined as: Func<TSource, int, TResult>   The additional parameter is an integer representing the current element's position in the enumerable sequence.  I used this information in what I thought was a pretty cool way to compare collections and I'll probably blog about that sometime in the near future, but for now we'll continue with the contrived example I've already started to keep things simple and show how this works.  The following code sample shows how the positional information could be used in an alternating color scenario.  I'm using a foreach loop because IEnumerable doesn't have a ForEach extension, but many libraries do add the ForEach extension to IEnumerable so you can update the code if you're using one of these libraries or have created your own. var media = new[] {"book", "cd", "tape"}; foreach (var result in media.Select( (item, index) => new { Item = item, Index = index })) { Console.ForegroundColor = result.Index % 2 == 0 ? ConsoleColor.Blue : ConsoleColor.Yellow; Console.WriteLine(result.Item); }
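The author hints at using the positional overload to compare collections. As a hedged illustration of that idea (my own sketch, not the author's actual implementation), the index can be used to line two sequences up and report the positions where they differ:

    using System;
    using System.Linq;

    class SelectIndexDemo
    {
        static void Main()
        {
            var expected = new[] { "book", "cd", "tape" };
            var actual   = new[] { "book", "dvd", "tape" };

            // Pair each item with its position, then keep the positions where the sequences disagree.
            // (Extra trailing items in 'actual' are not reported in this simple sketch.)
            var differences = expected
                .Select((item, index) => new { Item = item, Index = index })
                .Where(x => x.Index >= actual.Length || !string.Equals(x.Item, actual[x.Index]))
                .ToList();

            foreach (var diff in differences)
                Console.WriteLine("Mismatch at position {0}: expected '{1}'", diff.Index, diff.Item);
        }
    }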

    Read the article

  • JS closures - Passing a function to a child, how should the shared object be accessed

    - by slicedtoad
    I have a design and am wondering what the appropriate way to access variables is. I'll demonstrate with this example since I can't seem to describe it better than the title. Term is an object representing a bunch of time data (a repeating duration of time defined by a bunch of attributes). Term has some print functionality but does not implement the print functions itself, rather they are passed in as anonymous functions by the parent. This would be similar to how shaders can be passed to a renderer rather than defined by the renderer. A container (let's call it Box) has a Schedule object that can understand and use Term objects. Box creates Term objects and passes them to Schedule as required. Box also defines the print functions stored in Term. A print function usually takes an argument and uses it to return a string based on that argument and Term's internal data. Sometimes the print function could also use data stored in Schedule, though. I'm calling this data shared. So, the question is, what is the best way to access this shared data? I have a lot of options since JS has closures and I'm not familiar enough to know if I should be using them or avoiding them in this case. Options: Create a local "reference" (term used lightly) to the shared data (data is not a primitive) when defining the print function by accessing the shared data through Schedule from Box. Example: var schedule = function(){ var sched = Schedule(); var t1 = Term( function(x){ // Term.print() return (x + sched.data).format(); }); }; Bind it to Term explicitly. (Pass it in Term's constructor or something). Or bind it in Sched after Box passes it. And then access it as an attribute of Term. Pass it in at the same time x is passed to the print function, (from sched). This is the most familiar way for me, but it doesn't feel right given JS's closure ability. Do something weird like bind some context and arguments to print. I'm hoping the correct answer isn't purely subjective. If it is, then I guess the answer is just "do whatever works". But I feel like there are some significant differences between the approaches that could have a large impact when stretched beyond my small example.

    Read the article

  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspxProcessing Kinect v2 Color Streams in Parallel I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum: using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   BitmapToDisplay.Lock(); _ColorFrame.CopyConvertedFrameDataToIntPtr( BitmapToDisplay.BackBuffer, Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ), ColorImageFormat.Bgra ); BitmapToDisplay.AddDirtyRect( new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) ); BitmapToDisplay.Unlock(); } With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer along with the conversion on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case. Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080. This produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this on one thread. I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all? The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was to get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way. The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought and maintained all 8 cores up to about 97% usage. 
That's when I remembered that obscure option in the Task Parallel Library where you could limit the amount of parallelism used. After a little trial and error, I discovered 4 parallel tasks was enough for most cases. This yielded the following code: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) ); }   private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) { if( null == e.FrameReference ) return;   // If you do not dispose of the frame, you never get another one... using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   Task.Factory.StartNew( () => { ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   Application.Current.Dispatcher.Invoke( () => { BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } ); } ); } } This seemed to yield the results I wanted, but there was still the occasional stutter. This led to what I realized was the second problem. There is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8MB to the back buffer. Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled for a frame, and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was ok with this for cases where silky smooth video performance REALLY mattered. This code looked like this: private byte ClipToByte( int p_ValueToClip ) { return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ?
byte.MaxValue : p_ValueToClip ) ); }   void CompositionTarget_Rendering( object sender, EventArgs e ) { using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) { if( null == _ColorFrame ) return;   byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel]; byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight]; _ColorFrame.CopyRawFrameDataToArray( _InputImage );   ParallelOptions _ParallelOptions = new ParallelOptions(); _ParallelOptions.MaxDegreeOfParallelism = 4;   Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => { // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16; int _U = _InputImage[( _Index << 2 ) + 1] - 128; int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16; int _V = _InputImage[( _Index << 2 ) + 3] - 128;   byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 ); byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 ); byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 0] = _B; _OutputImage[( _Index << 3 ) + 1] = _G; _OutputImage[( _Index << 3 ) + 2] = _R; _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A   _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 ); _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 ); _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );   _OutputImage[( _Index << 3 ) + 4] = _B; _OutputImage[( _Index << 3 ) + 5] = _G; _OutputImage[( _Index << 3 ) + 6] = _R; _OutputImage[( _Index << 3 ) + 7] = 0xFF; } );   BitmapToDisplay.WritePixels( new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ), _OutputImage, BitmapToDisplay.BackBufferStride, 0 ); } }
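For context, the snippets above assume some wiring-up code that the excerpt doesn't show. Below is a rough reconstruction (mine, not the author's): the field names Sensor, FrameReader and BitmapToDisplay are taken from the snippets, everything else is assumed, including the released v2 SDK API names and a hypothetical ColorImage element in the XAML.

    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using Microsoft.Kinect;

    public partial class MainWindow : Window
    {
        private KinectSensor Sensor;
        private ColorFrameReader FrameReader;
        private WriteableBitmap BitmapToDisplay;

        public MainWindow()
        {
            InitializeComponent();

            // Open the sensor and a reader for the color stream.
            Sensor = KinectSensor.GetDefault();
            Sensor.Open();
            FrameReader = Sensor.ColorFrameSource.OpenReader();

            // Bgra32 bitmap sized to the color frame; ColorImage is an Image element in the XAML.
            FrameDescription description = Sensor.ColorFrameSource.FrameDescription;
            BitmapToDisplay = new WriteableBitmap(
                description.Width, description.Height, 96.0, 96.0, PixelFormats.Bgra32, null );
            ColorImage.Source = BitmapToDisplay;

            // Poll and render on the Composition Thread, as described above.
            CompositionTarget.Rendering += CompositionTarget_Rendering;
        }
    }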

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [code]

    - by Your DisplayName here!
    You can download the full solution here. The relevant parts in the sample are: Configuration I use the standard WIF configuration with passive redirect. This kicks in automatically whenever authorization fails in the application (e.g. when the user tries to get to an area that requires authentication or needs registration). Checking and transforming incoming claims In the claims authentication manager we have to deal with two situations. Users that are authenticated but not registered, and registered (and authenticated) users. Registered users will have claims that come from the application domain, the claims of unregistered users come directly from ACS and get passed through. In both cases a claim for the unique user identifier will be generated. The high level logic is as follows: public override IClaimsPrincipal Authenticate( string resourceName, IClaimsPrincipal incomingPrincipal) {     // do nothing if anonymous request     if (!incomingPrincipal.Identity.IsAuthenticated)     {         return base.Authenticate(resourceName, incomingPrincipal);     } string uniqueId = GetUniqueId(incomingPrincipal);     // check if user is registered     RegisterModel data;     if (Repository.TryGetRegisteredUser(uniqueId, out data))     {         return CreateRegisteredUserPrincipal(uniqueId, data);     }     // authenticated by ACS, but not registered     // create unique id claim     incomingPrincipal.Identities[0].Claims.Add( new Claim(Constants.ClaimTypes.Id, uniqueId));     return incomingPrincipal; } User Registration The registration page is handled by a controller with the [Authorize] attribute. That means you need to authenticate before you can register (crazy eh? ;). The controller then fetches some claims from the identity provider (if available) to pre-fill form fields. After successful registration, the user is stored in the local data store and a new session token gets issued. This effectively replaces the ACS claims with application defined claims without requiring the user to re-sign in. Authorization All pages that should be only reachable by registered users check for a special application defined claim that only registered users have. You can nicely wrap that in a custom attribute in MVC: [RegisteredUsersOnly] public ActionResult Registered() {     return View(); } HTH
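The [RegisteredUsersOnly] attribute itself isn't shown in the excerpt. A minimal sketch of how such an attribute could look (my own, not taken from the sample; the claim type URI is assumed, since the post only says it is an application defined claim) is an AuthorizeAttribute that checks the session token for that claim:

    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using Microsoft.IdentityModel.Claims;

    public class RegisteredUsersOnlyAttribute : AuthorizeAttribute
    {
        // Hypothetical application-defined claim that only registered users receive.
        private const string RegisteredUserClaimType = "http://myapp/claims/registereduser";

        protected override bool AuthorizeCore(HttpContextBase httpContext)
        {
            var principal = httpContext.User as IClaimsPrincipal;
            if (principal == null || !principal.Identity.IsAuthenticated)
                return false;

            // Only let the request through when the session token carries the application claim.
            return principal.Identities[0].Claims
                .Any(c => c.ClaimType == RegisteredUserClaimType);
        }
    }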

    Read the article

  • Is it unusual for a small company (15 developers) not to use managed source/version control?

    - by LordScree
    It's not really a technical question, but there are several other questions here about source control and best practice. The company I work for (which will remain anonymous) uses a network share to host its source code and released code. It's the responsibility of the developer or manager to manually move source code to the correct folder depending on whether it's been released and what version it is and stuff. We have various spreadsheets dotted around where we record file names and versions and what's changed, and some teams also put details of different versions at the top of each file. Each team (2-3 teams) seems to do this differently within the company. As you can imagine, it's an organised mess - organised, because the "right people" know where their stuff is, but a mess because it's all different and it relies on people remembering what to do at any one time. One good thing is that everything is backed up on a nightly basis and kept indefinitely, so if mistakes are made, snapshots can be recovered. I've been trying to push for some kind of managed source control for a while, but I can't seem to get enough support for it within the company. My main arguments are: We're currently vulnerable; at any point someone could forget to do one of the many release actions we have to do, which could mean whole versions are not stored correctly. It could take hours or even days to piece a version back together if necessary We're developing new features along with bug fixes, and often have to delay the release of one or the other because some work has not been completed yet. We also have to force customers to take versions that include new features even if they just want a bug fix, because there's only really one version we're all working on We're experiencing problems with Visual Studio because multiple developers are using the same projects at the same time (not the same files, but it's still causing problems) There are only 15 developers, but we all do stuff differently; wouldn't it be better to have a standard company-wide approach we all have to follow? My questions are: Is it normal for a group of this size not to have source control? I have so far been given only vague reasons for not having source control - what reasons would you suggest could be valid for not implementing source control, given the information above? Are there any more reasons for source control that I could add to my arsenal? I'm asking mainly to get a feel for why I have had so much resistance, so please answer honestly. I'll give the answer to the person I believe has taken the most balanced approach and has answered all three questions. Thanks in advance

    Read the article

  • Which of CouchDB or MongoDB suits my needs?

    - by vonconrad
    Where I work, we use Ruby on Rails to create both backend and frontend applications. Usually, these applications interact with the same MySQL database. It works great for a majority of our data, but we have one situation which I would like to move to a NoSQL environment. We have clients, and our clients have what we call "inventories"--one or more of them. An inventory can have many thousands of items. This is currently done through two relational database tables, inventories and inventory_items. The problems start when two different inventories have different parameters: # Inventory item from inventory 1, televisions { inventory_id: 1 sku: 12345 name: Samsung LCD 40 inches model: 582903-4 brand: Samsung screen_size: 40 type: LCD price: 999.95 } # Inventory item from inventory 2, accomodation { inventory_id: 2 sku: 48cab23fa name: New York Hilton accomodation_type: hotel star_rating: 5 price_per_night: 395 } Since we obviously can't use brand or star_rating as the column name in inventory_items, our solution so far has been to use generic column names such as text_a, text_b, float_a, int_a, etc, and introduce a third table, inventory_schemas. The tables now look like this: # Inventory schema for inventory 1, televisions { inventory_id: 1 int_a: sku text_a: name text_b: model text_c: brand int_b: screen_size text_d: type float_a: price } # Inventory item from inventory 1, televisions { inventory_id: 1 int_a: 12345 text_a: Samsung LCD 40 inches text_b: 582903-4 text_c: Samsung int_b: 40 text_d: LCD float_a: 999.95 } This has worked well... up to a point. It's clunky, it's unintuitive and it lacks scalability. We have to devote resources to set up inventory schemas. Using separate tables is not an option. Enter NoSQL. With it, we could let each and every item have its own parameters and still store them together. From the research I've done, it certainly seems like a great alternative for this situation. Specifically, I've looked at CouchDB and MongoDB. Both look great. However, there are a few other bits and pieces we need to be able to do with our inventory: We need to be able to select items from only one (or several) inventories. We need to be able to filter items based on their parameters (eg. get all items from inventory 2 where type is 'hotel'). We need to be able to group items based on parameters (eg. get the lowest price from items in inventory 1 where brand is 'Samsung'). We need to (potentially) be able to retrieve thousands of items at a time. We need to be able to access the data from multiple applications; both backend (to process data) and frontend (to display data). Rapid bulk insertion is desired, though not required. Based on the structure, and the requirements, is either CouchDB or MongoDB suitable for us? If so, which one will be the best fit? Thanks for reading, and thanks in advance for answers. EDIT: One of the reasons I like CouchDB is that it would be possible for us in the frontend application to request data via JavaScript directly from the server after page load, and display the results without having to use any backend code whatsoever. This would lead to better page load and less server strain, as the fetching/processing of the data would be done client-side.

    Read the article

  • Access to message queuing system is denied MSMQ?

    - by user1401694
    My problem is a little confusing. I have 2 servers (Windows Server 2008 R2) with MSMQ installed and I want to use Server B to consume a MessageQueue on Server A. When I try to Receive, it always throws an error message: "Access to message queuing system is denied.". The IPs: Server A: 172.31.23.130, Server B: 172.31.23.195. FormatName:Direct=TCP:172.31.23.195\private$\queuesource (It's working for Sends) I can ping each server from the other; The firewall is disabled; The "queuesource" has Full Control to "Everyone", "Anonymous Logon", "Network", "Network Services"; Journal is disabled; Authentication is ok; The queue is Transactional. My code in .Net C# is basically like this: MessageQueue _sourceQueue = new MessageQueue(); _sourceQueue.Path = "FormatName:Direct=TCP:172.31.23.195\private$\queuesource"; _sourceQueue.Receive(); // << the exception is thrown here. Actually I'm using the Private Queue only to avoid Active Directory's problems. For example, if the server's DNS fails, the whole network fails. I don't know what to do anymore.
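One thing worth ruling out (my own suggestion, not part of the original question) is the ACL on the queue itself: remote, cross-machine receives often arrive as an unauthenticated caller rather than the account you expect, so explicitly granting receive rights from code run on the machine that hosts the queue can confirm or eliminate a permissions cause. A minimal sketch, assuming the queue name from the question:

    using System.Messaging;

    class GrantQueueRights
    {
        static void Main()
        {
            // Run locally on the server that hosts the queue.
            var queue = new MessageQueue(@".\private$\queuesource");

            // Explicitly allow reads/receives for unauthenticated remote callers.
            queue.SetPermissions("Everyone",
                MessageQueueAccessRights.GenericRead, AccessControlEntryType.Allow);
            queue.SetPermissions("ANONYMOUS LOGON",
                MessageQueueAccessRights.ReceiveMessage, AccessControlEntryType.Allow);
        }
    }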

    Read the article

  • Diagnosing permission problems with Cobian Backup to network share

    - by DaveBurns
    I'm running the latest Cobian 11. I have a Synology DS412 NAS. All of my machines (Mac and Windows) access this just fine when I'm logged in and I browse to it manually. I have Cobian installed as a service on two Windows machines: WinXP SP3 and Win7 x64. On both machines, the service is set to log on with my user account which is in the Windows administrator group. Backups on both machines fail with the message "Couldn't create the destination directory "\nas1\backups\foo\bar\": The filename, directory name, or volume label syntax is incorrect". I have tried setting the NAS's share to allow anonymous read/write access but it made no difference. Although I want the backups to run unattended in the middle of the night, I have tested them by running them manually while I'm logged in but no luck. Before starting that, I make sure that I can browse to the NAS with Explorer to ensure that any authentication session with Windows and the NAS has not expired. Still no luck. I have tried creating that destination directory both on the NAS before the backup and deleting it so the backup job could create it with the client's credentials but no luck. The usual answer in the Cobian support forums is that there is a permission problem. I agree. But at this point, what can I do to diagnose this further?

    Read the article

  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation using this installation guide. My base cluster is installed and working properly, as well as clustering the DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts of installation and trying to resolve the errors, e.g. changing domain accounts for SQL, setting SPNs, etc., I typoed the network name as VQSL and the installation worked. Similarly on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was: Run setup.exe from patched media, selecting appropriate options After the install fails, I'd choose "Remove node from a SQL Server failover cluster" and remove the single, failed, node Attempt to diagnose problem, inspect event logs, etc. Delete the computer account that was created for the SQL Service from Active Directory Delete the MSSQL10.MSSQLSERVER folder from the shared data drive The error message I receive from the SQL Server installer is: The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F) Along with hundreds of the following errors in the Application event log: [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. System configuration notes: Windows Server 2008 Enterprise Edition x64 SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media Dell PowerEdge servers Fibre attached storage

    Read the article

  • Problems with builds on TFS 2010 and resolving dependencies

    - by Jimmy Engtröm
    Hi I have a project that works great on my machine (and production servers). It's a VS2010 project running C#3.5. When letting my build server build the solution it can't resolve a couple of my third party dll's. Error message: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets(1360,9): warning MSB3268: The primary reference "Third.Party.Assembly, Version=50.11.2.0, Culture=neutral, PublicKeyToken=0561a7c6dbd6f0ea, processorArchitecture=MSIL" could not be resolved because it has an indirect dependency on the framework assembly "Microsoft.VisualBasic.Compatibility, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" which could not be resolved in the currently targeted framework. ".NETFramework,Version=v3.5". To resolve this problem, either remove the reference "Third.Party.Assembly, Version=50.11.2.0, Culture=neutral, PublicKeyToken=0561a7c6dbd6f0ea, processorArchitecture=MSIL" or retarget your application to a framework version which contains "Microsoft.VisualBasic.Compatibility, Version=8.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a". [d:\Builds\3\mySolution.sln] Everything compiles and runs great on my machine, but the build server seem to struggle. I think the Third.Party.Assembly is written in VB.net. Since the assembly is third party I can't remove the reference to "Microsoft.VisualBasic.Compatibility" and since I don't get any warnings on my computer could it really be that I'm running v3.5? Any suggestions? /Jimmy

    Read the article

  • Configuring external SMTP server on Azure VM - messages staying in queue

    - by Steph Locke
    I have an external SMTP provider: auth.smtp.1and1.co.uk I am trying to send SQL Server Reporting Services emails via this on a Windows 2012 Azure VM. It is configured sufficiently correctly for emails to be generated, but something is either missing or misconfigured, because the emails then stay in the queue. Setup details Configured SMTP Virtual Server General: IP Address: Fixed value Access: Access Control: Authentication: ticked Anonymous access Access: Connection Control: All except the list below (which is empty) Access: relay restrictions: Only the list below (which contains 127.0.0.1), ticked 'allow all..' option Delivery: Outbound Security...:Basic Authentication with username and password completed, ticked TLS encryption Delivery: Outbound connections...:TCP port=587 Delivery: Advanced: FQDN=ServerName, smarthost=auth.smtp.1and1.co.uk I then set the following SSRS rsreportserver.config values: <SMTPServer>100.92.192.3</SMTPServer> <SendUsing>2</SendUsing> <SMTPServerPickupDirectory> c:\inetpub\mailroot\pickup </SMTPServerPickupDirectory> <From>[email protected]</From> Tried so far 1) turning the smtp service off and on again (just in case) 2) run SMTPDiag with no errors (also no emails) 3) tried turning off the firewall for the ports (and more generally to see if it made a difference) 4) tried generation from powershell, which also resulted in the message staying in the queue 5) added 25 and 857 as endpoint 6) perused the event log and found some warnings that appear to be about the recipient Message delivery to the remote domain 'gmail.com' failed for the following reason: Unable to bind to the destination server in DNS. Message delivery to the host '212.227.15.179' failed while delivering to the remote domain 'gmail.com' for the following reason: The remote server did not respond to a connection attempt. 7) tried pinging but this appears to be blocked on azure 8) tried more powershell sending on different domain variants (localhost, boxname, internal ip used in smtp properties, 127.0.0.1) - none resulting in success 9) tried adding a remote domain - no change Could anyone recommend what step 10 should be in fixing this issue please?
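One additional isolation step (my own suggestion, not from the question; the credentials and addresses below are placeholders) is to send a test message from the VM straight through the 1&1 smarthost, bypassing the local IIS SMTP virtual server entirely. If this succeeds, the relay credentials and outbound port 587/TLS are fine and the problem sits in the local pickup/relay chain; if it fails, the outbound connection itself is the place to look.

    using System.Net;
    using System.Net.Mail;

    class SmarthostTest
    {
        static void Main()
        {
            var client = new SmtpClient("auth.smtp.1and1.co.uk", 587)
            {
                EnableSsl = true,
                Credentials = new NetworkCredential("user@yourdomain", "password") // placeholders
            };

            // Placeholder addresses; use a mailbox you can check.
            client.Send("[email protected]", "[email protected]",
                "Azure SMTP relay test", "Sent directly via the smarthost, bypassing IIS SMTP.");
        }
    }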

    Read the article

  • NetApp FAS 2040 LDAP Win2k8R2

    - by it_stuck
    I am trying to get my FAS2040 to action user lookups using LDAP, below are the filer configuration options: filer> options ldap ldap.ADdomain dc1.colour.domain.local ldap.base OU=Users,OU=something1,OU=something2,OU=darkside,DC=colour,DC=domain,DC=local ldap.base.group ldap.base.netgroup ldap.base.passwd ldap.enable on ldap.minimum_bind_level anonymous ldap.name domain-admin-account ldap.nssmap.attribute.gecos gecos ldap.nssmap.attribute.gidNumber gidNumber ldap.nssmap.attribute.groupname cn ldap.nssmap.attribute.homeDirectory homeDirectory ldap.nssmap.attribute.loginShell loginShell ldap.nssmap.attribute.memberNisNetgroup memberNisNetgroup ldap.nssmap.attribute.memberUid memberUid ldap.nssmap.attribute.netgroupname cn ldap.nssmap.attribute.nisNetgroupTriple nisNetgroupTriple ldap.nssmap.attribute.uid uid ldap.nssmap.attribute.uidNumber uidNumber ldap.nssmap.attribute.userPassword userPassword ldap.nssmap.objectClass.nisNetgroup nisNetgroup ldap.nssmap.objectClass.posixAccount posixAccount ldap.nssmap.objectClass.posixGroup posixGroup ldap.passwd ****** ldap.port 389 ldap.servers ldap.servers.preferred ldap.ssl.enable off ldap.timeout 20 ldap.usermap.attribute.unixaccount unixaccount ldap.usermap.attribute.windowsaccount sAMAccountName ldap.usermap.base ldap.usermap.enable on output of nsswitch.conf: hosts: files dns passwd: ldap files netgroup: ldap files group: ldap files shadow: files nis Error Message(s): [filer: auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Starting AD LDAP server address discovery for dc1.colour.domain.LOCAL. [filer: auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Found no AD LDAP server addresses using DNS site query (site). [filer: auth.ldap.trace.LDAPConnection.statusMsg:info]: AUTH: TraceLDAPServer- Found no AD LDAP server addresses using generic DNS query. Could not get passwd entry for name = <random user> the filer can ping the FQDN of dc1 the filer can ping the IP of dc1 the filer cannot ping "dc1" I'm not sure where I'm going wrong, so any pointers would be great.

    Read the article
