Search Results

Search found 5900 results on 236 pages for 'rest'.


  • Modifying Service URLs with LINQ to Twitter

    - by Joe Mayo
    It’s funny that two posts so close together speak about flexibility with the LINQ to Twitter provider. There are certain things you know from experience about when to make software more flexible and when to save time. This is another one of those times when I got lucky and made the right choice up front. I’m talking about the ability to switch URLs.

    It only makes sense that Twitter should begin versioning their API as it matures. In fact, most of the entire API has moved to the v1 URL at “https://api.twitter.com/1/”, except for search and trends. Recently, Twitter introduced the available and local trends, but hung them off the new v1, and left the rest of the trends API on the old URL. To implement this, I muscled my way into the expression tree during CreateRequestProcessor to figure out which trend I was dealing with; perhaps not elegant, but the code is in the right place and that’s what factories are for. Anyway, the point is that I wouldn’t have to do this kind of stuff (as much fun as it is) if Twitter would have more consistency. Having gone to Chirp last week and seen the evolution of the API, it looks like my wish is coming true. …now if they would just get their stuff together on the mess they made with geo-location and places… but again, that’s all transparent if you’re using LINQ to Twitter, because I pulled all of that together in a consistent way so that you don’t have to.

    Normally, when Twitter makes a change, code breaks and I have to scramble to get the fixes in place. This time, in the case of a URL change, the adjustment is easy and no-one has to wait for me. Essentially, all you need to do is change the URL passed to the TwitterContext constructor. Here’s an example of instantiating a TwitterContext now:

        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))

    The third constructor parameter is the SearchUrl, which is used for the Search and Trend APIs. You probably know what’s coming next; the same constructor, but with the SearchUrl parameter set to the new URL, as follows:

        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/"))

    One consequence of setting the URL this way is that you set the URL for both Trends and Search. Since Search is still using the old URL, this is going to break Search queries. You could always instantiate a special TwitterContext instance for Search queries, with the old URL set. Alternatively, you can use the TwitterContext’s SearchUrl property. Here’s an example:

        twitterCtx.SearchUrl = "https://api.twitter.com/1/";

        var trends =
            (from trend in twitterCtx.Trends
             where trend.Type == TrendType.Daily &&
                   trend.Date == DateTime.Now.AddDays(-2).Date
             select trend)
            .ToList();

    Notice how I set the SearchUrl property just-in-time for the query. This allows you to target the URL for each specific query. Whichever way you prefer to configure the URL, it’s your choice.

    So, now you know how to set the URL to be used for Trend queries and how to prevent whacking your Search queries. I’ll be updating the Trend API to use the same URL as all other APIs soon, so the only API left to use the SearchUrl will be Search, but for the short term, it’s Trends and Search. Until I make this change, you’ll have a viable work-around by setting the URL yourself, as explained above.
    These were the Search and Trend URLs, but you might be curious about the second parameter of the TwitterContext constructor; that’s the URL for all other APIs (the BaseUrl), except for Trend and Search. Similarly, you can use the TwitterContext’s BaseUrl property to set the BaseUrl. Setting the BaseUrl can be useful when communicating with other services. In addition to Twitter changing URLs, the Twitter API has been adopted by other companies, such as Identi.ca, Tumblr, and WordPress. This capability lets you use LINQ to Twitter with any of these services. This is a testament to the success of the Twitter API and its popularity.

    No doubt we’ll have hills and valleys to traverse as the Twitter API matures, but hopefully there will be enough flexibility in LINQ to Twitter to make these changes as transparent as possible for you. @JoeMayo
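    To make that concrete, here is a minimal C# sketch of pointing both URLs at a Twitter-compatible service. It is an illustration only: the Identi.ca-style URLs are placeholders for whatever service you actually target, and "auth" stands in for whichever authorizer object you already construct in your own code.

        // Sketch: swap both the BaseUrl and SearchUrl for a Twitter-compatible API.
        // The URLs below are illustrative placeholders, not official endpoints.
        var baseUrl   = "https://identi.ca/api/";   // BaseUrl: everything except Search and Trends
        var searchUrl = "https://identi.ca/api/";   // SearchUrl: Search and Trend queries

        using (var twitterCtx = new TwitterContext(auth, baseUrl, searchUrl))
        {
            // Either URL can also be adjusted just-in-time, per query:
            twitterCtx.SearchUrl = "https://search.twitter.com/";

            // ...run LINQ to Twitter queries against twitterCtx here...
        }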

    Read the article

  • Windows Azure Recipe: Software as a Service (SaaS)

    - by Clint Edmonson
    The cloud was tailor-made for aspiring companies to create innovative internet-based applications and solutions. Whether you’re a garage startup with very little capital or a Fortune 1000 company, the ability to quickly set up, deliver, and iterate on new products is key to capturing market and mind share. And if you can capture that share and go viral, having resiliency and infinite scale at your fingertips is great peace of mind.

    Drivers
    Cost avoidance
    Time to market
    Scalability

    Solution
    Here’s a sketch of how a basic Software as a Service solution might be built out:

    Ingredients
    Web Role – this hosts the core web application. Each web role will host an instance of the software and, as the user base grows, additional roles can be spun up to meet demand.
    Access Control – this service is essential to managing user identity. It’s backed by a full blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it’s also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model provides extensibility to hook into other industry-specific identity providers as well.
    Databases – nearly every modern SaaS application is backed by a relational database for its core operational data. If the solution is sold to organizations, there’s a good chance multi-tenancy will be needed. An emerging best practice for SaaS applications is to stand up separate SQL Azure database instances for each tenant’s proprietary data to ensure isolation from other tenants.
    Worker Role – this is the best place to handle autonomous background processing such as data aggregation, billing through external services, and other specialized tasks that can be performed asynchronously. Placing these tasks in a worker role frees the web roles to focus completely on user interaction and data input, and provides finer grained control over the system’s scalability and throughput.
    Caching (optional) – as a web site’s traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    Blobs (optional) – depending on the nature of the software, users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users’ data is delivered securely.

    Training & Examples
    These links point to online Windows Azure training labs and examples where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
    Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    Developing Applications for the Cloud, 2nd Edition (eBook) – This book demonstrates how you can create from scratch a multi-tenant, Software as a Service (SaaS) application to run in the cloud using the latest versions of the Windows Azure Platform and tools. The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud.
    Fabrikam Shipping (SaaS reference application) – This is a full end-to-end sample scenario which demonstrates how to use the Windows Azure platform for exposing an application as a service. We developed this demo just as you would: we had an existing on-premises sample, Fabrikam Shipping, and we wanted to see what it would take to transform it into a full subscription-based solution. The demo you find here is the result of that investigation.
    See my Windows Azure Resource Guide for more guidance on how to get started, including more links to web portals, training kits, samples, and blogs related to Windows Azure.
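    The multi-tenant database ingredient above lends itself to a short illustration. The following C# fragment is my own hypothetical sketch, not part of the training kit: the tenant IDs, server name, and TenantConnections class are invented placeholders that simply show the "one isolated SQL Azure database per tenant" idea.

        using System.Collections.Generic;
        using System.Data.SqlClient;

        // Hypothetical tenant catalog: each tenant's data lives in its own SQL Azure database.
        static class TenantConnections
        {
            static readonly Dictionary<string, string> ConnectionStrings = new Dictionary<string, string>
            {
                { "contoso",  "Server=tcp:myserver.database.windows.net;Database=Saas_Contoso;User ID=app;Password=..." },
                { "fabrikam", "Server=tcp:myserver.database.windows.net;Database=Saas_Fabrikam;User ID=app;Password=..." }
            };

            // Resolve and open the isolated database for the tenant making the current request.
            public static SqlConnection OpenForTenant(string tenantId)
            {
                var connection = new SqlConnection(ConnectionStrings[tenantId]);
                connection.Open();
                return connection;
            }
        }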

    Read the article

  • Playing with http page cycle using JustMock

    - by mehfuzh
    In this post, I will cover test code that mocks the various elements needed to complete an HTTP page request and asserts the expected page cycle steps. To begin, I have a simple enumeration that has my predefined page steps:

        public enum PageStep
        {
            PreInit,
            Load,
            PreRender,
            UnLoad
        }

    Once that is done, I first created the page object [not mocking]:

        Page page = new Page();

    Here, our target is to fire up the page processing through the ProcessRequest call. If we take a look inside the method with reflector.net, the call trace goes like this: ProcessRequest –> ProcessRequestWithNoAssert –> SetInstrinsics –> and finally ProcessRequest. Inside SetInstrinsics, it requires calls from HttpRequest, HttpResponse and HttpBrowserCapabilities. With this clue at hand, we can easily know the classes / calls we need to mock in order to get through the expected call. Accordingly, for HttpBrowserCapabilities our required test code will look like:

        Mock.Arrange(() => browser.PreferredRenderingMime).Returns("text/html");
        Mock.Arrange(() => browser.PreferredResponseEncoding).Returns("UTF-8");
        Mock.Arrange(() => browser.PreferredRequestEncoding).Returns("UTF-8");

    Now, HttpBrowserCapabilities is obtained through [Instance]HttpRequest.Browser. Therefore, we create the HttpRequest mock:

        var request = Mock.Create<HttpRequest>();

    Then, add the required get call:

        Mock.Arrange(() => request.Browser).Returns(browser);

    As [instance]Browser.PreferredRequestEncoding and [instance]Browser.PreferredResponseEncoding are also applied to the request object, and to make sure they are set properly, we can add the following lines as well [not required though]:

        bool requestContentEncodingSet = false;
        Mock.ArrangeSet(() => request.ContentEncoding = Encoding.GetEncoding("UTF-8"))
            .DoInstead(() => requestContentEncodingSet = true);

    Similarly, for the response we can write:

        var response = Mock.Create<HttpResponse>();

        bool responseContentEncodingSet = false;
        Mock.ArrangeSet(() => response.ContentEncoding = Encoding.GetEncoding("UTF-8"))
            .DoInstead(() => responseContentEncodingSet = true);

    Finally, I created a mock of HttpContext and set the Request and Response properties so that they return the mocked versions:

        var context = Mock.Create<HttpContext>();

        Mock.Arrange(() => context.Request).Returns(request);
        Mock.Arrange(() => context.Response).Returns(response);

    As Page internally calls the RenderControl method, we just need to replace that with our own, and optionally we can check that it is invoked properly:

        bool rendered = false;
        Mock.Arrange(() => page.RenderControl(Arg.Any<HtmlTextWriter>())).DoInstead(() => rendered = true);

    That’s it; the rest of the code is simple, where I asserted the page cycle with the PageSteps that I defined earlier:

        var pageSteps = new Queue<PageStep>();

        page.PreInit += delegate { pageSteps.Enqueue(PageStep.PreInit); };
        page.Load += delegate { pageSteps.Enqueue(PageStep.Load); };
        page.PreRender += delegate { pageSteps.Enqueue(PageStep.PreRender); };
        page.Unload += delegate { pageSteps.Enqueue(PageStep.UnLoad); };

        page.ProcessRequest(context);

        Assert.True(requestContentEncodingSet);
        Assert.True(responseContentEncodingSet);
        Assert.True(rendered);

        Assert.Equal(pageSteps.Dequeue(), PageStep.PreInit);
        Assert.Equal(pageSteps.Dequeue(), PageStep.Load);
        Assert.Equal(pageSteps.Dequeue(), PageStep.PreRender);
        Assert.Equal(pageSteps.Dequeue(), PageStep.UnLoad);

        Mock.Assert(request);
        Mock.Assert(response);

    You can get the test class shown in this post here to try it out for yourself with, of course, JustMock :-). Enjoy!!
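    If you want the bare arrange/act/assert pattern the post builds on, without the ASP.NET plumbing, here is a stripped-down sketch of my own. IGreeter and its members are hypothetical; only the JustMock and xUnit calls mirror the ones used above.

        using Telerik.JustMock;
        using Xunit;

        public interface IGreeter
        {
            string Greet(string who);
        }

        public class GreeterTests
        {
            [Fact]
            public void Greet_uses_the_arranged_call()
            {
                // Arrange: create the mock and script the member the code under test will call.
                var greeter = Mock.Create<IGreeter>();
                Mock.Arrange(() => greeter.Greet("world")).Returns("hello world").MustBeCalled();

                // Act
                var result = greeter.Greet("world");

                // Assert: both the returned value and that every arranged expectation was met.
                Assert.Equal("hello world", result);
                Mock.Assert(greeter);
            }
        }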

    Read the article

  • Cowboy Agile?

    - by Robert May
    In a previous post, I outlined the rules of Scrum. This post details one of those rules.

    I’ve often heard similar phrases around Scrum that clue me in to someone who doesn’t understand Scrum. The phrases go something like this:

    “We don’t do Agile because the idea of letting people just do whatever they want is wrong. We believe in a more structured approach.” (i.e. Work is Prison, and I’m the Warden!)
    “I love Agile. Agile lets us do whatever we want!” (Cowboy Agile?)
    “We’re Agile, but we use a process that I’ve created.” (Cowboy Agile?)

    All of those phrases have one thing in common: the assumption that Agile, and I mean Scrum, lets you do whatever you want. This is simply not true. Executing Scrum properly requires more dedication, rigor, and diligence than happens in most traditional development methods.

    Scrum and Waterfall Compared
    Since Scrum and Waterfall are two of the most commonly used methodologies, a little bit of contrasting and comparing is in order.

    Waterfall: A project manager defines all tasks and then manages the tasks that team members are working on.
    Scrum: The team members define the tasks and estimates of the stories for the current iteration. Any team member may work on any task in the iteration.

    Waterfall: Usually there are only a few milestones that need to be met, the milestones are measured in months, and these milestones are expected to be missed. Little work is ever done to improve estimates, and poor estimators can hide behind high estimates.
    Scrum: Stories must be delivered every iteration, milestones are measured in hours, and the team is expected to figure out why their estimates were wrong, even when they were under. Repeated misses can get the entire team fired.

    Waterfall: Partially completed work is normal.
    Scrum: Partially completed work doesn’t count.

    Waterfall: Nobody knows the task you’re working on.
    Scrum: Everyone knows what you’re working on, whether or not you’re making progress, and how much longer you think it’s going to take, in hours.

    Waterfall: Little requirement to show working code. Prototypes are ok.
    Scrum: Working code must be shown each iteration. No smoke and mirrors allowed.

    Waterfall: Testing is done in lengthy cycles at the end of development. Developers aren’t held accountable.
    Scrum: Testing is part of the team. If the testers don’t accept the story as complete, the team can’t count it. Complete means that the story’s functionality works as designed. The team can’t have any open defects on the story.

    Waterfall: Velocity is rarely truly measured and difficult to evaluate.
    Scrum: Velocity is integral to the process, can be seen at a glance, and everyone in the company knows what it is.

    Waterfall: A business analyst writes requirements. Designers mock up screens. Developers hide behind “I did it just like the spec doc told me to and made the screen exactly like the picture.”
    Scrum: Developers are expected to collaborate in real time. If a design is bad or lacks needed details, the developers are required to get it right in the iteration, because all software must be functional. Designers and Business Analysts are part of the team and must do their work in iterations slightly ahead of the developers.

    Waterfall: Upper Management is often surprised. “You told me things were going well two months ago!”
    Scrum: Management receives updates at the end of every iteration showing them exactly what the team did and how that compares to what is remaining in the backlog. Managers know every iteration what their money is buying.

    Waterfall: Status meetings are rare or don’t occur. Email is a primary form of communication.
    Scrum: Teams coordinate every single day with each other and use other high-bandwidth communication channels to make sure they’re making progress. Email is used only as a last resort. Instead, team members stand up, walk to each other, and talk, face to face. If that’s not possible, they pick up the phone.

    Waterfall: If someone asks what happened, it’s at the end of a lengthy development cycle measured in months, and nobody really knows why it happened.
    Scrum: Someone asks what happened every iteration. The team talks about what happened, and then adapts to make sure that what happened either never happens again or happens every time.

    That’s probably enough for now. As you can see, a lot is required of Scrum teams!

    One of the key differences in Scrum is that the burden for many activities is shifted to a group of people who share responsibility, instead of a single person having responsibility. This is a very good thing, since small groups usually come up with better and more insightful work than single individuals. This shift also results in better velocity. Team members can take vacations and the rest of the team simply picks up the slack. With Waterfall, if a key team member takes a vacation, delays can ensue. Scrum requires much more out of every team member and, as a result, Scrum teams outperform non-Scrum teams working 60 hour weeks.

    Recommended Reading
    Everyone considering Scrum should read Mike Cohn’s excellent book, User Stories Applied.

    Technorati Tags: Agile,Scrum,Waterfall

    Read the article

  • MSCC: Career & IT Fair 2014

    Already a couple of weeks ago, I've been addressed by Ibraahim and Yunus to see whether it would be interesting to participate in the 1st Career & IT Fair organised by the UoM Computer Club. Well, luckily we met at the Global Windows Azure Bootcamp and I wasn't too sure whether it would be possible for me to attend after all. The main reason is given because of work demand and furthermore due to the fact that the Mauritius Software Craftsmanship Community currently has no advertising material at all. Here's the brief statement of the event: "The UOM Students' Computer Club in collaboration with the UOM Students' Union and UOM CSE Department is organising a 'Career & IT Fair' on the 23rd and 24th April 2014. This event has for objective to provide a platform to tertiary students, secondary students as well as vocational students, the opportunity to meet job recruiters." Luckily, I was reminded that the 23rd is a Wednesday, and therefore I decided that it might be interesting to move our weekly Code & Coffee session to the university and hence be able to attend the career fair. As it turned out it was a great choice and thankfully Pritvi, Nadim as well as Ishwon volunteered to be around at the "community booth". Thankfully, the computer club gave us - the MSCC and the LUGM - one of their spaces in the lobby area of the Paul Octave Wiéhé Auditorium. My impression about the event Very well and professionally organised. Seriously, the lads over at the UoM Computer Club did a great job in organising their 2 days event, and felt very comfortable at any time. Actually, it was kind of amusing to some of the members constantly running around and checking everything. Even though that the whole process went smooth and easy off the hand. There were a couple of interesting pieces of information and announcements during the opening ceremony. For example, the Computer Science faculty is a very young one and has been initiated back in 1988 only - just by 4 staff members at that time. Now, after 25 years they have achieved quite a lot and there are currently 1.000+ active students attending the numerous lectures and courses. But there is no room to rest on previous achievements, and I was kind of surprised to hear that there are plans to extend the campus, and offer new lectures in the fields of nanotechnology, big data handling, and - crossing fingers - the introduction and establishment of a space control centre. Mauritius is already part of the Square Kilometre Array (SKA) and hopefully there will be more activities into that direction in the near future. Community - Awareness and collaboration As stated earlier, I could only spent one morning but luckily other members of the MSCC and the LUGM stayed during the whole two days and provided answers to any interested person. As for me, I took the opportunity to get in touch with the other companies in the lobby. Mainly, to create some awareness about our IT communities but also to see whether there might be options for future engagement in common activities, too. So far, I was able to speak to representatives of the following companies: ACCA Mauritius Business at Work Infomil LinkByNet Microsoft Indian Ocean Islands & French Pacific Spherinity Training Institute Spoon Consulting Ltd. State Informatics Ltd. Unfortunately, I only had a quick chat with an HR representative of LinkByNet but I fully count on our MSCC members like Nitin or LUGM member Ronny to spread our intentions over there.  
So far, all of the representatives were really interested in our concepts and activities and I'm currently catching up with an introduction flyer for the MSCC that I'm going to send out to all those contacts via mail. It would be great to have more craftsmen as well as professional support on board. Some pictures from the event MSCC: Fantastic outlook for the near future. Announcements were made on Big data, nanotechnology, and space control centre in Mauritius. Interesting! MSCC: The lobby area was cramped with students. Great way to exchange and network. Good luck to all candidates! Passing the relay staff to... I recommend you to continue to read about the first Career & IT Fair on Ish's blog. He has a great summary and more details on those two days of IT activities than I have. Thanks and feel free to leave a comment (or two)... 

    Read the article

  • Sort Data in Windows Phone using Collection View Source

    - by psheriff
    When you write a Windows Phone application you will most likely consume data from a web service somewhere. If that service returns data to you in a sort order that you do not want, you have an easy alternative to sort the data without writing any C# or VB code. You use the built-in CollectionViewSource object in XAML to perform the sorting for you. This assumes that you can get the data into a collection that implements the IEnumerable or IList interfaces.For this example, I will be using a simple Product class with two properties, and a list of Product objects using the Generic List class. Try this out by creating a Product class as shown in the following code:public class Product {  public Product(int id, string name)   {    ProductId = id;    ProductName = name;  }  public int ProductId { get; set; }  public string ProductName { get; set; }}Create a collection class that initializes a property called DataCollection with some sample data as shown in the code below:public class Products : List<Product>{  public Products()  {    InitCollection();  }  public List<Product> DataCollection { get; set; }  List<Product> InitCollection()  {    DataCollection = new List<Product>();    DataCollection.Add(new Product(3,        "PDSA .NET Productivity Framework"));    DataCollection.Add(new Product(1,        "Haystack Code Generator for .NET"));    DataCollection.Add(new Product(2,        "Fundamentals of .NET eBook"));    return DataCollection;  }}Notice that the data added to the collection is not in any particular order. Create a Windows Phone page and add two XML namespaces to the Page.xmlns:scm="clr-namespace:System.ComponentModel;assembly=System.Windows"xmlns:local="clr-namespace:WPSortData"The 'local' namespace is an alias to the name of the project that you created (in this case WPSortData). The 'scm' namespace references the System.Windows.dll and is needed for the SortDescription class that you will use for sorting the data. Create a phone:PhoneApplicationPage.Resources section in your Windows Phone page that looks like the following:<phone:PhoneApplicationPage.Resources>  <local:Products x:Key="products" />  <CollectionViewSource x:Key="prodCollection"      Source="{Binding Source={StaticResource products},                       Path=DataCollection}">    <CollectionViewSource.SortDescriptions>      <scm:SortDescription PropertyName="ProductName"                           Direction="Ascending" />    </CollectionViewSource.SortDescriptions>  </CollectionViewSource></phone:PhoneApplicationPage.Resources>The first line of code in the resources section creates an instance of your Products class. The constructor of the Products class calls the InitCollection method which creates three Product objects and adds them to the DataCollection property of the Products class. Once the Products object is instantiated you now add a CollectionViewSource object in XAML using the Products object as the source of the data to this collection. A CollectionViewSource has a SortDescriptions collection that allows you to specify a set of SortDescription objects. Each object can set a PropertyName and a Direction property. As you see in the above code you set the PropertyName equal to the ProductName property of the Product object and tell it to sort in an Ascending direction.All you have to do now is to create a ListBox control and set its ItemsSource property to the CollectionViewSource object. 
The ListBox displays the data in sorted order by ProductName and you did not have to write any LINQ queries or write other code to sort the data!<ListBox    ItemsSource="{Binding Source={StaticResource prodCollection}}"   DisplayMemberPath="ProductName" />SummaryIn this blog post you learned that you can sort any data without having to change the source code of where the data comes from. Simply feed the data into a CollectionViewSource in XAML and set some sort descriptions in XAML and the rest is done for you! This comes in very handy when you are consuming data from a source where the data is given to you and you do not have control over the sorting.NOTE: You can download this article and many samples like the one shown in this blog entry at my website. http://www.pdsa.com/downloads. Select “Tips and Tricks”, then “Sort Data in Windows Phone using Collection View Source” from the drop down list.Good Luck with your Coding,Paul Sheriff** SPECIAL OFFER FOR MY BLOG READERS **We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!
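    If you ever do want the equivalent in code-behind rather than XAML, the same two classes can be used directly. This is a minimal sketch of my own (placed, say, in the page constructor after InitializeComponent); MainListBox is a hypothetical ListBox name, not one from the article:

        // Namespaces: System.ComponentModel (SortDescription, ListSortDirection),
        //             System.Windows.Data (CollectionViewSource).

        // Build a sorted view over the sample data and hand it to the ListBox.
        var viewSource = new CollectionViewSource { Source = new Products().DataCollection };
        viewSource.SortDescriptions.Add(
            new SortDescription("ProductName", ListSortDirection.Ascending));

        MainListBox.ItemsSource = viewSource.View;
        MainListBox.DisplayMemberPath = "ProductName";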

    Read the article

  • Must-see sessions at TCUK11

    - by Roger Hart
    Technical Communication UK is probably the best professional conference I've been to. Last year, I spoke there on content strategy, and this year I'll be co-hosting a workshop on embedded user assistance. Obviously, I'd love people to come along to that; but there are some other sessions I'd like to flag up for anybody thinking of attending.

    Tuesday 20th Sept - workshops
    This will be my first year at the pre-conference workshop day, and I'm massively glad that our workshop hasn't been scheduled alongside the one I'm really interested in. My picks:

    It looks like you're embedding user assistance. Would you like help?
    My colleague Dom and I are presenting this one. It's our paean to Clippy, to the brilliant idea he represented, and the crashing failure he was. Less precociously, we'll be teaching embedded user assistance, Red Gate style.

    Statistics without maths: acquiring, visualising and interpreting your data
    This doesn't need to do anything apart from what it says on the tin in order to be gold dust. But given the speakers, I suspect it will. A data-informed approach is a great asset to technical communications, so I'd recommend this session to anybody even faintly interested. The speakers here have a great track record of giving practical, accessible introductions to big topics. Go along.

    Wednesday 21st Sept - day one
    There's no real need to recommend the keynote for a conference, but I will just point out that this year it's Google's Patrick Hofmann. That's cool. You know what else is cool:

    Focus on the user, the rest follows
    An intro to modelling customer experience. This is a really exciting area for tech comms, and potentially touches on one of my personal hobby-horses: the convergence of technical communication and marketing. It's all part of delivering customer experience, and knowing what your users need lets you help them, sell to them, and delight them.

    Content strategy year 1: a tale from the trenches
    It's often been observed that content strategy is great at banging its own drum, but not so hot on compelling case studies. Here you go, folks. This is the presentation I'm most excited about so far.

    On a mission to communicate!
    Skype help their users communicate, but how do they communicate with them? I guess we'll find out.

    Then there's the stuff that I'm not too excited by, but you might just be. The standards geeks and agile freaks can get together in a presentation on the forthcoming ISO standards for agile authoring. Plus, there's a session on VBA for tech comms.

    I do have one gripe about day 1. The other big UK tech comms conference, UA Europe, have - I think - netted the more interesting presentation from Ellis Pratt. While I have no doubt that his TCUK case study on producing risk assessments will be useful, I'd far rather go to his talk on game theory for tech comms. Hopefully UA Europe will record it.

    Thursday 22nd Sept - day two
    Day two has a couple of slots yet to be confirmed. The rumour is that one of them will be the brilliant "Questions and rants" session from last year. I hope so. It's not ranting, but I'll be going to:

    RTFMobile: beyond stating the obvious
    Ultan O'Broin is an engaging speaker with a lot to say, and mobile is one of the most interesting and challenging new areas for tech comms. Even if this weren't a research-based presentation from a company with buckets of technology experience, I'd be going. It is, and you should too.

    Pattern recognition for technical communicators
    One of the best things about TCUK is the tendency to include sessions that tackle the theoretical and bring them towards the practical. Kai and Chris delivered cracking and well-received talks last year, and I'm looking forward to seeing what they've got for us on some of the conceptual underpinning of technical communication.

    Developing an interactive non-text learning programme
    Annoyingly, this clashes with Pattern Recognition, so I hope at least one of the streams is recorded again this year. The idea of communicating complex information without words is fascinating, and this sounds like a great example of this year's third stream: "anything but text".

    For the localization and DITA crowds, there's rich pickings on day two, though I'm not sure how many of those sessions I'm interested in. In the 13:00 - 13:40 slot, there's an interesting clash between Linda Urban on re-use and training content, and a piece on minimalism I'm sorely tempted by.

    That's my pick of #TCUK11. I'll be doing a round-up blog after the event, and probably talking a bit more about it beforehand. I'm also reliably assured that there are still plenty of tickets.

    Read the article

  • My First Weeks at Red Gate

    - by Jess Nickson
    Hi, my name’s Jess and early September 2012 I started working at Red Gate as a Software Engineer down in The Agency (the Publishing team). This was a bit of a shock, as I didn’t think this team would have any developers! I admit, I was a little worried when it was mentioned that my role was going to be different from normal dev. roles within the company. However, as luck would have it, I was placed within a team that was responsible for the development and maintenance of Simple-Talk and SQL Server Central (SSC). I felt rather unprepared for this role. I hadn’t used many of the technologies involved and of those that I had, I hadn’t looked at them for quite a while. I was, nevertheless, quite excited about this turn of events. As I had predicted, the role has been quite challenging so far. I expected that I would struggle to get my head round the large codebase already in place, having never used anything so much as a fraction of the size of this before. However, I was perhaps a bit naive when it came to how quickly things would move. I was required to start learning/remembering a number of different languages and technologies within time frames I would never have tried to set myself previously. Having said that, my first week was pretty easy. It was filled with meetings that were designed to get the new starters up to speed with the different departments, ideals and rules within the company. I also attended some lightning talks being presented by other employees, which were pretty useful. These occur once a fortnight and normally consist of around four speakers. In my spare time, we set up the Simple-Talk codebase on my computer and I started exploring it and worked on my first feature – redirecting requests for URLs that used incorrect casing! It was also during this time that I was given my first introduction to test-driven development (TDD) with Michael via a code kata. Although I had heard of the general ideas behind TDD, I had definitely never tried it before. Indeed, I hadn’t really done any automated testing of code before, either. The session was therefore very useful and gave me insights as to some of the coding practices used in my team. Although I now understand the importance of TDD, it still seems odd in my head and I’ve yet to master how to sensibly step up the functionality of the code a bit at a time. The second week was both easier and more difficult than the first. I was given a new project to work on, meaning I was no longer using the codebase already in place. My job was to take some designs, a WordPress theme, and some initial content and build a page that allowed users of the site to read provided resources and give feedback. This feedback could include their thoughts about the resource, the topics covered and the page design itself. Although it didn’t sound the most challenging of projects when compared to fixing bugs in our current codebase, it nevertheless provided a few sneaky problems that had me stumped. I really enjoyed working on this project as it allowed me to play around with HTML, CSS and JavaScript; all things that I like working with but rarely have a chance to use. I completed the aims for the project on time and was happy with the final outcome – though it still needs a good designer to take a look at it! I am now into my third week at Red Gate and I have temporarily been pulled off the website from week 2. I am again back to figuring out the Simple-Talk codebase. 
Monday provided me with the chance to learn a bunch of new things: system level testing, Selenium and Python. I was set the challenge of testing a bug fix dealing with the search bars in Simple-Talk. The exercise was pretty fun, although Mike did have to point me in the right direction when I started making the tests a bit too complex. The rest of the week looks set to be focussed on pair programming with Mike as we work together on a new feature. I look forward to the challenges that still face me and hope that I will be able to get up to speed quickly. *fingers crossed*

    Read the article

  • Getting App.config to be configuration specific in VS2010

    - by MarkPearl
    I recently wanted to have a console application that had configuration specific settings. For instance, if I had two configurations “Debug” and “Release”, depending on the currently selected configuration I wanted it to use a specific configuration file (either debug or config). If you are wanting to do something similar, here is a potential solution that worked for me. Setting up a demo app to illustrate the point First, let’s set up an application that will demonstrate the most basic concept. using System; using System.Configuration; namespace ConsoleSpecificConfiguration { class Program { static void Main(string[] args) { Console.WriteLine("Config"); Console.WriteLine(ConfigurationManager.AppSettings["Example Config"]); Console.ReadLine(); } } }   This does a really simple thing. Display a config when run. To do this, you also need a config file set up. My default looks as follows… <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="Example Config" value="Default"/> </appSettings> </configuration>   Your entire solution will look as follows… Running the project you will get the following amazing output…   Let’s now say instead of having one config file we want depending on whether we are running in “Debug” or “Release” for the solution configuration we want different config settings to be propagated across you can do the following… Step 1 – Create alternate config Files First add additional config files to your solution. You should have some form of naming convention for these config files, I have decided to follow a similar convention to the one used for web.config, so in my instance I am going to add a App.Debug.config and a App.Release.config file BUT you can follow any naming convention you want provided you wire up the rest of the approach to use this convention. My files look as follows.. App.Debug.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="Example Config" value="Debug"/> </appSettings> </configuration>   App.Release.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="Example Config" value="Release"/> </appSettings> </configuration>   Your solution will now look as follows… Step 2 – Create a bat file that will overwrite files The next step is to create a bat file that will overwrite one file with another. If you right click on the solution in the solution explorer there will be a menu option to add new items to the solution. Create a text file called “copyifnewer.bat” which will be our copy script. It’s contents should look as follows… @echo off echo Comparing two files: %1 with %2 if not exist %1 goto File1NotFound if not exist %2 goto File2NotFound fc %1 %2 /A if %ERRORLEVEL%==0 GOTO NoCopy echo Files are not the same. Copying %1 over %2 copy %1 %2 /y & goto END :NoCopy echo Files are the same. Did nothing goto END :File1NotFound echo %1 not found. goto END :File2NotFound copy %1 %2 /y goto END :END echo Done. Your solution should now look as follows…   Step 3 – Customize the Post Build event command line We now need to wire up everything – which we will do using the post build event command line in VS2010. Right click on your project and go to it’s properties We are now going to wire up the script so that when we build our project it will overwrite the default App.config with whatever file we want. 
The syntax goes as follows… call "$(SolutionDir)copyifnewer.bat" "$(ProjectDir)App.$(ConfigurationName).config" "$(ProjectDir)$(OutDir)\$(TargetFileName).config" Testing it If I now change my project configuration to Release   And then run my application I get the following output… Toggling between Release and Debug mode will show that the config file is changing each time. And that is it!
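    To see what the post-build command above actually does, it can help to expand the Visual Studio macros by hand. The paths below are purely hypothetical (a solution at C:\Dev\DemoSolution), but for a Debug build of this ConsoleSpecificConfiguration project the command expands to roughly:

        call "C:\Dev\DemoSolution\copyifnewer.bat" "C:\Dev\DemoSolution\ConsoleSpecificConfiguration\App.Debug.config" "C:\Dev\DemoSolution\ConsoleSpecificConfiguration\bin\Debug\ConsoleSpecificConfiguration.exe.config"

    In other words, copyifnewer.bat compares the configuration-specific file against the config that the build copied to the output folder and overwrites it only when they differ; switching the configuration to Release simply swaps App.Debug.config for App.Release.config in the first argument.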

    Read the article

  • Ok it has been pointed out to me

    - by Ratman21
    That it seems my blog is more of a poor me, pity me, or I-deserve-a-job blog. Hmmm, I won't say I have not whined here, as I have used this blog to vent my frustration on the whole out-of-work thing (lack of money, self worth, family issues and the never-ending bills coming my way), but it was also me trying to reach out to others in the same boat, as well as advertising: hey, I am out here, employers.

    It was also said that I don’t have anything listed here about me, like a cover letter or resume. Well, there is, but it was so many months and posts ago. Also, what I had posted is not current. So here is my most current cover letter and resume.

    Scott L Newman
    45219 Dutton Way
    Callahan, Fl. 32011

    To Whom It May Concern:

    I am really interested in the IT vacancy that you have listed for your company. Maybe I don’t have all the qualifications you want (hold on, don’t hit delete yet) yet! But maybe I do, as I have over 20+ years experience in “IT” RIGHT NOW. Read the rest of my cover letter and my resume. You will see what my “IT” skills are, and it will show that I can do this work! I can bring to your company, along with my can-do attitude, a broad range of skills, including:

    - Certified CompTIA A+, Security+ and Network+ Technician
    - 2.5 years (NOC) Network experience on large Cisco-based WAN – UK to Austria
    - 20 years experience MIS/DP – Yes, I can do IBM mainframes and Tandem non-stops too
    - 18 years experience as technical Help Desk support – panicking users, no problem
    - 18 years experience with PC/Server based systems, intranet and internet systems
    - 10+ years experience on: Microsoft Office, Windows XP and Data Network Fundamentals (YES, I do windows)
    - Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
    - Very experienced in working with customers on problems – again, panicking users, no problem
    - Working experience with Remote Access (VPN/SecurID) – I didn’t just study them, I worked on/with them
    - Skilled in getting info for and creating documentation for Operation procedures (I don’t just wait for them to give it to me, I go out and get it. Waiting for info on working applications is, well, dumb)
    - Multiple software languages (Hey, I have done some programming)
    - And much more experience in “IT” (Mortgage, stocks and financial information systems experience, and have worked “IT” in a hospital)
    - Can multitask, and have the ability to adapt to change and learn quickly (once was put in charge of a system that I had not worked with for over two years. Talk about having to relearn and adapt to changes, but I did it.)

    I would welcome the opportunity to further discuss this position with you. If you have questions or would like to schedule an interview, please contact me by phone at 904-879-4880 or on my cell 352-356-0945 or by e-mail at [email protected] or leave a message on my web site (http://beingscottnewman.webs.com/). I have enclosed/attached my resume for your review and I look forward to hearing from you. Thank you for taking a moment to consider my cover letter and resume. I appreciate how busy you are.

    Sincerely,
    Scott L. Newman

    Scott L. Newman | 45219 Dutton Way, Callahan, FL 32011 | H (904) 879-4880 | C (352) 356-0945 |
    [email protected] | Web - http://beingscottnewman.webs.com/

    OBJECTIVE
    To obtain a Network Operation or Helpdesk position.

    PROFILE
    Information Technology Professional with 20+ years of experience. Volunteer website creator and back-up sound technician at True Faith Christian Fellowship. CompTIA A+, Network+ and Security+ Certified.

    TECHNICAL AND PROFESSIONAL SKILLS
    Technical Support, Frame Relay, Microsoft Office Suite, Inventory Management, ISDN, Windows NT/98/XP, Client/Vendor Relations, CICS, Cisco Routers/Switches, Networking/Administration, RPG, Helpdesk, Website Design/Dev./Management, Assembler, Visio, Programming, COBOL IV

    EDUCATION
    New Horizons Computer Learning Center, Jacksonville, Florida – CompTIA A+, Security+ and Network+ Certified. Currently working on CCNA Certification
    Mott Community College, Flint, Michigan – Associates Degree - Data Processing and General Education
    Currently studying Japanese

    PROFESSIONAL
    True Faith Christian Fellowship Church – Callahan, FL, October 2009 – Present
    Web site Tech
    - Web site Creator/tech, back up song leader and back up sound technician. Note: church web site is http://ambassadorsforjesuschrist.webs.com/

    U.S. Census (temp employee) – Feb. 23 to March 8, 2010
    - Enumerator for Nassau County

    Thomas Creek Baptist Church – Callahan, FL, June 2008 – September 2009
    Church sound and video technician
    - Sound and video technician

    Fidelity National Information Services – Jacksonville, FL – February 01, 2005 to October 28, 2008
    Client Server Dev/Analyst I
    - Monitored Multiple Debit Card sites, Check Authorization customers and the Card Auth system (AuthNet) for problems with the sites, connections, servers (on our LAN) and/or applications
    - Night (NOC) Network operator for a large Wide Area Network (WAN)
    - Monitored Multiple Check Authorization customers for problems with circuits, routers and applications
    - Resolved circuit and/or router issues or assist circuit carrier in resolving issue
    - Resolved application problems or assist application support in resolution
    - Liaison between customer and application support
    - Maintained and updated the NetOps Operation procedures Guide
    - Kept the listing of equipment on the raised floor updated
    - Involved in the training of all Night Check and Card server operation operators
    - FNIS acquired Certegy in 2005. Was one of 3 kept on.

    Certegy – St. Pete, FL – August 31, 2003 to February 1, 2005
    Senior NetOps Operator (FNIS acquired Certegy in 2005; all of above jobs/skills were same as listed in FNIS)
    - Converting Documentation to Adobe format
    - Sole trainer of day/night shift System Management Center operators (SMC)
    - Equifax spun off Card/Check Dept. as Certegy. Certegy terminated contract with EDS. One of six in the whole IT dept that was kept on.

    EDS (Certegy Account) – St. Pete, FL – July 1, 1999 to August 31, 2003
    Senior NetOps Operator
    - Equifax outsourced the NetOps dept. to EDS in 1999.
    - Same job skills as listed above for FNIS.

    Equifax – St. Pete & Tampa, FL – January 1, 1991 to July 1, 1999
    NetOps/Tandem Operator
    - All of the above for FNIS, except for circuit and router issues
    - Operated, monitored and trouble shot Tandem mainframe and servers on LAN
    - Supported in the operation of the Print, Tape and Microfiche rooms
    - Equifax acquired TelaCredit in 1991.

    TelaCredit – Tampa, FL – June 28, 1989 to January 1, 1991
    Tandem Operator
    - Operated and monitored Tandem Non-stop systems for Card and Check Auths
    - Operated multiple high-speed Laser printers and Microfiche printers
    - Mounted, filed and maintained 18 reel-to-reel mainframe tape drives, cartridges tape drives and tape library.

    Read the article

  • Unexpected behaviour with glFramebufferTexture1D

    - by Roshan
    I am using render to texture concept with glFramebufferTexture1D. I am drawing a cube on non-default FBO with all the vertices as -1,1 (maximum) in X Y Z direction. Now i am setting viewport to X while rendering on non default FBO. My background is blue with white color of cube. For default FBO, i have created 1-D texture and attached this texture to above FBO with color attachment. I am setting width of texture equal to width*height of above FBO view-port. Now, when i render this texture to on another cube, i can see continuous white color on start or end of each face of the cube. That means part of the face is white and rest is blue. I am not sure whether this behavior is correct or not. I expect all the texels should be white as i am using -1 and 1 coordinates for cube rendered on non-default FBO. code: #define WIDTH 3 #define HEIGHT 3 GLfloat vertices8[]={ 1.0f,1.0f,1.0f, -1.0f,1.0f,1.0f, -1.0f,-1.0f,1.0f, 1.0f,-1.0f,1.0f,//face 1 1.0f,-1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 2 1.0f,1.0f,1.0f, 1.0f,-1.0f,1.0f, 1.0f,-1.0f,-1.0f, 1.0f,1.0f,-1.0f,//face 3 -1.0f,1.0f,1.0f, -1.0f,1.0f,-1.0f, -1.0f,-1.0f,-1.0f, -1.0f,-1.0f,1.0f,//face 4 1.0f,1.0f,1.0f, 1.0f,1.0f,-1.0f, -1.0f,1.0f,-1.0f, -1.0f,1.0f,1.0f,//face 5 -1.0f,-1.0f,1.0f, -1.0f,-1.0f,-1.0f, 1.0f,-1.0f,-1.0f, 1.0f,-1.0f,1.0f//face 6 }; GLfloat vertices[]= { 0.5f,0.5f,0.5f, -0.5f,0.5f,0.5f, -0.5f,-0.5f,0.5f, 0.5f,-0.5f,0.5f,//face 1 0.5f,-0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 2 0.5f,0.5f,0.5f, 0.5f,-0.5f,0.5f, 0.5f,-0.5f,-0.5f, 0.5f,0.5f,-0.5f,//face 3 -0.5f,0.5f,0.5f, -0.5f,0.5f,-0.5f, -0.5f,-0.5f,-0.5f, -0.5f,-0.5f,0.5f,//face 4 0.5f,0.5f,0.5f, 0.5f,0.5f,-0.5f, -0.5f,0.5f,-0.5f, -0.5f,0.5f,0.5f,//face 5 -0.5f,-0.5f,0.5f, -0.5f,-0.5f,-0.5f, 0.5f,-0.5f,-0.5f, 0.5f,-0.5f,0.5f//face 6 }; GLuint indices[] = { 0, 2, 1, 0, 3, 2, 4, 5, 6, 4, 6, 7, 8, 9, 10, 8, 10, 11, 12, 15, 14, 12, 14, 13, 16, 17, 18, 16, 18, 19, 20, 23, 22, 20, 22, 21 }; GLfloat texcoord[] = { 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0 }; glGenTextures(1, &id1); glBindTexture(GL_TEXTURE_1D, id1); glGenFramebuffers(1, &Fboid); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameterf(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, WIDTH*HEIGHT , 0, GL_RGBA, GL_UNSIGNED_BYTE,0); glBindFramebuffer(GL_FRAMEBUFFER, Fboid); glFramebufferTexture1D(GL_DRAW_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,GL_TEXTURE_1D,id1,0); draw_cube(); glBindFramebuffer(GL_FRAMEBUFFER, 0); draw(); } draw_cube() { glViewport(0, 0, WIDTH, HEIGHT); glClearColor(0.0f, 0.0f, 0.5f, 1.0f); glClear(GL_COLOR_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(temp.psId,"position")); glVertexAttribPointer(glGetAttribLocation(temp.psId,"position"), 3, GL_FLOAT, GL_FALSE, 0,vertices8); glDrawArrays (GL_TRIANGLE_FAN, 0, 24); } draw() { glClearColor(1.0f, 0.0f, 0.0f, 1.0f); glClearDepth(1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"tk_position")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"tk_position"), 3, GL_FLOAT, GL_FALSE, 0,vertices); nResult = GL_ERROR_CHECK((GL_NO_ERROR, "glVertexAttribPointer(position, 3, GL_FLOAT, 
GL_FALSE, 0,vertices);")); glEnableVertexAttribArray(glGetAttribLocation(shader_data.psId,"inputtexcoord")); glVertexAttribPointer(glGetAttribLocation(shader_data.psId,"inputtexcoord"), 2, GL_FLOAT, GL_FALSE, 0,texcoord); glBindTexture(*target11, id1); glDrawElements ( GL_TRIANGLES, 36,GL_UNSIGNED_INT, indices ); when i change WIDTH=HEIGHT=2, and call a glreadpixels with height, width equal to 4 in draw_cube() i can see first 2 pixels with white color, next two with blue(glclearcolor), next two white and then blue and so on.. Now when i change width parameter in glTeximage1D to 16 then ideally i should see alternate patches of white and blue right? But its not the case here. why so?

    Read the article

  • More Quick Interview Tips

    - by Ajarn Mark Caldwell
    In the last couple of years I have conducted a lot of interviews for application and database developers for my company, and I can tell you that the little things can mean a lot.  Here are a few quick tips to help you make a good first impression. A year ago I gave you my #1 interview tip: Do some basic research!  And a year later, I am still stunned by how few technical people do the most basic of research.  I can only guess that it is because it is so engrained in our psyche that technical competence is everything (see How to Manage Technical Employees for more on this idea) that we forget or ignore the importance of soft skills and the art of the interview.  Or maybe it is because we have heard the stories of the uber-geek who has zero personal skills but still makes a fortune working for Microsoft.  Well, here’s another quick tip:  You’re probably not as good as he is; and a large number of companies actually run small to medium sized teams and can’t really afford to have the social outcast in the group.  In a small team, everyone has to get along well, and that’s an important part of what I’m evaluating during the interview process. My #2 tip is to act alive!  I typically conduct screening interviews by phone before I bring someone in for an in-person.  I don’t care how laid-back you are or if you have a “quiet personality”, when we are talking, ACT like you are happy I called and you are interested in getting the job.  If you sound like you are bored-to-death and that you would be perfectly happy to never work again, I am perfectly happy to help you attain that goal, and I’ll move on to the next candidate. And closely related to #2, perhaps we’ll call it #2.1 is this tip:  When I call you on the phone for the interview, don’t answer your phone by just saying, “Hello”.  You know that the odds are about 999-to-1 that it is me calling for the interview because we have specifically arranged this time slot for the call.  And you can see on the caller ID that it is not one of your buddies calling, so identify yourself.  Don’t make me question whether I dialed the right number.  Answer your phone with a, “Hello, this is ___<your full name preferred, but at least your first name>___.”.  And when I say, “Hi, <your name>, this is Mark from <my company>” it would be really nice to hear you say, “Hi, Mark, I have been expecting your call.”  This sets the perfect tone for our conversation.  I know I have the right person; you are professional enough and interested enough in the job or contract to remember your appointments; and now we can move on to a little intro segment and get on with the reason for our call. As crazy as it sounds, I’ve actually had phone interviews that went like this: <Ring…> You:  “Hello?” Me:  “Hi, this is Mark from _______” You:  “Yeah?” Me:  “Is this <your name>?” You:  “Yeah.” Me:  “I had this time in my calendar for us to talk…were you expecting my call?” You:  “Oh, yeah, sure…” I used to be nice and would try to go ahead with the interview even after this bad start, thinking I was giving the candidate the benefit of the doubt…a second chance…but more often than not it was a struggle and 10 minutes into what was supposed to be a 45-minute call, I’m looking for a way to hang up without being rude myself.  It never worked out.  I never brought that person in for an in-person interview, much less offered them the job or contract.  Who knows, maybe they were some sort of wunderkind that we missed out on.  
What I know is that they would never fit in with the rest of the team, and around here that is absolutely critical. So, in conclusion… Act alive!  Identify yourself!  And do at least the very basic of research.

    Read the article

  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for it's success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-side availability" and a constraint might be "with less than 80ms response time and full HA" or something similar. Then I can choose from the best fit of technologies which range from full-up on-premises computing to IaaS, PaaS or SaaS. I also deal in abstraction layers - on-premises systems are fully under your control, in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on), in PaaS the hardware and the OS is abstracted and you focus on code and data only, and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it. When you think about solutions this way, the architecture moves to the primary factor in your decision. It's problem-first architecting, and then laying in whatever technology or vendor best fixes the problem. To that end, most architects design a solution using a graphical tool (I use Visio) and then creating documents that  let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS - you merely point out what the needs are, research the vendor and present the findings (and bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated - you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on. IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, run-times, scale-patterns and tools and much more that you have to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers - we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another. Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document out your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers all from the same script. In other words, it doesn't contain the images or anything like that - it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish) and then the document drives the distribution and implementation of that intent. As time goes on and more and more companies implement solutions on various providers (perhaps for HA and DR) then this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in. 
Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials: Sign Up for Windows Azure; Add Windows Azure to a RightScale Account; Windows Azure Virtual Machines 3-tier Deployment.
Momma Bear Level - Just the right level... ;0): Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution. Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (work in progress).
Baby Bear Level - Marketing: Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos. Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better. SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines.

    Read the article

  • Skype support and dead parrot sketches

    - by Greg Low
    We as an industry have a lot to answer for. One thing is the level of support provided for users. Here's the conversation I'm having with Skype right now. It feels like being in a Monty Python skit. I was really hoping that when Microsoft purchased Skype that things might improve.
8:33:44 PM greglow: Initial Question/Comment: Subscriptions (Unlimited Country, Unlimited Europe, Unlimited Region, Unlimited World)
8:33:44 PM System: Thank you for contacting Skype Customer Support!
8:33:49 PM System: Donna Faye has joined this session!
8:33:50 PM System: Connected with Donna Faye
8:33:50 PM System: Thank you for contacting Skype Customer Support!
8:33:54 PM System: Please hold for the next available Live Support Agent.
8:33:59 PM Donna Faye: Hello! Welcome to Skype Live Support! My name is Donna L. How may I help you?
8:34:09 PM greglow: I've got a subscription (skypename greglow)
8:34:21 PM greglow: It's set to auto-renew
8:34:45 PM greglow: Why have I been getting messages about my Skype number expiring, if it's set to renew automatically
8:34:54 PM greglow: and has recently renewed automatically
8:35:09 PM Donna Faye: Thank you for the information.
8:35:29 PM Donna Faye: To better assist you may I know your Skype name, please?
8:35:37 PM greglow: greglow
8:35:44 PM Donna Faye: Thank you.
8:36:14 PM Donna Faye: I understand your concern that you receive an email notification stating that your calling subscription will about to expire. Am I right?
8:36:43 PM greglow: No, I got an email telling me that one of my Skype numbers was going to expire
8:37:08 PM greglow: When I went to the website, it said it was part of the subscription and the subscription was set to auto-renew
8:37:18 PM greglow: The subscription auto-renewed on Oct XXth
8:37:24 PM greglow: But the number still expired
8:37:43 PM greglow: When I logged on today, it said that the number had expired and that I had 52 days left to reactivate it
8:37:58 PM greglow: I did that but am trying to understand why that happens at all
8:38:36 PM greglow: Why do you expire numbers that are part of auto-renewing subscriptions?
8:39:49 PM Donna Faye: Upon checking your account, your calling subscription link to three Online Numbers.
8:39:55 PM greglow: correct
8:40:10 PM greglow: but until an hour or so ago, one of them had expired. why?
8:40:54 PM Donna Faye: Let me explain to you that now Online Number are detach to a calling subscription unlike before it was attach to the calling subscription so therefore each Online Number will recur individually.
8:41:29 PM Donna Faye: Upon checking your account it shows that all your Online Number will expire on October XX, 2013
8:41:44 PM Donna Faye: If you don't want to be charged again for you can cancel it anytime.
8:41:59 PM greglow: I have set it to charge automatically
8:42:06 PM greglow: so why do the numbers expire?
8:42:32 PM Donna Faye: No they are not expired they are all active.
8:42:33 PM greglow: if there is an automatic payment set up, why does anything expire?
8:42:52 PM greglow: yes, but one of them was expired, and I only fixed it a few hours ago
8:43:07 PM greglow: I'm trying to understand why it expired
8:44:54 PM Donna Faye: The email you got just notify you that you should have good enough funding source of your calling subscription and Online Number in order for this to recur.
8:45:12 PM greglow: Agreed. I did that but the number still expired. Why?
8:45:38 PM Donna Faye: However do not worry on this hence all Online Number are active and not expired.
8:45:41 PM greglow: When I logged on today it said that the number had expired and that I had 52 days left to reactivate it. Why did that happen?
8:46:14 PM Donna Faye: May I know the Online Number you are pertaining, please?
greglow: The number that had expired was (07) XXXX XXXX (Australia)
8:49:19 PM Donna Faye: Okay, do not worry on this Online Number because this will expire on October XX, 2013.
8:49:38 PM Donna Faye: Thank you for all the information.
8:49:46 PM greglow: I understand that it won't expire now until next year. I'm trying to understand why it stopped working this time
8:49:51 PM greglow: so I can avoid that next time
8:50:39 PM Donna Faye: What do yo stop working, you mean that Online Number is unable to be reach out?
8:50:59 PM Donna Faye: *what do you mean by stop working
8:51:18 PM greglow: The Skype number +61 7 XXXX XXXX stopped working because it expired
8:51:32 PM Donna Faye: No, this into expired.
8:51:34 PM greglow: I'm trying to understand why it expired
8:51:50 PM greglow: Sorry I don't follow "this into expired"
8:52:39 PM Donna Faye: Let me inform you that this Online Number is not expired.
8:52:54 PM Donna Faye: Please disregard the notification you see on your Skype account.
8:53:02 PM greglow: Sorry, this is getting very frustrating. I already told you I fixed it today. I want to know why it happened.
8:53:20 PM Donna Faye: Hence I can rest assured you that this is active until October XX, 2013.
8:53:21 PM greglow: So I can avoid it happening again, on this account, or on our other accounts
8:53:34 PM greglow: Can I speak to someone who really understands this please?
Donna Faye: Skype send a notification to Skype users for their Skype product in order for them to be aware the expiration of your product however your Online Number still active on your account hence when your calling subscription Unlimited World recur your Online Number also recur.
8:57:59 PM Donna Faye: I do apologize for this matter, please allow me to resolve this issue for you.
9:02:48 PM greglow: My question is still the same: if it's set to auto-pay, why did it expire?
9:03:50 PM greglow: Why did it stop working? Why did I have to "reactivate" it?
9:04:08 PM greglow: Why didn't it just keep working if it's set to auto-pay?
9:04:30 PM greglow: This isn't a difficult concept
9:04:34 PM Donna Faye: Let me clarify to you that this is not the really expired what is emphasizing on this is when you don't have good enough funding source when you calling subscription recur the Online Number will possibly expired but seem that you have a good funding source then this will continue until 2013.
9:05:10 PM greglow: When I logged on today, it was not working, and your website said it had expired and that I had 52 days left to reactivate it
9:05:17 PM greglow: Why?
and so on, and so on, and so on....

    Read the article

  • Override an IOCTL Handler in PQOAL

    - by Kate Moss' Big Fan
    When porting or creating a BSP for a new platform, we often need to make changes to OEMIoControl or, more specifically, to a HAL IOCTL handler. Since Microsoft introduced PQOAL in CE 5.0, and more and more BSPs today leverage PQOAL to simplify the OAL, we no longer define OEMIoControl directly. It is somewhat analogous to migrating from the pure Windows SDK to MFC; people start defining MFC handlers and forget about WinMain and the big message loop. If you ever take a look at the interface between the OAL and the kernel, PUBLIC\COMMON\OAK\INC\oemglobal.h, pfnOEMIoctl is still there, just as WinMain has been the entry point of a Windows program since day one. (For those who may argue that pfnOEMIoctl is not OEMIoControl, I encourage you to dig into PRIVATE\WINCEOS\COREOS\NK\OEMMAIN\oemglobal.c, which initializes pfnOEMIoctl to OEMIoControl. The interface exists just to split the OAL and the kernel, which are no longer linked into one executable file in CE 6; the function signature is still identical.) So let's trace into PQOAL to see how it implements OEMIoControl and how we can override an IOCTL handler we are interested in.
First thing to know is the entry point (just like finding WinMain in MFC): OEMIoControl is defined in PLATFORM\COMMON\SRC\COMMON\IOCTL\ioctl.c. Basically, it does nothing special but scan a pre-defined IOCTL table, g_oalIoCtlTable, and then execute the handler (the highlighted part). Everything else is just error handling and the use of a critical section to serialize the function.

BOOL OEMIoControl(
    DWORD code, VOID *pInBuffer, DWORD inSize, VOID *pOutBuffer, DWORD outSize,
    DWORD *pOutSize
) {
    BOOL rc = FALSE;
    UINT32 i;
...
    // Search the IOCTL table for the requested code.
    for (i = 0; g_oalIoCtlTable[i].pfnHandler != NULL; i++) {
        if (g_oalIoCtlTable[i].code == code) break;
    }

    // Indicate unsupported code
    if (g_oalIoCtlTable[i].pfnHandler == NULL) {
        NKSetLastError(ERROR_NOT_SUPPORTED);
        OALMSG(OAL_IOCTL, (
            L"OEMIoControl: Unsupported Code 0x%x - device 0x%04x func %d\r\n",
            code, code >> 16, (code >> 2)&0x0FFF
        ));
        goto cleanUp;
    }

    // Take critical section if required (after postinit & no flag)
    if (
        g_ioctlState.postInit &&
        (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0
    ) {
        // Take critical section
        EnterCriticalSection(&g_ioctlState.cs);
    }

    // Execute the handler
    rc = g_oalIoCtlTable[i].pfnHandler(
        code, pInBuffer, inSize, pOutBuffer, outSize, pOutSize
    );

    // Release critical section if it was taken above
    if (
        g_ioctlState.postInit &&
        (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0
    ) {
        // Release critical section
        LeaveCriticalSection(&g_ioctlState.cs);
    }

cleanUp:
    OALMSG(OAL_IOCTL&&OAL_FUNC, (L"-OEMIoControl(rc = %d)\r\n", rc ));
    return rc;
}

Where is g_oalIoCtlTable? It is defined in your BSP. Let's use the DeviceEmulator BSP as an example. PLATFORM\DEVICEEMULATOR\SRC\OAL\OALLIB\ioctl.c defines the table as

const OAL_IOCTL_HANDLER g_oalIoCtlTable[] = {
#include "ioctl_tab.h"
};

And that leads to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, which defines some of the IOCTL handlers; others are defined in oal_ioctl_tab.h under PLATFORM\COMMON\SRC\INC\. Finally, we have the full table body! (Just like tracing MFC, always jumping back and forth.)
The format of the table is very straightforward: IOCTL code, flags and handler function.

// IOCTL CODE,                          Flags   Handler Function
//------------------------------------------------------------------------------
{ IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlHalInitRegistry     },
{ IOCTL_HAL_INIT_RTC,                       0,  OALIoCtlHalInitRTC          },
{ IOCTL_HAL_REBOOT,                         0,  OALIoCtlHalReboot           },

PQOAL scans through the table until it finds a matching IOCTL code, then invokes the handler function. Since it scans the table from the top, if we define TWO handlers with the same IOCTL code, the first one is always invoked, with no exception. Now back to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, with the following table:

{ IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlDeviceEmulatorHalInitRegistry     },
...
#include <oal_ioctl_tab.h>

Note that the IOCTL_HAL_INITREGISTRY handler is defined in both the BSP's local ioctl_tab.h and the common oal_ioctl_tab.h, but because the BSP's local handler comes before "#include <oal_ioctl_tab.h>", we know OALIoCtlDeviceEmulatorHalInitRegistry always gets called. In this example, the DeviceEmulator BSP overrides the IOCTL_HAL_INITREGISTRY handler from OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry by manipulating the g_oalIoCtlTable table. (From some point of view, it is similar to a message map in MFC.)
Please be aware that when you override an IOCTL handler in PQOAL, you may want to clone the original implementation to your BSP and change it to meet your needs. This is recommended and saves you redundant work, but remember to rename the handler function (just as the DeviceEmulator changes the name of OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry). If you don't change the name, the linker may not be happy (due to the name conflict), and, more importantly, by using a different handler name you can always redirect the handler back to the original one. (It is like the OOP concept of calling a function in the base class; still not clear? I am going to show you soon!) OALIoCtlDeviceEmulatorHalInitRegistry sets up DeviceEmulator-specific registry settings and, in the end, if everything goes well, calls OALIoCtlHalInitRegistry (PLATFORM\COMMON\SRC\COMMON\IOCTL\reginit.c) to do the rest.

    if(fOk) {
        fOk = OALIoCtlHalInitRegistry(code, pInpBuffer, inpSize, pOutBuffer,
            outSize, pOutSize);
    }

Now you've got the picture. Whenever you want to override an IOCTL handler that is implemented in PQOAL, just clone the handler function to your BSP as a template, make a simple name change for the handler function along with a name change in the IOCTL table header file that maps the IOCTL to the function, then implement your IOCTL handler; whenever you need to redirect back, just call the original handler function. This is the standard way of implementing a custom IOCTL and the one most Microsoft developers prefer. The mapping of IOCTL routine to IOCTL code is platform specific - you control the header file that does that mapping.

    Read the article

  • SQLAuthority News – A Conversation with an Old Friend – Sri Sridharan

    - by pinaldave
    Sri Sridharan is my old friend and we often talk on GTalk. The subject varies from Life in India/USA, movies, musics, and of course SQL. We have our differences when we talk about food or movie but we always agree when we talk about SQL. Yesterday while chatting with him we talked about SQLPASS and the conversation lasted for a long time. Here is the conversation between us on GTalk. I have removed a few of the personal talks and formatted into paragraphs as GTalk often shows stuff out of formatting. Pinal: Sri, Congrats on running for the PASS BoD again. You were so close last year. What made you decide to run again this year? Sri: Thank you Pinal for your leadership in the PASS India Community and all the things you do out there. After coming so close last year, there was no doubt in my mind that I will run again. I was truly humbled by the support I got from the community. Growing up in India for over 25 years, you are brought up in a very competitive part of the world. Right from the pressure of staying in the top of the class from kindergarten to your graduation, the relentless push from your parents about studying and getting good grades (and nothing else matters), you land up essentially living in a pressure cooker. To survive that relentless pressure, you need to have a thick skin, ability to stand up for who you really are , what you want to accomplish and in the process stay true those values. I am striving for a greater cause, to make PASS an organization that can help people with their SQL Server careers, to make PASS relevant to its chapter members, to make PASS an organization that every SQL professional in the world wants to be connected with. Just because I did not get elected or appointed last year does not mean that these causes are not worth fighting. Giving up upon failing the first time is simply not in me. If I did that, what message would I send to those who voted for me? What message would I send to my kids? Pinal: As someone who has such strong roots in India, what can the Indian PASS Community expect from you? Sri: First of all, I think fostering a regional leadership is something PASS must encourage as part of its global growth plan. For PASS global being able to understand all the issues in a region of the world and make sound decisions will be a tough thing to do on a continuous basis. I expect people like you, chapter leaders, regional mentors, MVPs of the region start playing a bigger role in shaping the next generation of PASS. That is something I said in my campaign and I still stand by it. I would like to see growth in the number of chapters in India. The current count does not truly represent the full potential of that region. I was pretty thrilled to see the Bangalore SQLSaturday happen early this year. I would like to see more of SQLSaturday events, at least in the major metro cities. I know the issues in India are very different from the rest of the world. So the formula needs to be tweaked a little for it to work better in India. Once the SQLSaturday model is vetted out, maybe there could be enough justification to have SQLRally India. PASS needs to have a premier SQL event in that region. Going to USA or Europe for that matter is incredibly hard due to VISA issues etc. So this could be a case of where PASS comes closer to where the community is. Pinal: What portfolio would take on if you are elected to the PASS Board? Sri: There are some very strong folks on the PASS Board today. 
The President discusses the portfolios with the group and makes the final call on the portfolios. I am also a fan of having a team associated with the portfolios. In that case, one person is the primary for a portfolio but secondary on a couple of other portfolios. This way people on the board have a direct vested interest in a few portfolios. Personally, I know I would these portfolios good justice – Chapters, Global Growth and Events (SQLSat, SQLRally). I would try to see if we can get a director to focus on Volunteers.  To me that is very critical for growth in the international regions. Pinal: This is an interesting conversation with you Sri. I know you so long time but this is indeed inspiring to many. India is a big country and we appreciate your thoughts. Sri: Thank you very much for taking time to chat with me today. Cheers. There are pretty strong candidates for SQLPASS Board of Elections this year. I know all of them in person and honestly it is going to be extremely difficult to not to vote for anybody. I am indeed in a crunch right now how to pick one over another. Though the choice is difficult, I encourage you to vote for them. I am extremely confident that the new board of directors will all have the same goal – Better SQL Server Community. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, DBA, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 2

    - by Simon Cooper
    Before we look any further at the CLR metadata, we need a quick diversion to understand how the metadata is actually stored. Encoding table information As an example, we'll have a look at a row in the TypeDef table. According to the spec, each TypeDef consists of the following: Flags specifying various properties of the class, including visibility. The name of the type. The namespace of the type. What type this type extends. The field list of this type. The method list of this type. How is all this data actually represented? Offset & RID encoding Most assemblies don't need to use a 4 byte value to specify heap offsets and RIDs everywhere, however we can't hard-code every offset and RID to be 2 bytes long as there could conceivably be more than 65535 items in a heap or more than 65535 fields or types defined in an assembly. So heap offsets and RIDs are only represented in the full 4 bytes if it is required; in the header information at the top of the #~ stream are 3 bits indicating if the #Strings, #GUID, or #Blob heaps use 2 or 4 bytes (the #US stream is not accessed from metadata), and the rowcount of each table. If the rowcount for a particular table is greater than 65535 then all RIDs referencing that table throughout the metadata use 4 bytes, else only 2 bytes are used. Coded tokens Not every field in a table row references a single predefined table. For example, in the TypeDef extends field, a type can extend another TypeDef (a type in the same assembly), a TypeRef (a type in a different assembly), or a TypeSpec (an instantiation of a generic type). A token would have to be used to let us specify the table along with the RID. Tokens are always 4 bytes long; again, this is rather wasteful of space. Cutting the RID down to 2 bytes would make each token 3 bytes long, which isn't really an optimum size for computers to read from memory or disk. However, every use of a token in the metadata tables can only point to a limited subset of the metadata tables. For the extends field, we only need to be able to specify one of 3 tables, which we can do using 2 bits: 0x0: TypeDef 0x1: TypeRef 0x2: TypeSpec We could therefore compress the 4-byte token that would otherwise be needed into a coded token of type TypeDefOrRef. For each type of coded token, the least significant bits encode the table the token points to, and the rest of the bits encode the RID within that table. We can work out whether each type of coded token needs 2 or 4 bytes to represent it by working out whether the maximum RID of every table that the coded token type can point to will fit in the space available. The space available for the RID depends on the type of coded token; a TypeOrMethodDef coded token only needs 1 bit to specify the table, leaving 15 bits available for the RID before a 4-byte representation is needed, whereas a HasCustomAttribute coded token can point to one of 18 different tables, and so needs 5 bits to specify the table, only leaving 11 bits for the RID before 4 bytes are needed to represent that coded token type. For example, a 2-byte TypeDefOrRef coded token with the value 0x0321 has the following bit pattern: 0 3 2 1 0000 0011 0010 0001 The first two bits specify the table - TypeRef; the other bits specify the RID. Because we've used the first two bits, we've got to shift everything along two bits: 000000 1100 1000 This gives us a RID of 0xc8. 
If any one of the TypeDef, TypeRef or TypeSpec tables had more than 16383 rows (2^14 - 1), then 4 bytes would need to be used to represent all TypeDefOrRef coded tokens throughout the metadata tables.
Lists
The third representation we need to consider is 1-to-many references; each TypeDef refers to a list of FieldDef and MethodDef belonging to that type. If we were to specify every FieldDef and MethodDef individually then each TypeDef would be very large and of variable size, which isn't ideal. There is a way of specifying a list of references without explicitly specifying every item; if we order the MethodDef and FieldDef tables by the owning type, then the field list and method list in a TypeDef only have to be a single RID pointing at the first FieldDef or MethodDef belonging to that type; the end of the list can be inferred from the field list and method list RIDs of the next row in the TypeDef table.
Going back to the TypeDef
If we have a look back at the definition of a TypeDef, we end up with the following representation for each row: Flags - always 4 bytes. Name - a #Strings heap offset. Namespace - a #Strings heap offset. Extends - a TypeDefOrRef coded token. FieldList - a single RID to the FieldDef table. MethodList - a single RID to the MethodDef table. So, depending on the number of entries in the heaps and tables within the assembly, the rows in the TypeDef table can be as small as 14 bytes, or as large as 24 bytes. Now that we've had a look at how information is encoded within the metadata tables, in the next post we can see how they are arranged on disk.
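To make the bit arithmetic concrete, here is a minimal C# sketch (my own illustration, not code from the post) that decodes a 2-byte TypeDefOrRef coded token using the 2-bit tag values listed above; running it on the 0x0321 example reproduces the TypeRef table and RID 0xc8 worked out in the text.

    // Decoding a 2-byte TypeDefOrRef coded token: the low 2 bits select the
    // table (0 = TypeDef, 1 = TypeRef, 2 = TypeSpec), the remaining bits are the RID.
    using System;

    static class CodedTokenDemo
    {
        static readonly string[] TypeDefOrRefTables = { "TypeDef", "TypeRef", "TypeSpec" };

        static void Decode(ushort codedToken)
        {
            int tag = codedToken & 0x3;   // low 2 bits: which table
            int rid = codedToken >> 2;    // everything above the tag bits: the RID
            Console.WriteLine("0x{0:X4} -> table {1}, RID 0x{2:X}",
                codedToken, TypeDefOrRefTables[tag], rid);
        }

        static void Main()
        {
            Decode(0x0321); // prints: 0x0321 -> table TypeRef, RID 0xC8
        }
    }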

    Read the article

  • Process Centric Banking: Loan Origination Solution

    - by Manish Palaparthy
    There is an old proverb that goes, "The difference between theory and practice is greater in practice than in theory". So, we keep doing numerous "Proofs of Concept" with our own products on various business cases to analyze them deeply, understand them and explain them to our customers. We then present our learnings as they happened. Awareness of how each PoC was done should help readers trust the results coming out of these PoCs. I present one such PoC where we invested a lot of time & effort.
Process Centric Banking: Loan Origination Solution
Loan Origination is the process by which a borrower applies for a new loan and the lender processes that application. Loan origination includes the series of steps taken by the bank from the point the customer shows interest in a loan product all the way to disbursal of funds. The Loan Origination process is relevant for many kinds of lenders in financial services: banks, credit unions, NBFCs (Non Banking Financial Companies) and so on. For simplicity's sake, I will use "Bank" as the lending institution in the rest of my article. Loan Origination is one of the core processes for banks, as it is the process by which the bank creates the assets from which the institution earns most of its profits. A well-tuned loan origination process can affect the bank in many positive ways. Banks have always shown great interest in automating the loan origination process for the above reason. However, due to constant changes in the customer environment, market dynamics, prevailing economic conditions, cost pressures and the regulatory environment, they run into a lot of challenges. Let me categorize some of these challenges for you.
Customer Environment
- Multiple channels: The customer can use any of the available channels (Internet Banking, Email, Fax, Branch, Phone Banking, ATM, Broker, Mobile, Snail Mail) to perform all or some of the activities related to her.
- Visibility into the origination process: Customers expect immediate updates on the status of loan processing and alert messages.
- Reduced turn around time: Customers expect loans to be processed with the least turn around time.
- Reduced loan processing fees: Partly due to market dynamics, the customer expects the loan processing fee to be negligible.
Market Dynamics
- Competitive environment: The competition keeps creating many variants of loan products to attract customers; the bank needs to create similar product variants with better offers to attract customers or keep existing ones.
- Ability to migrate loans from one vendor to another: It has become really easy for retail customers to move from one bank to the other given the low fee of loan processing and highly attractive offers. How does the bank protect its customer base while actively engaging with potential customers banking with competitor banks?
- Flexibility to react to market developments: Market developments greatly influence loan processing, underwriting, asset valuation and risk mitigation rules. Can the bank modify rules and policies? The idea is not just to react to market developments but to pro-actively manage new developments.
Economic Conditions
- Constant change in various rates and their implications on the rates and rules applied when on-boarding a loan: How quickly can the bank apply changes to rates offered to customers when the central bank changes various rates?
- Requirements of audit by the central banker: Tough economic conditions have demanded much more stringent audit rules and tests. The bank needs to produce ready reports (historic & operational) for audit compliance.
- Risk mitigation: While risk mitigation has always been a key concern for the bank, this is the area where the bank's underwriters & risk analysts spend the maximum time when processing a loan application. In order to reduce TAT the bank cannot compromise on its risk mitigation strategies.
Cost Pressures
- Reduce the cost of processing per application: To deliver a reduced loan processing fee to the customer, the bank needs to keep its cost of processing per loan application low.
- Meet customer TAT expectations while reducing the queues and the systems being used to process the loan application: The loan application could potentially be spending a lot of time waiting in the queue for further processing. Different volumes & patterns of applications demand different queuing algorithms. The bank needs to have real-time visibility into these queues and have the flexibility to change queuing algorithms at runtime.
- Increase the use of electronic communication and reduce branch channel usage: Less automation not only leads to increased turn around time, it also adds cost to reaching out to customers.
The objective of our PoC was to implement a Loan Origination Solution whose ownership lies with the bank and which effectively meets the challenges listed above. We built a simple storyboard for the solution. We then went about implementing our storyboard using Oracle BPM Suite and WebCenter Content: Imaging. The web UI has been built on ADF technologies, while the integration with core services has been implemented using the underlying SOA infrastructure. The BPM process model is quite exhaustive and can meet all the challenges listed above to a reasonable degree. A bank intending to implement an end-to-end Loan Origination Solution has multiple options at its disposal. It can:
- Develop a custom Loan Origination Application from scratch: Gives maximum opportunity to build what you want, but it is inflexible to upgrade and maintain, with a higher TCO in the long term.
- Buy a packaged application & customize it: Customizing a generic loan application can be tedious and prove as difficult as the above.
- Build it using many disparate & un-integrated tools: Initially seems easier than developing from scratch, but without integrated tool sets this is not a viable approach either.
- Or adopt a solution based on a framework: Independent services and business process modeling provide a decoupled architecture that is flexible.
We built this framework end-to-end, with the core process of loan origination & several sub-processes such as analyse and define customer needs, customer credit verification, identity check, legal review, new customer registration & risk assessment.

    Read the article

  • Subterranean IL: Generics and array covariance

    - by Simon Cooper
    Arrays in .NET are curious beasts. They are the only built-in collection types in the CLR, and SZ-arrays (single dimension, zero-indexed) have their own commands and IL syntax. One of their stranger properties is they have a kind of built-in covariance long before generic variance was added in .NET 4. However, this causes a subtle but important problem with generics. First of all, we need to briefly recap on array covariance. SZ-array covariance To demonstrate, I'll tweak the classes I introduced in my previous posts: public class IncrementableClass { public int Value; public virtual void Increment(int incrementBy) { Value += incrementBy; } } public class IncrementableClassx2 : IncrementableClass { public override void Increment(int incrementBy) { base.Increment(incrementBy); base.Increment(incrementBy); } } In the CLR, SZ-arrays of reference types are implicitly convertible to arrays of the element's supertypes, all the way up to object (note that this does not apply to value types). That is, an instance of IncrementableClassx2[] can be used wherever a IncrementableClass[] or object[] is required. When an SZ-array could be used in this fashion, a run-time type check is performed when you try to insert an object into the array to make sure you're not trying to insert an instance of IncrementableClass into an IncrementableClassx2[]. This check means that the following code will compile fine but will fail at run-time: IncrementableClass[] array = new IncrementableClassx2[1]; array[0] = new IncrementableClass(); // throws ArrayTypeMismatchException These checks are enforced by the various stelem* and ldelem* il instructions in such a way as to ensure you can't insert a IncrementableClass into a IncrementableClassx2[]. For the rest of this post, however, I'm going to concentrate on the ldelema instruction. ldelema This instruction pops the array index (int32) and array reference (O) off the stack, and pushes a pointer (&) to the corresponding array element. However, unlike the ldelem instruction, the instruction's type argument must match the run-time array type exactly. This is because, once you've got a managed pointer, you can use that pointer to both load and store values in that array element using the ldind* and stind* (load/store indirect) instructions. As the same pointer can be used for both input and output to the array, the type argument to ldelema must be invariant. At the time, this was a perfectly reasonable restriction, and maintained array type-safety within managed code. However, along came generics, and with it the constrained callvirt instruction. So, what happens when we combine array covariance and constrained callvirt? .method public static void CallIncrementArrayValue() { // IncrementableClassx2[] arr = new IncrementableClassx2[1] ldc.i4.1 newarr IncrementableClassx2 // arr[0] = new IncrementableClassx2(); dup newobj instance void IncrementableClassx2::.ctor() ldc.i4.0 stelem.ref // IncrementArrayValue<IncrementableClass>(arr, 0) // here, we're treating an IncrementableClassx2[] as IncrementableClass[] dup ldc.i4.0 call void IncrementArrayValue<class IncrementableClass>(!!0[],int32) // ... ret } .method public static void IncrementArrayValue<(IncrementableClass) T>( !!T[] arr, int32 index) { // arr[index].Increment(1) ldarg.0 ldarg.1 ldelema !!T ldc.i4.1 constrained. 
!!T callvirt instance void IIncrementable::Increment(int32) ret } And the result: Unhandled Exception: System.ArrayTypeMismatchException: Attempted to access an element as a type incompatible with the array. at IncrementArrayValue[T](T[] arr, Int32 index) at CallIncrementArrayValue() Hmm. We're instantiating the generic method as IncrementArrayValue<IncrementableClass>, but passing in an IncrementableClassx2[], hence the ldelema instruction is failing as it's expecting an IncrementableClass[]. On features and feature conflicts What we've got here is a conflict between existing behaviour (ldelema ensuring type safety on covariant arrays) and new behaviour (managed pointers to object references used for every constrained callvirt on generic type instances). And, although this is an edge case, there is no general workaround. The generic method could be hidden behind several layers of assemblies, wrappers and interfaces that make it a requirement to use array covariance when calling the generic method. Furthermore, this will only fail at runtime, whereas compile-time safety is what generics were designed for! The solution is the readonly. prefix instruction. This modifies the ldelema instruction to ignore the exact type check for arrays of reference types, and so it lets us take the address of array elements using a covariant type to the actual run-time type of the array: .method public static void IncrementArrayValue<(IncrementableClass) T>( !!T[] arr, int32 index) { // arr[index].Increment(1) ldarg.0 ldarg.1 readonly. ldelema !!T ldc.i4.1 constrained. !!T callvirt instance void IIncrementable::Increment(int32) ret } But what about type safety? In return for ignoring the type check, the resulting controlled mutability pointer can only be used in the following situations: As the object parameter to ldfld, ldflda, stfld, call and constrained callvirt instructions As the pointer parameter to ldobj or ldind* As the source parameter to cpobj In other words, the only operations allowed are those that read from the pointer; stind* and similar that alter the pointer itself are banned. This ensures that the array element we're pointing to won't be changed to anything untoward, and so type safety within the array is maintained. This is a typical example of the maxim that whenever you add a feature to a program, you have to consider how that feature interacts with every single one of the existing features. Although an edge case, the readonly. prefix instruction ensures that generics and array covariance work together and that compile-time type safety is maintained. Tune in next time for a look at the .ctor generic type constraint, and what it means.
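For comparison, here is a small self-contained C# sketch (my own illustration, not code from the post) of the same pattern at the source level. When the receiver is an array element whose type is a generic type parameter, the compiler takes the element's address for the constrained call using the readonly.-prefixed ldelema described above, so the covariant call below runs without an ArrayTypeMismatchException; the interface and class names here are just for the demo.

    using System;

    interface IIncrementable
    {
        void Increment(int incrementBy);
    }

    class IncrementableClass : IIncrementable
    {
        public int Value;
        public virtual void Increment(int incrementBy) { Value += incrementBy; }
    }

    class IncrementableClassx2 : IncrementableClass
    {
        public override void Increment(int incrementBy)
        {
            base.Increment(incrementBy);
            base.Increment(incrementBy);
        }
    }

    static class CovarianceDemo
    {
        // arr[index] has type T, so the call goes through the constrained-call
        // pattern on the element address rather than a plain ldelem/callvirt.
        static void IncrementArrayValue<T>(T[] arr, int index) where T : IIncrementable
        {
            arr[index].Increment(1);
        }

        static void Main()
        {
            // Array covariance: an IncrementableClassx2[] used as an IncrementableClass[].
            IncrementableClass[] arr = new IncrementableClassx2[] { new IncrementableClassx2() };
            IncrementArrayValue(arr, 0);     // no ArrayTypeMismatchException
            Console.WriteLine(arr[0].Value); // prints 2 (the x2 override increments twice)
        }
    }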

    Read the article

  • Working with packed dates in SSIS

    - by Jim Giercyk
    One of the challenges recently thrown my way was to read an EBCDIC flat file, decode packed dates, and insert the dates into a SQL table.  For those unfamiliar with packed data, it is a way to store data at the nibble level (half a byte), and was often used by mainframe programmers to conserve storage space.  In the case of my input file, the dates were 2 bytes long and  represented the number of days that have past since 01/01/1950.  My first thought was, in the words of Scooby, Hmmmmph?  But, I love a good challenge, so I dove in. Reading in the flat file was rather simple.  The only difference between reading an EBCDIC and an ASCII file is the Code Page option in the connection manager.  In my case, I needed to use Code Page 1140 for EBCDIC (I could have also used Code Page 37).       Once the code page is set correctly, SSIS can understand what it is reading and it will convert the output to the default code page, 1252.  However, packed data is either unreadable or produces non-alphabetic characters, as we can see in the preview window.   Column 1 is actually the packed date, columns 0 and 2 are the values in the rest of the file.  We are only interested in Column 1, which is a 2 byte field representing a packed date.  We know that 2 bytes of packed data can be stored in 1 byte of character data, so we are working with 4 packed digits in 2 character bytes.  If you are confused, stay tuned….this will make sense in a minute.   Right-click on your Flat File Source shape and select “Show Advanced Editor”. Here is where the magic begins. By changing the properties of the output columns, we can access the packed digits from each byte. By default, the Output Column data type is DT_STR. Since we want to look at the bytes individually and not the entire string, change the data type to DT_BYTES. Next, and most important, set UseBinaryFormat to TRUE. This will write the HEX VALUES of the output string instead of writing the character values.  Now we are getting somewhere! Next, you will need to use a Data Conversion shape in your Data Flow to transform the 2 position byte stream to a 4 position Unicode string containing the packed data.  You need the string to be 4 bytes long because it will contain the 4 packed digits.  Here is what that should look like in the Data Conversion shape: Direct the output of your data flow to a test table or file to see the results.  In my case, I created a test table.  The results looked like this:     Hold on a second!  That doesn't look like a date at all.  No, of course not.  It is a hex number which represents the days which have passed between 01/01/1950 and the date.  We have to convert the Hex value to a decimal value, and use the DATEADD function to get a date value.  
Luckily, I have created a function to convert Hex to Decimal:   -- ============================================= -- Author:        Jim Giercyk -- Create date: March, 2012 -- Description:    Converts a Hex string to a decimal value -- ============================================= CREATE FUNCTION [dbo].[ftn_HexToDec] (     @hexValue NVARCHAR(6) ) RETURNS DECIMAL AS BEGIN     -- Declare the return variable here DECLARE @decValue DECIMAL IF @hexValue LIKE '0x%' SET @hexValue = SUBSTRING(@hexValue,3,4) DECLARE @decTab TABLE ( decPos1 VARCHAR(2), decPos2 VARCHAR(2), decPos3 VARCHAR(2), decPos4 VARCHAR(2) ) DECLARE @pos1 VARCHAR(1) = SUBSTRING(@hexValue,1,1) DECLARE @pos2 VARCHAR(1) = SUBSTRING(@hexValue,2,1) DECLARE @pos3 VARCHAR(1) = SUBSTRING(@hexValue,3,1) DECLARE @pos4 VARCHAR(1) = SUBSTRING(@hexValue,4,1) INSERT @decTab VALUES (CASE               WHEN @pos1 = 'A' THEN '10'                 WHEN @pos1 = 'B' THEN '11'               WHEN @pos1 = 'C' THEN '12'               WHEN @pos1 = 'D' THEN '13'               WHEN @pos1 = 'E' THEN '14'               WHEN @pos1 = 'F' THEN '15'               ELSE @pos1              END, CASE               WHEN @pos2 = 'A' THEN '10'                 WHEN @pos2 = 'B' THEN '11'               WHEN @pos2 = 'C' THEN '12'               WHEN @pos2 = 'D' THEN '13'               WHEN @pos2 = 'E' THEN '14'               WHEN @pos2 = 'F' THEN '15'               ELSE @pos2              END, CASE               WHEN @pos3 = 'A' THEN '10'                 WHEN @pos3 = 'B' THEN '11'               WHEN @pos3 = 'C' THEN '12'               WHEN @pos3 = 'D' THEN '13'               WHEN @pos3 = 'E' THEN '14'               WHEN @pos3 = 'F' THEN '15'               ELSE @pos3              END, CASE               WHEN @pos4 = 'A' THEN '10'                 WHEN @pos4 = 'B' THEN '11'               WHEN @pos4 = 'C' THEN '12'               WHEN @pos4 = 'D' THEN '13'               WHEN @pos4 = 'E' THEN '14'               WHEN @pos4 = 'F' THEN '15'               ELSE @pos4              END) SET @decValue = (CONVERT(INT,(SELECT decPos4 FROM @decTab)))         +                 (CONVERT(INT,(SELECT decPos3 FROM @decTab))*16)      +                 (CONVERT(INT,(SELECT decPos2 FROM @decTab))*(16*16)) +                 (CONVERT(INT,(SELECT decPos1 FROM @decTab))*(16*16*16))     RETURN @decValue END GO     Making use of the function, I found the decimal conversion, added that number of days to 01/01/1950 and FINALLY arrived at my “unpacked relative date”.  Here is the query I used to retrieve the formatted date, and the result set which was returned: SELECT [packedDate] AS 'Hex Value',        dbo.ftn_HexToDec([packedDate]) AS 'Decimal Value',        CONVERT(DATE,DATEADD(day,dbo.ftn_HexToDec([packedDate]),'01/01/1950'),101) AS 'Relative String Date'   FROM [dbo].[Output Table]         This technique can be used any time you need to retrieve the hex value of a character string in SSIS.  The date example may be a bit difficult to understand at first, but with SSIS becoming the preferred tool for enterprise level integration for many companies, there is no doubt that developers will encounter these types of requirements with regularity in the future. Please feel free to contact me if you have any questions.
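If you would rather do the conversion in the data flow itself rather than in T-SQL, the same hex-to-date arithmetic only takes a few lines of C#, for example inside an SSIS Script Component. The snippet below is just a sketch of that logic (my own example, not part of the original package), using the same 01/01/1950 base date; the sample value "0321" is hypothetical.

    // Convert the 4-character hex string produced by the Data Conversion step
    // into a date relative to 01/01/1950.
    using System;
    using System.Globalization;

    static class PackedDateDemo
    {
        static DateTime UnpackRelativeDate(string hex)
        {
            // Accept values with or without a leading "0x".
            if (hex.StartsWith("0x", StringComparison.OrdinalIgnoreCase))
                hex = hex.Substring(2);

            int days = int.Parse(hex, NumberStyles.HexNumber); // hex -> decimal
            return new DateTime(1950, 1, 1).AddDays(days);     // days since 01/01/1950
        }

        static void Main()
        {
            // 0x0321 = 801 days after 01/01/1950 -> 03/12/1952
            Console.WriteLine(UnpackRelativeDate("0321").ToString("MM/dd/yyyy"));
        }
    }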

    Read the article

  • Caveat utilitor - Can I run two versions of Microsoft Project side-by-side?

    - by Martin Hinshelwood
    A number of out customers have asked if there are any problems in installing and running multiple versions of Microsoft Project on a single client. Although this is a case of Caveat utilitor (Let the user beware), as long as the user understands and accepts the issues that can occur then they can do this. Although Microsoft provide the ability to leave old versions of Office products (except Outlook) on your client when you are installing a new version of the product they certainly do not endorse doing so. Figure: For Project you can choose to keep the old stuff   That being the case I would have preferred that they put a “(NOT RECOMMENDED)” after the options to impart that knowledge to the rest of us, but they did not. The default and recommended behaviour is for the newer version installer to remove the older versions. Of course this does not apply in the revers. There are no forward compatibility packs for Office. There are a number of negative behaviours (or bugs) that can occur in this configuration: There is only one MS Project In Windows a file extension can only be associated with a single program.  In this case, MPP files can be associated with only one version of winproj.exe.  The executables are in different folders so if a user double-clicks a Project file on the desktop, file explorer, or Outlook email, Windows will launch the winproj.exe associated with MPP and then load the MPP file.  There are problems associated with this situation and in some cases workarounds. The user double-clicks on a Project 2010 file, Project 2007 launches but is unable to open the file because it is a newer version.  The workaround is for the user to launch Project 2010 from the Start menu then open the file.  If the file is attached to an email they will need to first drag the file to the desktop. All your linked MS Project files need to be of the same version There are a number of problems that occur when people use on Microsoft’s Object Linking and Embedding (OLE) technology.  The three common uses of OLE are: for inserted projects where a Master project contains sub-projects and each sub-project resides in its own MPP file shared resource pools where multiple MPP files share a common resource pool kept in a single MPP file cross-project links where a task or milestone in one MPP file has a  predecessor/successor relationship with a task or milestone in a different MPP file What I’ve seen happen before is that if you are running in a version of Project that is not associated with the MPP extension and then try and activate an OLE link then Project tries to launch the other version of Project.  Things start getting very confused since different MPP files are being controlled by different versions of Project running at the same time.  I haven’t tried this in awhile so I can’t give you exact symptoms but I suspect that if Project 2010 is involved the symptoms will be different then in a Project 2003/2007 scenario.  I’ve noticed that Project 2010 gives different error messages for the exact same problem when it occurs in Project 2003 or 2007.  -Anonymous The recommendation would be either not to use this feature if you have to have multiple versions of Project installed or to use only a single version of Project. You may get unexpected negative behaviours if you are using shared resource pools or resource pools even when you are not running multiple versions as I have found that they can get broken very easily. 
If you need these things then it is probably best to use Project Server, as it was created to solve many of these specific issues. Note: I would not even allow multiple people to access a network copy of a Project file because of the way Windows locks files in write mode. This can cause write-locks that get so bad a server restart is required: I’ve seen users’ files get write-locked to the point where the only resolution is to reboot the server.
Changing the default version to run for an extension
So what if you want to change the default association from Project 2007 to Project 2010? Figure: “Control Panel | Folder Options | Change the file associated with a file extension”. Windows normally only lists the last version installed for a particular extension. You can select a specific version by selecting the program you want to change, clicking “Change program… | Browse…” and then selecting the .exe you want to use on the file system. Figure: You will need to select the exact version of “winproj.exe” that you want to run.
Conclusion
Although it is possible to run multiple versions of Project on one system, in the main it does not really make sense.

    Read the article

  • JavaOne Latin America Opening Keynotes

    - by Tori Wieldt
    Originally published on blogs.oracle.com/javaone. It was a great first day at JavaOne Brazil, which included the Java Strategy and Java Technical keynotes. Henrik Stahl, Senior Director, Product Management for Java, opened the keynotes by saying that this is the third year for JavaOne Latin America. He explained, "You know what they say, the first time doesn't count, the second time is a habit and the third time it's a tradition!" He mentioned that he was thrilled that this is the largest JavaOne in Brazil to date, and he wants next year to be larger. He said that Oracle knows Latin America is an important hub for development. "We continually come back to Latin America because of the dedication the community has with driving the continued innovation for Java," he said. Stahl explained that Oracle and the Java community must continue to innovate and Make the Future Java together. The success of Java depends on three important factors: technological innovation, Oracle as a strong steward of Java, and community participation. "The Latin American Java Community (especially in Brazil) is a shining example of how to be positive contributor to Java," Stahl said. Next, George Saab, VP software dev, Java Platform Group at Oracle, discussed some of the recent and upcoming changes to Java. "In addition to the incremental improvements to Java 7, we have also increased the set of platforms supported by Oracle from Linux, Windows, and Solaris to now also include Mac OS X and Linux/ARM for ARM-based PCs such as the Raspberry Pi and emerging ARM based microservers." Saab announced that EA builds for Linux ARM Hard Float ABI will be available by the end of the year. Staffan Friberg, Product Manager, Java Platform Group, provided an overview of some of the language features coming in Java 8, including Lambda, the removal of PermGen, improved date and time APIs and improved security. Java 8 development is moving along. He reminded the audience that they can go to OpenJDK to see this development being done in real-time, and that there are weekly early access builds of OracleJDK 8 that developers can download and try today. Judson Althoff, Senior Vice President, Worldwide Alliances and Channels and Embedded Sales, was invited to the stage, and the audience was told that "even though he is wearing a suit, he is still pretty technical." Althoff started off with a bang: "The Internet of Things is on a collision course with big data and this is a huge opportunity for developers." For example, Althoff said, today cars are more a data device than a mechanical device. A car embedded with sensors for fuel efficiency, temperature, tire pressure, etc. can generate a petabyte of data A DAY. There are similar examples in healthcare (patient monitoring and privacy requirements create a complex data problem) and transportation management (sending a package around the world with sensors for humidity, temperature and light). Althoff then brought on stage representatives from three companies that are successful with Java today. First was Axel Hansmann, VP Strategy & Marketing Communications, Cinterion. Mr. Hansmann explained that Cinterion, a market leader in Latin America, enables M2M services with Java. At JavaOne San Francisco, Cinterion launched the EHS5, the smallest 3G solderable module, with Java installed on it.
This provides Original Equipment Manufacturers (OEMs) with a cost effective, flexible platform for bringing advanced M2M technology to market. Next, Steve Nelson, Director of Marketing for the Americas at Freescale, explained that Freescale is #1 in Embedded Processors in Wired and Wireless Communications, and #1 in Automotive Semiconductors in the Americas. He said that Java provides a mature, proven platform that is uniquely suited to meet the requirements of almost any type of embedded device. He encouraged university students to get involved in the Freescale Cup, a global competition where student teams build, program, and race a model car around a track for speed. Roberto Franco, SBTVD Forum President, SBTVD, talked about Ginga, a Java-based standard for television in Brazil. He said there are 4 million Ginga TV sets in Brazil, and they expect over 20 million TV sets to be sold by the end of 2014. Ginga is also being adopted in 11 other countries in Latin America. Ginga brings interactive services not only to the TV set, but also to other devices such as tablets, PCs or smartphones, as the main or second screen. "Interactive services is already a reality," he said, "but in a near future, we foresee interactivity enhanced TV content, convergence with OTT services and a big participation from the audience, all integrated on TV, tablets, smartphones and second screen devices." Before he left the stage, Nandini Ramani thanked Judson for being part of the Java community and invited him to the next Geek Bike Ride in Brazil. She presented him with an official geek bike ride jersey. For the Technical Keynote, a "blue screen of death" appeared. With mock concern, Stephen Chin asked the rest of the presenters if they could go on without slides. What followed was an interesting collection of demos, including JavaFX on a tablet, a look at Project Easel in NetBeans, and even Simon Ritter controlling legos with his brainwaves! Stay tuned for more dispatches.

    Read the article

  • Updating a database connection password using a script

    - by Tim Dexter
    An interesting customer requirement that I thought was worthy of sharing today. Thanks to James for the requirement, Bryan for the proposed solution, and me for testing the solution and proving it works :0) A customer's implementation of Sarbanes-Oxley requires them to change all database account passwords every 90 days. Today this is scripted, leveraging shell scripts, for most of their environments. But how can they manage the BI Publisher connections? Now, the customer is running 11g and therefore using WebLogic on the middle tier, which is the first clue to Bryan's proposed solution. To paraphrase and embellish Bryan's solution a little: why not use a JNDI connection from BIP to the database, then employ the WebLogic scripting engine to make updates to the JNDI data source as needed? BIP is completely uninvolved, and with a little 'timing' users will be completely unaware of the password updates, i.e. change the password when reports are not being executed. Perfect! James immediately tracked down the WLST script that could be used here, http://middlewaremagic.com/weblogic/?p=4261 (thanks Ravish). Now it was just a case of testing the theory.

    Some steps:
    1. Create the JNDI connection in WLS.
    2. Create the JNDI connection in BI Publisher pointing to the WLS connection.
    3. Build new data models using, or re-point data sources to use, the JNDI connection.
    4. Create the WLST script to update the WLS JNDI password as needed.
    5. Test!

    Some details. Creating the JNDI connection in WebLogic is pretty straightforward:
    - Log into the console and look for Data Sources under the Services section of the home page and click it.
    - Click New >> Generic Datasource.
    - Give the connection a name. For the JNDI name, prefix it with 'jdbc/', so I have 'jdbc/localdb' - this name is important, you'll need it on the BIP side.
    - Select your db type - this will influence the drivers and information needed on the next page. Being a company man, I'm using an Oracle db. Click Next.
    - Select the driver of choice; there's lots, I know, you can read about them. I just chose 'Oracle's Driver (Thin) for Instance connections; Versions 9.0.1 and later'. Click Next >> Next.
    - Fill out the db name (SID), server, port, username to connect and password >> Next.
    - Test the config to ensure you can connect >> Next.
    - Now you need to deploy the connection to your BI server; select it and click Next. You're done with the JNDI config.

    Creating the JNDI connection on the Publisher side is covered here. Just remember the connection name you created in WLS, e.g. 'jdbc/localdb'. Building the data models? Not gonna tell you how to do this, go read the user guide :0) Suffice to say, it works.

    The WLST script requires a little reading around the subject to understand the scripting engine and how to execute scripts. Nicely covered here. However, a bit of googlin' and I found an even easier way of running the script:

    ${ServerHome}/common/bin/wlst.sh updatepwd.py

    where updatepwd.py is my script file; it can be in another directory. As part of the wlst.sh script your environment is set up for you, so it's very simple to execute. The nitty gritty: you need to take Ravish's script above and create a file with a .py extension. It's going to need some modification, as he explains on the web page, to make it work in your environment. I played around with it for a while but kept running into errors. The script as-is tries to loop through all of your connections and modify the users and passwords for each. Not quite what we are looking for. Remember, our requirement is to just update the password for a given connection.
    I also found another issue with the script. WLS 10.x does not allow updates to passwords using clear text, i.e. un-encrypted values, while the server is in production mode. It's a bit much to set it back to development mode, bounce it, change the passwords, bounce it, change back to production mode and bounce again. After lots of messing about I finally came up with the following:

    #############################################################################
    #
    # Update password for JNDI connections
    #
    #############################################################################
    print("*** Trying to Connect.... *****")
    connect('weblogic','welcome1','t3://localhost:7001')
    print("*** Connected *****")
    edit()
    startEdit()
    print ("*** Encrypt the password ***")
    en = encrypt('hr')
    print "Encrypted pwd: ", en
    print ("*** Changing pwd for LocalDB ***")
    dsName = 'LocalDB'
    print 'Changing Password for DataSource ', dsName
    cd('/JDBCSystemResources/'+dsName+'/JDBCResource/'+dsName+'/JDBCDriverParams/'+dsName)
    set('PasswordEncrypted',en)
    save()
    activate()

    It's pretty simple, and you can expand on it to loop through the data sources and change each as needed. I have hardcoded the password into the file, but you can pass it in as a parameter using the properties file method. I'm not going to get into the detail of that here, but it's covered with an example here. A couple of points to note:

    1. The change to the password requires a server bounce to get the changes picked up. You can add that to the shell script you will use to call the script above.
    2. The script above needs to be run from the MW_HOME\user_projects\domains\bifoundation_domain directory to get the encryption libraries set correctly.

    My command to run the whole script was:

    d:\oracle\bi_mw\wlserver_10.3\common\bin\wlst.cmd updatepwd.py

    where wlst.cmd is the scripting command line and updatepwd.py was my update password script above. I have not quite spoon-fed everything you need to make it a robust script, but at least you know you can do it and you can work out the rest, I think :0)
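    For anyone who wants to take the looping / properties-file route, here is a rough sketch of what that variant could look like. This is an illustration only, not part of the original post: it assumes WLST (so Jython syntax), and the properties file name, property keys and data source names are made-up examples you would swap for your own.

    #############################################################################
    # Sketch only: loop over several data sources and read values from a
    # properties file instead of hardcoding them. Assumes WLST (Jython);
    # 'updatepwd.properties', the property keys and the data source names
    # are illustrative examples, not taken from the original post.
    #
    # updatepwd.properties might contain:
    #   admin.user=weblogic
    #   admin.password=welcome1
    #   admin.url=t3://localhost:7001
    #   db.password=hr
    #   datasources=LocalDB,HRDB
    #############################################################################
    from java.util import Properties
    from java.io import FileInputStream

    # Load admin credentials, the new db password and the data source list
    props = Properties()
    props.load(FileInputStream('updatepwd.properties'))

    connect(props.getProperty('admin.user'),
            props.getProperty('admin.password'),
            props.getProperty('admin.url'))
    edit()
    startEdit()

    # Encrypt the new database password once, then apply it to each data source
    en = encrypt(props.getProperty('db.password'))

    for dsName in str(props.getProperty('datasources')).split(','):
        dsName = dsName.strip()
        print 'Changing password for DataSource ', dsName
        cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName +
           '/JDBCDriverParams/' + dsName)
        set('PasswordEncrypted', en)

    save()
    activate()
    disconnect()

    You would still bounce the server afterwards, as per point 1 above, and since the properties file now holds credentials it should be locked down with appropriate file permissions.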

    Read the article

  • CD/DVD drive not mounted when inserted with Disc of any kind

    - by Cisco Sán
    I just noticed that if I insert a CD or a DVD of any kind, the drive will start spinning but it will not show the mounted disc. Before, it used to ask me what to do with the inserted media. Now it doesn't even do that. I ran this in the terminal: eject -n and it displays: "eject: device is `/dev/sr0'". What can I do to get the functionality back on my drive? I also ran this command: sudo mount -o ro,unhide,uid=1000 /dev/cdrom /mnt/cdrom but in return I get: "mount: mount point /mnt/cdrom does not exist". Running Ubuntu 11.10.

    HERE IS THE HISTORY UNTIL NOW

    Thanks Waltinator: I ran 'dmesg' but don't know what I'm looking for. I'm a newbie at this. The same thing with the 'ls -rlt /var/log' command. Should I create the directory for the mount point? At this point I really don't know what to do. – Cisco Sán 7 hours ago

    Here are 3 lines from my dmesg after I successfully inserted a CD: [ 4804.416018] wlan0: no IPv6 routers present [ 8214.125450] ISO 9660 Extensions: Microsoft Joliet Level 3 [ 8214.136556] ISO 9660 Extensions: RRIP_1991A The first line is a previous event, my wireless going online. The next 2 lines are a good result. The number in square brackets is "seconds since boot"; the rest of the line is usually helpful. And no, you should NOT create the mount point. Let's try to get the automatic mounting to work. – waltinator 7 hours ago

    OK, these are the last 3 lines of my 'dmesg': [ 18.130819] init: plymouth-stop pre-start process (1396) terminated with status 1 [ 28.780011] wlan0: no IPv6 routers present [ 505.632119] CE: hpet increased min_delta_ns to 20113 nsec – Cisco Sán 6 hours ago

    It looks like your CD/DVD drive is not connected to the data bus, and not causing an interrupt when you insert a platter. – waltinator 6 hours ago

    Try dmesg | grep -A8 CD-ROM which should show you what the system thought was available when it came up. – waltinator 6 hours ago

    Here is my printout: [0.774351] scsi 0:0:0:0: CD-ROM HL-DT-ST DVD+-RW GSA-T40N A100 PQ: 0 ANSI: 5 [0.778117] sr0: scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray [0.778122] cdrom: Uniform CD-ROM driver Revision: 3.20 [0.778282] sr 0:0:0:0: Attached scsi CD-ROM sr0 [0.778340] sr 0:0:0:0: Attached scsi generic sg0 type 5 [0.780416] Freeing unused kernel memory: 984k freed [0.780732] Write protecting the kernel read-only data: 10240k [0.780986] Freeing unused kernel memory: 20k freed [0.786331] Freeing unused kernel memory: 1400k freed [0.804912] udevd[90]: starting version 173 [0.874178] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [0.874208] r8169 0000:02:00.0: PCI INT A - GSI 16 (level, low) - IRQ 16

    OK, your system sees the drive. Can you open and close the tray with eject and eject -t? Run udevadm monitor while you insert a CD (type ^C when done) and see if you get "change" and "add" messages. – waltinator 6 hours ago

    OK, "eject" works perfectly, "eject -t" does nothing. This is the message from "udevadm monitor": KERNEL[13771.009267] change /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0/block/sr0 (block) UDEV [13773.878887] change /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0 /block/sr0 (block) – Cisco Sán 6 hours ago

    sudo hwinfo --cdrom (the hwinfo package is installable through Software Center) describes my CD-ROM, try it. – waltinator 4 hours ago

    My readout from "sudo hwinfo --cdrom" is the following: hal.1: read hal dataprocess 2753: arguments to dbus_move_error() were incorrect, assertion "(dest) == NULL || !dbus_error_is_set ((dest))" failed in file ../../dbus/dbus-errors.c line 280. This is normally a bug in some application using the D-Bus library. libhal.c 3483 : Error unsubscribing to signals, error=The name org.freedesktop.Hal was not provided by any .service files 22: SCSI 00.0: 10602 CD-ROM (DVD) [Created at block.247] Unique ID: KD9E.JgkxTS4hgl2 Parent ID: 3p2J.gdUMCD83e+E SysFS ID: /class/block/sr0 SysFS BusID: 0:0:0:0 SysFS Device Link: /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0 Hardware Class: cdrom Model: "HL-DT-ST DVD+-RW GSA-T40N" Vendor: "HL-DT-ST" Device: "DVD+-RW GSA-T40N" Revision: "A100" Driver: "ata_piix", "sr" Driver Modules: "ata_piix" Device File: /dev/sr0 (/dev/sg0) Device Files: /dev/sr0, /dev/scd0, /dev/disk/by-id/ata-HL-DT-ST_DVD+_-RW_GSA-T40N_K048BJ74257, /dev/disk/by-path/pci-0000:00:1f.1-scsi-0:0:0:0, /dev/cdrom, /dev/cdrw, /dev/dvd, /dev/dvdrw Device Number: block 11:0 (char 21:0) Features: DVD Config Status: cfg=new, avail=yes, need=no, active=unknown Attached to: #17 (IDE interface) Drive Speed: 31 Volume ID: "Movie" Publisher: "INTERVIDEO" Creation date: "20050424162207000" Thanks for the help. To Castro, hope this is what you meant and sorry for the comments..
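    Since the hwinfo output above shows both the drive and the inserted "Movie" disc are visible to the kernel, the failure looks to be on the desktop automounting side rather than the hardware. While that is being sorted out, a manual mount is one way to get at the disc's contents. Below is a small sketch of such a workaround; only /dev/sr0 comes from the thread itself, while the mount point path and the use of blkid to probe for media are assumptions for illustration (note that waltinator's advice above was to fix automounting rather than create a mount point).

    #!/usr/bin/env python
    # Sketch of a manual-mount workaround while automounting is broken.
    # Assumptions: /dev/sr0 is the drive (as reported by eject -n above);
    # /media/manualcd is an arbitrary mount point chosen for this example.
    import os
    import subprocess
    import sys

    DEVICE = "/dev/sr0"
    MOUNT_POINT = "/media/manualcd"

    def main():
        if not os.path.exists(DEVICE):
            sys.exit("%s does not exist - the kernel never created the device node" % DEVICE)

        # blkid exits non-zero when it cannot find a filesystem on the device,
        # which usually means no readable medium is present.
        if subprocess.call(["sudo", "blkid", DEVICE]) != 0:
            sys.exit("No readable filesystem found on %s" % DEVICE)

        if not os.path.isdir(MOUNT_POINT):
            subprocess.check_call(["sudo", "mkdir", "-p", MOUNT_POINT])

        # Mount read-only, mirroring the option tried in the question.
        subprocess.check_call(["sudo", "mount", "-o", "ro", DEVICE, MOUNT_POINT])
        print("Mounted %s on %s" % (DEVICE, MOUNT_POINT))

    if __name__ == "__main__":
        main()

    Unmount afterwards with sudo umount /media/manualcd. If udevadm monitor keeps showing "change" events but the desktop still does nothing, the udisks/gvfs automounting layer is the usual next place to look on Ubuntu 11.10.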

    Read the article
