Search Results

Search found 5231 results on 210 pages for 'red eye'.

  • SharePoint OCR image files indexing

    Introduction
    This article describes how to set up indexing of image files (including TIFF, PDF, JPEG, BMP...) using OCR technology. The indexing described below utilizes Microsoft IFilter technology and as such is not specific to SharePoint; it can be used with any product that uses Microsoft indexing: Microsoft Search, Desktop Search, SQL Server search, and, through plug-ins, Google Desktop Search. I, however, use it with Microsoft Windows SharePoint Services 2003. For those other products, the registration may need to be slightly different.

    Background
    One of the projects I was working on required storage of old documents scanned into PDF files. There was then a separate team of people responsible for providing tags for a search engine so those image documents could be found. The whole process was clumsy, labor intensive, and error prone. That was what started me on my exploration path.

    OCR
    The first search I fired was for open source OCR products. Pretty quickly, I narrowed it down to Tesseract (http://code.google.com/p/tesseract-ocr/). Tesseract is an orphaned brainchild of HP, which worked on it from 1985 to 1995. It was then moved to open source and now, if I understand it correctly, Google is working on it. With credentials like that, it's no wonder that Tesseract scores some of the highest marks for OCR recognition and accuracy. After downloading and struggling just a bit, I got Tesseract to work. The struggling part was that the home page claims its base input format is a TIFF file. Maybe my TIFFs were bad, but I was able to get it to work only with BMP files.

    Image file conversion
    So now that I have an OCR engine that can convert BMP files into text, how do I get text out of image PDF files? One more search, and I settled on ImageMagick (http://www.imagemagick.org/). This is another wonderful open source utility that can convert almost any file into an image. It worked out of the box, converting TIFF files into bitmaps, but to get PDF files converted it requires GhostScript (http://mirror.cs.wisc.edu/pub/mirrors/ghost/GPL/gs864/gs864w32.exe).

    Dealing with text PDFs
    With that utility installed, I was cooking - I can convert any file (in particular PDF and TIFF) into a bitmap, and then I can extract the text out of the bitmap. The only remaining consideration was to treat PDF files that already contain text differently - after all, OCR is very computation intensive and somewhat error prone even with perfect image quality and resolution. So, another quick search, and I had PDFTOTEXT (ftp://ftp.foolabs.com/pub/xpdf/xpdf-3.02pl4-win32.zip) - thank God for open source! With it, I can pull text out of a PDF in the blink of an eye. It returns nothing for pure image PDFs, but I already have a solution for that!

    Batch process
    It took another 15 minutes to set up a batch script to automate the process:
    - Check the file extension.
    - If the file is a PDF file, try to extract text out of it:
      - if there is more than a certain amount of text in the file - done!
      - if there is no text, convert the first page into a bitmap and run OCR on the bitmap.
    - For any other file type, convert the file into a bitmap and run OCR on the bitmap.
    Once you unzip the attached project, check out the bin\OCR.BAT file. It will create a temporary file in the directory where your source file is, with the same name plus a '.txt' extension.
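
    The article's OCR.BAT itself isn't reproduced here, but a minimal C# sketch of the same decision logic might look like the following. Note that the tool locations, the text-length threshold, and the exact command-line arguments are my own illustrative assumptions, not the author's actual script.

    // Minimal sketch of the OCR batch logic described above, assuming
    // tesseract.exe, convert.exe (ImageMagick) and pdftotext.exe are on the PATH.
    // The 50-character threshold and all file naming below are illustrative assumptions.
    using System;
    using System.Diagnostics;
    using System.IO;

    class OcrPipeline
    {
        static void Run(string exe, string args)
        {
            var info = new ProcessStartInfo(exe, args) { UseShellExecute = false };
            using (var p = Process.Start(info))
            {
                p.WaitForExit();
            }
        }

        static void Main(string[] args)
        {
            string source = args[0];
            string txt = source + ".txt";
            string bmp = Path.ChangeExtension(source, ".bmp");

            if (Path.GetExtension(source).Equals(".pdf", StringComparison.OrdinalIgnoreCase))
            {
                // Try the cheap route first: pull any embedded text out of the PDF.
                Run("pdftotext.exe", "\"" + source + "\" \"" + txt + "\"");
                if (File.Exists(txt) && new FileInfo(txt).Length > 50)
                    return; // enough text found - done!

                // Image-only PDF: render just the first page ([0]) to a bitmap.
                Run("convert.exe", "\"" + source + "[0]\" \"" + bmp + "\"");
            }
            else
            {
                // Any other file type: convert straight to a bitmap.
                Run("convert.exe", "\"" + source + "\" \"" + bmp + "\"");
            }

            // Tesseract appends .txt to the output base name it is given.
            Run("tesseract.exe", "\"" + bmp + "\" \"" + source + "\"");
        }
    }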

  • The Windows Store... why did I sign up with this mess again?

    - by FransBouma
    Yesterday, Microsoft revealed that the Windows Store is now open to all developers in a wide range of countries and locations. For the people who think "wtf is the 'Windows Store'?", it's the central place where Windows 8 users will be able to find, download and purchase applications (or as we now have to say to not look like a computer illiterate: <accent style="Kentucky">aaaaappss</accent>) for Windows 8. As this is the store which is integrated into Windows 8, it's an interesting place for ISVs, as potential customers might very well look there first. This of course isn't true for all kinds of software, and developer tools in general aren't the kind of applications most users will download from the Windows Store, but a presence there can't hurt. Now, this Windows Store hosts two kinds of applications: 'Metro-style' applications and 'Desktop' applications. The 'Metro-style' applications are applications created for the new 'Metro' UI which is present on the Windows 8 desktop and Windows RT (the single-color/big-font fingerpaint-oriented UI). 'Desktop' applications are the applications we all run and use on Windows today. Our software products are desktop applications. The Windows Store hosts all Metro-style applications locally in the store and handles the payment for these applications. This means you upload your application (sorry, 'app') to the store, jump through a lot of hoops, Microsoft verifies that your application is not violating a tremendously long list of rules, and after everything is OK, it's published and hopefully you get customers and thus earn money. Money which Microsoft will pay you on a regular basis after customers buy your application. Desktop applications don't follow this path, however. Desktop applications aren't hosted by the Windows Store. Instead, the Windows Store more or less hosts a page with the application's information and where to get the goods. I.o.w.: it's nothing more than a product's Facebook page. Microsoft will simply redirect a visitor of the Windows Store to your website, and the visitor will then use your site's system to purchase and download the application. This last bit of information is very important. So, this morning I started with fresh energy to register our company 'Solutions Design bv' at the Windows Store, along with our two applications, LLBLGen Pro and ORM Profiler. First I went to the Windows Store dashboard page. You have to log in with a live account, or sign up for one if you don't have it. I signed in with my live account. After that, it greeted me with a page where I had to fill in a code which was mailed to me. My local mail server polls for email every several minutes, so I had to kick it to get the code immediately. I grabbed the code from the email and was presented with a multi-step process to register myself as a company or as an individual. In red I was warned that this choice was permanent and not changeable. I chuckled: Microsoft apparently stores its data on paper, not in digital form. I chose 'company' and was presented with a lengthy form to fill out. On the form there were two strange remarks: per company there can be just 1 (one, uno, not zero, not two or more) registered developer, and only that developer is able to upload stuff to the store. I have no idea how this works with large companies, oh the overhead nightmares... "Sorry, but John, our registered developer with the Windows Store is on holiday for 3 months, backpacking through Australia, no, he's not reachable at this point. M'yeah, sorry bud.
Hey, did you fill in those TPS reports yesterday?" A separate Approver has to be specified, who has to be a different person than the registered developer. Apparently to Microsoft a company with just 1 person is not a company. Luckily there are two of us! *pfew*, dodged that one, otherwise I would be stuck forever: the choice I had already made was not reversible! After I had filled out the form and it was all well and good and accepted by the Microsoft lackey who had to write it all down in some paper notebook ("Hey, be warned! It's a permanent choice! Written down in ink, can't be changed!"), I was presented with the question of how I wanted to pay for all this. "Pay for what?" I wondered. Must be the paper they were scribbling the information on, I concluded. After all, there's a financial crisis going on! How could I forget! Silly me. "OK, fair enough". The price was 75 Euros, not the end of the world. I could only pay by credit card, so it was accepted quickly. Or so I thought. You see, Microsoft has a different idea about CC payments. In the normal world, you type in your CC number, some date, a name and a security code and that's it. But Microsoft wants to verify this even more. They want to make a verification purchase of a very small amount, with a special code in the description. You then have to type in that code in a special form in the Windows Store dashboard, and after that you're verified. Of course they'll refund the small amount they pull from your card. Sounds simple, right? Well... no. The problem starts with the fact that I can't see the CC activity on some website: I have a bank-issued CC card. I get the CC activity once a month on a piece of paper sent to me. The bank's online website doesn't show it. So it's possible I'll have to wait for this code till October 12th. One month. "So what, I'm not going to use it anyway, desktop applications don't use the payment system", I thought. "Haha, you're so naive, dear developer!" Microsoft won't allow you to publish any applications till this verification is done. So no application publishing for a month. Wouldn't it be nice if things were, you know, digital, so things got done instantly? But of course, that lackey who scribbled everything in the Big Windows Store Registration Book isn't that quick. Can't blame him though. He's just doing his job. Now, after the payment was done, I was presented with a page which told me Microsoft is going to use a third-party company called 'Symantec', which will verify my identity again. The page explained to me that this could be done through email or phone, and that they'll contact the Approver to verify my identity. "Phone?", I thought... that's a little drastic for a developer account to publish a single page of information about an externally hosted software product, isn't it? On Facebook I just added a page, done. And paying you, Microsoft, took less information: you were happy to take my money before my identity was even 'verified' by this 3rd party's minions! "Double standards!", I roared. No-one cared. But it's the thought of getting it off your chest, you know. Luckily for me, everyone at Symantec was asleep when I was registering, so they went for the fallback option in case phone calls were not possible: my Approver received an email. Imagine having to explain the idiotic web of security theater I was caught in to someone else, who then has to assure a random person over the internet that I indeed was who I said I was.
As she's a true sweetheart, she gave me the benefit of the doubt and assured them that, for now, I was who I said I was. Remember, this is for a desktop application, which is only a link to a website, some pictures and a piece of text. No file hosting, no payment processing, nothing, just a single page. Yeah, I also thought I was crazy. But we're not at the end of this quest yet. I clicked around in the confusing menus of the Windows Store dashboard and found the 'Desktop' section. I got a helpful screen with a warning in red that it can't find any certified 'apps'. True, I'm just getting started, buddy. I see a link: "Check the Windows apps you submitted for certification". Well, I haven't submitted anything, but let's see where it brings me. Oh, the thrill of adventure! I click the link and I end up on this site: the hardware/desktop dashboard account registration. "Erm... but I just registered...", I mumbled to no-one in particular. Apparently for desktop registration / verification I have to register again, it tells me. But not only that: the desktop application has to be signed with a certificate. And not just some random el-cheapo certificate you can get at any mall's discount store. No, this certificate is special. It's precious. This certificate, the 'Microsoft Authenticode' Digital Certificate, is the only certificate that's acceptable, and jolly, it can be purchased from VeriSign for the price of only ... $99.-, but be quick, because this is a limited time offer! After that it's, I kid you not, $499.-. 500 dollars for a certificate to sign an executable. But I do feel special, I got a special price. Only for me! I'm glowing. Not for long though. Here I started to wonder what the benefit of it all was. I now again had to pay money for a shiny certificate which will add 'Solutions Design bv' to our installer as the publisher instead of 'unknown', while our customers download the file from our website. Not only that, but this was all about a desktop application, which isn't hosted by Microsoft. They only link to it. And make no mistake: these prices aren't single payments. Every year they have to be renewed. Like a membership of an exclusive club: you're special and privileged, but only if you cough up the dough. To give you an example of how silly this all is: I added LLBLGen Pro and ORM Profiler to the Visual Studio Gallery some time ago. It's the same thing: a central place where one can find software which adds to / extends / works with Visual Studio. I could simply create the pages, add the information, and they show up inside Visual Studio. No files are hosted at Microsoft; they're downloaded from our website. Exactly the same system. As I have to wait for the CC transcripts to arrive anyway, I can't proceed with publishing in this new shiny store. After the verification is complete I have to wait for verification of my software by Microsoft. Even desktop applications need to be verified using a long list of rules which are mainly focused on Metro-style applications, even though they're not hosted by Microsoft. I wonder what they'll find. "Your application wasn't approved. It violates rule 14 X sub D: it provides more value than our own competing framework". While I was writing this post, I tried to check something in the Windows Store dashboard, to see whether I remembered it correctly. I was presented again, after logging in with my live account, with the demand to enter a code that was just mailed to me. Not the previous code, a brand new one.
Again I had to kick my mail server to pull the email in order to proceed. This was it. This 'experience' is so beyond miserable, I'm afraid I have to say goodbye for now to the 'Windows Store'. It's simply not worth my time. Now, about live accounts. You might know this: live accounts are tied to everything you do with Microsoft. So if you have an MSDN subscription, e.g. the one which costs over $5000.-, it's tied to this same live account. But the fun thing is, you can log in to the MSDN subscriptions with your live account with just the account id and password. No additional code is mailed to you, even though it gives you access to all Microsoft software available, including your licenses. Why the draconian security theater with this Windows Store, when all I want is to publish some desktop applications, while on other Microsoft sites it's OK to simply sign in with your live account: no codes needed, no verification and no certificates? Microsoft, one thing you need with this store, and that's: apps. Apps, apps, apps, apps, aaaaaaaaapps. Sorry, my bad, got carried away. I just can't stand the word 'app'. This store's shelves have to be filled to the brim with goods. But instead of being welcomed into the store with open arms, I have to fight an uphill battle with an endless list of rules and bullshit to earn the privilege of publishing in this shiny store. As if I have to be thrilled to be one of the exclusive club called 'Windows Store Publishers'. It's as if Microsoft doesn't want it to succeed. Craig Stuntz sent me a link to an old blog post of his regarding code signing and uploading to Microsoft's old mobile store from back in the WinMo5 days: http://blogs.teamb.com/craigstuntz/2006/10/11/28357/. A good read and good background info about how little things have changed over the years. I hope this helps Microsoft make things clearer and smoother, and also helps ISVs with their decision whether to go with the Windows Store scheme or ignore it. For now, I don't see the advantage of publishing there, especially not with the nonsense rules Microsoft cooked up. Perhaps that will change in the future, who knows.

  • SQLAuthority News – Android Efficiency Tips and Tricks – Personal Technology Tip #003

    - by pinaldave
    I use my phone for lots of things. I use it mainly to replace my tablet - I can e-mail, take and edit photos, and do almost everything I can do on a laptop with this phone. And I am sure that there are many of you out there just like me. I personally have a Galaxy S3, which uses the Android operating system, and I have decided to feature it in the third installment of my Technology Tips and Tricks series.

    1) Shortcut to your favorite contacts on the home screen
    Access your most-called contacts easily from your home screen by holding your finger on any empty spot on the home screen. A menu will pop up that allows you to choose Shortcuts, and then Contact. You can scroll through your contact list and then just tap on the name of the person you want to be able to dial with a single click.

    2) Keep track of your data usage
    Yes, we all should keep a close eye on our data usage, because it is very easy to go over our limits and then end up with a giant bill at the end of the month. Never get surprised when you open that mobile phone envelope again. Go to Settings, then Data Usage, and you can find a quick rundown of your usage, how much data each app uses, and you can even set alarms to let you know when you are nearing the limits. Better yet, you can set the phone to stop using data when it reaches a certain limit.

    3) Bring back Good Grammar
    We often hear proclamations about the downfall of written language, and how texting abbreviations, misspellings, and lack of punctuation are the root of all evil. Well, we can show all those doomsdayers that all is not lost by bringing punctuation back to texting. Usually we leave it off when we text because it takes too long to get to the screen with all the punctuation options. But now you can hold down the period (or "full stop") button and a list of all the commonly-used punctuation marks will pop right up.

    4) Apps, Apps, Apps and Apps
    And finally, I cannot end an article about smart phones without including a list of my favorite apps. Here is a list of my Top 10 applications on my Android (not counting social media apps):

    Advanced Task Killer - keeps my phone snappy by closing unnecessary apps
    WhatsApp - my favorite alternative to text SMS
    Flipboard - my 'timepass' moments
    Skype - keeps me close to friends and family
    Google Maps - I am never lost because of this one thing
    Amazon Kindle - books, my best friends
    DropBox - my data, always safe
    Pluralsight Player - learning never stops for me
    Samsung Kies Air - connecting phone to computer
    Chrome - replacing the default browser

    I have not included any social media applications in the above list, but you can be sure that I am linked to Twitter, Facebook, Google+, LinkedIn, and YouTube. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Android, Personal Technology

  • Create excel files with GemBox.Spreadsheet .NET component

    - by hajan
    Generating Excel files from .NET code is not always an easy task, especially if you need to do some formatting or something very specific that requires extra coding. I've recently tried GemBox.Spreadsheet and I would like to share my experience with you. First of all, you can install the GemBox.Spreadsheet library from the VS.NET 2010 Extension Manager by searching the gallery: go to the Online Gallery tab and type GemBox in the search box at the top right of the Extension Manager. Click Download on GemBox.Spreadsheet and you will be directed to the product website, where you will find the component download link. Once you download it, install the MSI file. Open the installation folder and find the Bin folder. There you have GemBox.Spreadsheet.dll in three folders, each for a different .NET Framework version. Now, let's move to Visual Studio.NET.

    1. Create a sample ASP.NET Web Application and give it a name.
    2. Reference the GemBox.Spreadsheet.dll file in your project. You don't need to search for the dll file on your disk; you can simply find it in the .NET tab of the 'Add Reference' window, where you have all three versions. I chose the version for the 4.0.30319 runtime.

    Next, I will retrieve data from my Pubs database. I'm using Entity Framework. Here is the code (read the comments in it):

    // get data from the pubs database, tables: authors, titleauthor, titles
    pubsEntities context = new pubsEntities();
    var authorTitles = (from a in context.authors
                        join tl in context.titleauthor on a.au_id equals tl.au_id
                        join t in context.titles on tl.title_id equals t.title_id
                        select new AuthorTitles
                        {
                            Name = a.au_fname,
                            Surname = a.au_lname,
                            Title = t.title,
                            Price = t.price,
                            PubDate = t.pubdate
                        }).ToList();

    // using the GemBox library now
    ExcelFile myExcelFile = new ExcelFile();
    ExcelWorksheet excWsheet = myExcelFile.Worksheets.Add("Hajan's worksheet");
    excWsheet.Cells[0, 0].Value = "Pubs database Authors and Titles";
    excWsheet.Cells[0, 0].Style.Borders.SetBorders(MultipleBorders.Bottom, System.Drawing.Color.Red, LineStyle.Thin);
    excWsheet.Cells[0, 1].Style.Borders.SetBorders(MultipleBorders.Bottom, System.Drawing.Color.Red, LineStyle.Thin);

    int numberOfColumns = 5; // the number of properties in AuthorTitles
    // set the width of each column
    for (int c = 0; c < numberOfColumns; c++)
    {
        excWsheet.Columns[c].Width = 25 * 256;
    }

    // header row cells
    excWsheet.Rows[2].Cells[0].Value = "Name";
    excWsheet.Rows[2].Cells[1].Value = "Surname";
    excWsheet.Rows[2].Cells[2].Value = "Title";
    excWsheet.Rows[2].Cells[3].Value = "Price";
    excWsheet.Rows[2].Cells[4].Value = "PubDate";

    // bind authorTitles in the excel worksheet
    int currentRow = 3;
    foreach (AuthorTitles at in authorTitles)
    {
        excWsheet.Rows[currentRow].Cells[0].Value = at.Name;
        excWsheet.Rows[currentRow].Cells[1].Value = at.Surname;
        excWsheet.Rows[currentRow].Cells[2].Value = at.Title;
        excWsheet.Rows[currentRow].Cells[3].Value = at.Price;
        excWsheet.Rows[currentRow].Cells[4].Value = at.PubDate;
        currentRow++;
    }

    // stylizing the excel file look
    CellStyle style = new CellStyle(myExcelFile);
    style.HorizontalAlignment = HorizontalAlignmentStyle.Left;
    style.VerticalAlignment = VerticalAlignmentStyle.Center;
    style.Font.Color = System.Drawing.Color.DarkRed;
    style.WrapText = true;
    style.Borders.SetBorders(MultipleBorders.Top
        | MultipleBorders.Left | MultipleBorders.Right
        | MultipleBorders.Bottom, System.Drawing.Color.Black,
        LineStyle.Thin);

    // apply the created style to the absolute range (firstRow, firstColumn, lastRow, lastColumn);
    // in this example: firstRow = 3, firstColumn = 0, lastRow = authorTitles.Count + 2, lastColumn = numberOfColumns - 1
    excWsheet.Cells.GetSubrangeAbsolute(3, 0, authorTitles.Count + 2, numberOfColumns - 1).Style = style;

    // save the excel file
    myExcelFile.SaveXls(Server.MapPath(".") + @"/myFile.xls");

    The AuthorTitles class:

    public class AuthorTitles
    {
        public string Name { get; set; }
        public string Surname { get; set; }
        public string Title { get; set; }
        public decimal? Price { get; set; }
        public DateTime PubDate { get; set; }
    }

    The Excel file will be generated in the root of your ASP.NET Web Application. There is a lot more you can do with this library. A good set of examples is available in the GemBox.Spreadsheet Samples Explorer application, which comes with the installation; you can find it by default in Start -> All Programs -> GemBox Software -> GemBox.Spreadsheet Samples Explorer. Hope this was useful for you. Best Regards, Hajan

  • Observations From The Corner of a Starbucks

    - by Chris Williams
    I've spent the last 3 days sitting in a Starbucks for 4-8 hours at a time. As a result, I've observed a lot of interesting behavior and people (most of whom were uninteresting themselves). One of the things I've noticed is that most people don't sit down. They come in, get their drink and go. The ones that do sit down stay much longer than it takes to consume their drink. The drink is just an incidental purchase. Certainly not the reason they are here. Most of the people who sit also have laptops. Probably around 75%. Only a few have kids (with them) but the ones that do have very small kids. Toddlers or younger. Of all the "campers" only a small percentage are wearing headphones, presumably because A) external noise doesn't bother them or B) they aren't working on anything important. My buddy George falls into category A, but he grew up in a house full of people. Silence freaks him out far more than noise. My brother and I, on the other hand, were both only children and don't handle noisy distractions well. He needs it quiet (like a tomb) and I need music. Go figure… I can listen to Britney Spears mixed with Apoptygma Berzerk and Anthrax and crank out 30 pages, but if your toddler is banging his spoon on the table, you're getting a dirty look… unless I have music, then all is right with the world. Anyway, enough about me. Most of the people who come in as a group are smiling when they enter. Half as many are smiling when they leave. People who come in alone typically aren't smiling at all. The average age, over the last three days, seems to be early 30s… with a couple of senior citizens and teenagers at either end of the curve. The teenagers almost never stay. They have better stuff to do on a nice day. The senior citizens are split nearly evenly between campers and in&outs. Most of the non-solo campers have 1 person with a laptop, while the other reads the paper or a book. Some campers bring multiple laptops… but only really look at one of them. This Starbucks has a drive-through. The line is almost never more than 2-3 cars long, but apparently a lot of the in&out people would rather come in and stand in line behind (up to) 5 people. The music in here sucks. My musical tastes can best be described as eclectic to bad, but I can still get work done (see above.) I find the music in this particular Starbucks to be discordant and jarring. At this Starbucks, the coffee lingo is apparently something that is meant to occur between employees only. The nice lady at the counter can handle orders in plain English and translate them to Baristaspeak (Baristese?) quite efficiently. If you order in Baristaspeak, however, she will look confused and repeat your order back to you in plain English to confirm you actually meant what you said. Then she will say it in Baristaspeak to the lady making your drink. Nobody in this Starbucks (other than the Baristas) makes eye contact… at least not with me. Of course that may be indicative of a separate issue. ;)

  • What’s the use of code reuse?

    - by Tony Davis
    All great developers write reusable code, don't they? Well, maybe, but as with all statements regarding what "great" developers do or don't do, it's probably an over-simplification. A novice programmer, in particular, will encounter in the literature a general assumption of the importance of code reusability. They spend time worrying about DRY (don't repeat yourself), moving logic into specific "helper" modules that they can then reuse, agonizing about the minutiae of the class structure, inheritance and interface design that will promote easy reuse. Unfortunately, writing code specifically for reuse often leads to complicated object hierarchies and inheritance models that are anything but reusable. If, instead, one strives to write simple code units that are highly maintainable and perform a single function, in a concise, isolated fashion, then the potential for reuse simply "drops out" as a natural by-product. Programmers, of course, care about these principles, about encapsulation and clean interfaces that don't expose inner workings and allow easy pluggability. This is great when it helps with the maintenance and development of code, but how often, in practice, do we actually reuse our code? Most DBAs and database developers are familiar with the practical reasons for the limited opportunities to reuse database code and its potential downsides. However, surely elsewhere in our code base, reuse happens often. After all, we can all name examples, such as date/time handling modules, which, if we write them with enough care, we can plug into many places. I spoke to a developer just yesterday who looked me in the eye and told me that in 30+ years as a developer (a successful one, I'd add), he'd never once reused his own code. As I sat blinking in disbelief, he explained that, of course, he always thought he would reuse it. He'd often agonized over its design, certain that he was creating code of great significance that he and other generations would reuse, with grateful tears misting their eyes. In fact, it never happened. He had in his head most of the algorithms he needed and would simply write the code from scratch each time, refining the algorithms and tailoring the code to meet the specific requirements. It was, he said, simply quicker to do that than dig out the old code, check it, correct the mistakes, and adapt it. Is this a common experience, or just a strange anomaly? Viewed in a certain light, building code with a focus on reusability seems to hark back to a past age where people built cars and music systems with the idea that someone else could and would replace and reuse the parts. Technology advances so rapidly that the next time you need the "same" code, it's likely a new technique, or a whole new language, has emerged in the meantime, better equipped to tackle the task. Maybe we should be less fearful of the idea that we could write code well suited to the system requirements, but with little regard for reuse potential, and then rewrite a better version from scratch the next time.

  • SQLAuthority News – Top 5 Latest Microsoft Certifications of 2013 – Guest Post

    - by Pinal Dave
    With the IT job market getting more and more competitive by the day, certifications are a must for anyone who wishes to get a strong foothold in the industry. Microsoft comes up with regular updates and enhancements to its existing products to keep up with the rapidly evolving requirements of the ICT industry. We bring you a list of the latest Microsoft certifications that you should consider acquiring this year.

    MCSE: SharePoint
    Learn all about Windows Server 2012 and Microsoft SharePoint 2013, which brings an advanced set of features to the fore in this latest version. It introduces new capabilities for business intelligence, social media, branding, search, identity management, and mobile devices, among other features. Enjoy a great user experience with sharing and collaboration in the community forum, within a pixel-perfect SharePoint website. Data connectivity and business intelligence tools allow users to process and access data, analyze reports, and share and collaborate with each other more conveniently.

    Microsoft Specialist: Microsoft Project 2013
    The only project management system that works seamlessly with other Microsoft applications and cloud solutions, MS Project 2013 offers more than meets the eye. It provides for easier management and monitoring of projects so that users can ensure timely delivery while improving productivity significantly. So keep all your projects on track and collaborate with your team like never before with this enhanced release! This one's a must for all project managers.

    MCSE Messaging
    Another of Microsoft's gems is its messaging environment, which has also seen a new release: Microsoft Exchange Server 2013. Messaging administrators can take up this training and validate their expertise in Unified Messaging, Exchange Online, PowerShell and virtualization strategies through the MCSE Messaging certification in Exchange Server. If you wish to enhance the productivity and data security of your organization while staying flexible and extremely efficient, this is the right certification for you.

    MCSE Communication
    An enterprise can function optimally on the strength of its information flow and communication systems. With Lync Server 2013, you can introduce a whole new world of unified communications, consisting of audio/video conferencing, dial-in, Persistent Chat, instant chat, and EDGE services, into your organization. Utilize IT to serve and support business objectives by mastering this UC technology with the latest MCSE Communication course on using Microsoft Lync Server 2013.

    MCSE: SQL Server 2012 BI Platform
    The decision-making process is largely influenced by the underlying enterprise information used by management for business intelligence. Therefore, a robust business intelligence platform that anchors enterprise IT and transforms it into operational efficiencies is the need of the hour. The SQL Server 2012 BI Platform certification helps professionals implement, manage and maintain a BI database infrastructure effectively. IT professionals with BI skills are highly sought after these days.

    MCSD: Windows Store Apps
    A Microsoft Certified Solutions Developer certification in Windows Store Apps validates your potential in designing interactive apps. Learn the Essentials of Developing Windows Store Apps using HTML5 and JavaScript and establish yourself as an ace developer capable of creating fast and fluid Metro-style apps for Windows 8 that are accessible on a variety of devices. You can also take the Essentials of Developing Windows Store Apps using C# track if you're already familiar with the C# programming language. Developers are thus free to choose their favorite development stream, which opens doors for them to get ready for the latest and most exciting application development platform: Windows Store apps. Software developers with these skills are in great demand in the industry today.

    To remain competitive in their respective fields, it is imperative that IT personnel update their knowledge on a regular basis. Certifications are a means to achieve this goal. No longer considered an optional prerequisite, major IT certifications such as these are now essential to stay afloat in a cut-throat industry where technologies change on a daily basis. This blog is written by Aruneet Anand of Koenig Solutions. Koenig Solutions does training for all of the above courses. For more information, visit the website. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Microsoft Certifications

  • The Evolution Of C#

    - by Paulo Morgado
    The first release of C# (C# 1.0) was all about building a new language for managed code that appealed, mostly, to C++ and Java programmers. The second release (C# 2.0) was mostly about adding what there wasn't time to build into the 1.0 release. The main feature of this release was generics. The third release (C# 3.0) was all about reducing the impedance mismatch between general-purpose programming languages and databases. To achieve this goal, several functional programming features were added to the language, and LINQ was born. Going forward, new trends are showing up in the industry, and modern programming languages need to be more:

    Declarative
    With imperative languages, although the eye is on the what, programs need to focus on the how. This leads to over-specification of the solution to the problem at hand, making it next to impossible for the execution engine to be smart about the execution of the program and optimize it to run more efficiently (given the hardware available, for example). Declarative languages, on the other hand, focus only on the what and leave the how to the execution engine. LINQ made C# more declarative by using higher-level constructs like orderby and group by that give the execution engine a much better chance of optimizing the execution (by parallelizing it, for example).

    Concurrent
    Concurrency is hard, needs to be thought about, and is very hard to shoehorn into a programming language. Parallel.For (from the parallel extensions) looks like a parallel for because enough expressiveness has been built into C# 3.0 to allow this without having to commit to specific language syntax.

    Dynamic
    There has been lots of debate on which kind of programming language is better: static or dynamic. The fact is that both have good qualities, and users of both types of languages want to have it all.

    All these trends require a paradigm switch. C# is, in many ways, already a multi-paradigm language. It's still very object oriented (class oriented, as some might say), but it can be argued that C# 3.0 has become a functional programming language, because it has all the cornerstones of what a functional programming language needs. Moving forward, it will have even more. Besides the influence of these trends, there was a decision of co-evolution of the C# and Visual Basic programming languages. Since its inception, there has been some effort to position C# and Visual Basic against each other and to try to explain what should be done with each language or what kind of programmers use one or the other. Each language should be chosen based on the past experience and familiarity of the developer/team/project/company and not on particular features. In the past, every time a feature was added to one language, the users of the other wanted that feature too. Going forward, when a feature is added to one language, the other will work hard to add the same feature. This doesn't mean that XML literals will be added to C# (because almost the same can be achieved with LINQ to XML), but Visual Basic will have auto-implemented properties. Most of these features require or are built on top of features of the .NET Framework, and the focus for C# 4.0 was on dynamic programming: not just dynamic types, but being able to talk with anything that isn't a .NET class. Also introduced in C# 4.0 are covariance and contravariance for generic interfaces and delegates. Stay tuned for more on the new C# 4.0 features.
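
    As a quick illustration of those last two points, here is a small C# 4.0 sketch; it is my own minimal example of dynamic dispatch and generic variance, not code from the original post.

    // Minimal C# 4.0 sketch of the dynamic and variance features described above.
    using System;
    using System.Collections.Generic;

    class Evolution
    {
        static void Main()
        {
            // dynamic: member lookup is deferred until runtime.
            dynamic anything = "hello";
            Console.WriteLine(anything.Length);    // resolved at runtime, prints 5

            // Covariance: IEnumerable<out T> lets a sequence of strings
            // be used where a sequence of objects is expected.
            IEnumerable<string> strings = new List<string> { "a", "b" };
            IEnumerable<object> objects = strings; // legal as of C# 4.0

            // Contravariance: Action<in T> lets a handler of the base type
            // stand in for a handler of a derived type.
            Action<object> printAny = o => Console.WriteLine(o);
            Action<string> printString = printAny; // legal as of C# 4.0
            printString("covariance and contravariance at work");

            foreach (object o in objects) Console.WriteLine(o);
        }
    }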

  • F# WPF Form – the basics

    - by MarkPearl
    I was listening to Dot Net Rocks show #560 about F#, and during the podcast Richard Campbell brought up a good point with regards to F# and a GUI. In essence, what I understood his point to be was that until one could write an end-to-end application in F#, it would be a hard sell to developers to take it on. In part I agree with him; while I am beginning to really enjoy learning F#, I can't help but feel that I would be a lot further into the language if I could do my Windows Forms like I do in C# or VB.NET, for the simple reason that in "playing" applications I spend the majority of the time in the UI layer… So I have been keeping my eye out for some examples of creating a WPF form in an F# project and came across Tim's F# Twitter Stream Sample, which had exactly this… of course he actually had a bit more than a basic form… but it was enough for me to scrap the insides and glean what I needed. So today I am going to make just the very basic WPF form with all the goodness of a XAML window.

    Getting Started
    First thing we need to do is create a new solution with a blank F# application project - I have called mine FSharpWPF. Once you have the project created you will need to change the project type from a Console Application to a Windows Application. You do this by right-clicking on the project file and going to its properties. Once that is done you will need to add the appropriate references. You do this by right-clicking on References in the Solution Explorer and clicking 'Add Reference'. You should add the appropriate .NET references for WPF & XAML to work. Once these references are added you then need to add your XAML file to the project. You can do this by adding a new item to the project of type xml and simply changing the file extension from xml to xaml. Once the xaml file has been added to the project you will need to add valid window XAML. An example of a very basic xaml file is shown below…

    <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="F# WPF Form" Height="350" Width="525">
        <Grid>
        </Grid>
    </Window>

    Once your xaml file is done, you need to set the build action of the xaml file from "None" to "Resource". If you do not set this you will get an IOException error when running the completed project, with a message along the lines of "Cannot locate resource 'window.xaml'". You then need to tie everything up by putting the correct F# code in Program.fs to load the xaml window. In Program.fs put the following code…

    module Program

    open System
    open System.Collections.ObjectModel
    open System.IO
    open System.Windows
    open System.Windows.Controls
    open System.Windows.Markup

    [<STAThread>]
    [<EntryPoint>]
    let main(_) =
        let w = Application.LoadComponent(new System.Uri("/FSharpWPF;component/Window.xaml", System.UriKind.Relative)) :?> Window
        (new Application()).Run(w)

    Once all this is done you should be able to build and run your project. What you have done is created a WPF-based window inside an F# project. Nothing too exciting, but sufficient to illustrate the very basic WPF form in F#. Hopefully in future posts I will build on this to expose button events etc.

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article's comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin interface, we were able to work out that a few days previously the article had been subject to a spam attack, and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-Talk website usage more effectively. However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn't keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn't alter the database, as it is a bought-in package. We had neither the resources nor the inclination to build in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well? Of course, SQL Monitor's single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server wait stats from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we can now monitor any metric in the application that returns an integer. It took little time to write a simple SQL query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage. You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate date range (e.g. last 30 days). It's nascent, and we're still working on it, but it's already given us more confidence that we'll quickly spot trends, bugs, or bursts of 'abnormal' activity. If there is a sudden rise in comments, we get an alert, and if it's due to a spam attack, we can moderate or ban the perpetrator very quickly. We've often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we've rather appreciated being able to make the best use of what's there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.

  • Exception Handling Frequency/Log Detail

    - by Cyborgx37
    I am working on a fairly complex .NET application that interacts with another application. Many single-line statements are possible culprits for throwing an Exception, and there is often nothing I can do to check the state before executing them to prevent these Exceptions. The question is, based on best practices and seasoned experience, how frequently should I lace my code with try/catch blocks? I've listed three examples below, but I'm open to any advice. I'm really hoping to get some pros/cons of various approaches. I can certainly come up with some of my own (greater log granularity for the O-C approach, better performance for the Monolithic approach), so I'm looking for experience over opinion. EDIT: I should add that this application is a batch program. The only "recovery" necessary in most cases is to log the error, clean up gracefully, and quit. So this could be seen to be as much a question of log granularity as of exception handling. In my mind's eye I can imagine good reasons for both, so I'm looking for some general advice to help me find an appropriate balance.

    Monolithic Approach

    class Program {
        public static void Main() {
            try {
                Step1();
                Step2();
                Step3();
            } catch (Exception e) {
                Log(e);
            } finally {
                CleanUp();
            }
        }
        public static void Step1() {
            ExternalApp.Dangerous1();
            ExternalApp.Dangerous2();
        }
        public static void Step2() {
            ExternalApp.Dangerous3();
            ExternalApp.Dangerous4();
        }
        public static void Step3() {
            ExternalApp.Dangerous5();
            ExternalApp.Dangerous6();
        }
    }

    Delegated Approach

    class Program {
        public static void Main() {
            try {
                Step1();
                Step2();
                Step3();
            } finally {
                CleanUp();
            }
        }
        public static void Step1() {
            try {
                ExternalApp.Dangerous1();
                ExternalApp.Dangerous2();
            } catch (Exception e) {
                Log(e);
                throw;
            }
        }
        public static void Step2() {
            try {
                ExternalApp.Dangerous3();
                ExternalApp.Dangerous4();
            } catch (Exception e) {
                Log(e);
                throw;
            }
        }
        public static void Step3() {
            try {
                ExternalApp.Dangerous5();
                ExternalApp.Dangerous6();
            } catch (Exception e) {
                Log(e);
                throw;
            }
        }
    }

    Obsessive-Compulsive Approach

    class Program {
        public static void Main() {
            try {
                Step1();
                Step2();
                Step3();
            } finally {
                CleanUp();
            }
        }
        public static void Step1() {
            try { ExternalApp.Dangerous1(); } catch (Exception e) { Log(e); throw; }
            try { ExternalApp.Dangerous2(); } catch (Exception e) { Log(e); throw; }
        }
        public static void Step2() {
            try { ExternalApp.Dangerous3(); } catch (Exception e) { Log(e); throw; }
            try { ExternalApp.Dangerous4(); } catch (Exception e) { Log(e); throw; }
        }
        public static void Step3() {
            try { ExternalApp.Dangerous5(); } catch (Exception e) { Log(e); throw; }
            try { ExternalApp.Dangerous6(); } catch (Exception e) { Log(e); throw; }
        }
    }

    Other approaches are welcomed and encouraged. The above are examples only.

  • Podcast Show Notes: Collaborate 10 Wrap-Up - Part 1

    - by Bob Rhubart
    OK, I know last week I promised you a program featuring Oracle ACE Directors Mike van Alst (IT-Eye) and Jordan Braunstein (TUSC) and The Definitive Guide to SOA: Oracle Service Bus author Jeff Davies. But things happen. In this case, what happened was Collaborate 10 in Las Vegas. Prior to the event I asked Oracle ACE Director and OAUG board member Floyd Teter to see if he could round up a couple of people at the event for an impromptu interview over Skype (I was here in Cleveland) to get their impressions of the event. Listen to Part 1. Floyd, armed with his brand new iPad, went above and beyond the call of duty. At the appointed hour, which turned out to be about an hour after the close of Collaborate 10, Floyd had gathered nine other people to join him in a meeting room somewhere in the Mandalay Bay Convention Center. Here's the entire roster:

    Floyd Teter - Project Manager at Jet Propulsion Lab, OAUG Board
    Mark Rittman - EMEA Technical Director and Co-Founder, Rittman Mead, ODTUG Board
    Chet Justice - OBI Consultant at BI Wizards
    Elke Phelps - Oracle Applications DBA at Humana, OAUG SIG Chair
    Paul Jackson - Oracle Applications DBA at Humana
    Srini Chavali - Enterprise Database & Tools Leader at Cummins, Inc
    Dave Ferguson - President, Oracle Applications Users Group
    John King - Owner, King Training Resources
    Gavyn Whyte - Project Portfolio Manager at iFactory Consulting
    John Nicholson - Channels & Alliances at Greenlight Technologies

    Big thanks to Floyd for assembling the panelists and handling the on-scene MC/hosting duties. Listen to Part 1. On a technical note, this discussion was conducted over Skype, using Floyd's iPad placed in the middle of the table. During the call the audio was fantastic - the iPad did a remarkable job. Sadly, the Technology Gods were not smiling on me that day. The audio set-up that I tested successfully before the call failed to deliver when we first connected - I could hear the folks in Vegas, but they couldn't hear me. A frantic, last-minute adjustment appeared to have fixed that problem, and the audio in my headphones from both sides of the conversation was loud and clear. It wasn't until I listened to the playback that I realized that something was wrong. So the audio for the Vegas side of the discussion has about the same fidelity as a cell phone. It's listenable, but disappointing when compared to what it sounded like during the discussion. Still, this was a one-shot deal, and the roster of panelists and the resulting conversation was too good and too much fun to scrap just because of an unfortunate technical glitch. Part 2 of this Collaborate 10 Wrap-Up will run next week. After that, it's back on track with the previously scheduled program. So stay tuned.

  • Archiving SQLHelp tweets

    - by jamiet
    #SQLHelp is a Twitter hashtag that can be used by any Twitter user to get help from the SQL Server community. I think it's fair to say that in its first year it has proved to be a very useful resource; however, Kendra Little (@kendra_little) made a very salient point yesterday when she tweeted:

    "Is there a way to search the archives of #sqlhelp Trying to remember answer to a question I know I saw a couple months ago" http://twitter.com/#!/Kendra_Little/status/15538234184441856

    This highlights an inherent problem with Twitter's search capability - it simply does not reach far enough back in time. I have made steps to remedy that situation by putting into place two initiatives to archive tweets that contain the #sqlhelp hashtag.

    The Archivist
    http://archivist.visitmix.com/ is a free service that, quite simply, archives a history of tweets that contain a given search term by periodically polling Twitter's search service with that search term and subsequently displaying a dashboard providing an aggregate view of those tweets for things like tweet volume over time, top users and top words (Archivist FAQ). I have set up an archive on The Archivist for "sqlhelp", which you can view at http://archivist.visitmix.com/jamiet/7. Here is a screenshot of the SQLHelp dashboard 36 minutes after I set it up: there is lots of good information in there, including the fact that Jonathan Kehayias (@SQLSarg) is the most active SQLHelp tweeter (I suspect as an answerer rather than a questioner) and that SSIS has proven to be a rather (ahem) popular subject!!

    Datasift
    The Archivist has its uses, though for our purposes it has a couple of downsides. For starters, you cannot search through an archive (which is what Kendra was after), nor can you export the contents of the archive for offline analysis. For those functions we need something a bit more heavyweight, and for that I present to you Datasift. Datasift is a tool (currently an alpha release) that allows you to search for tweets and provides them through an object called a Datasift stream. That sounds very similar to normal Twitter search, though it has one distinct advantage that other Twitter search tools do not - Datasift has access to Twitter's Streaming API (aka the Twitter Firehose). In addition it has access to a lot of other rather nice features:
    - It provides the Datasift API that allows you to consume the output of a Datasift stream in your tool of choice (bring on my favourite ultimate mashup tool)
    - It has a query language (called Filtered Stream Definition Language - FSDL for short)
    - A Datasift stream can consume (and filter) other Datasift streams
    - Datasift can (and does) consume services other than Twitter
    If I refer to Datasift as "ETL for tweets" then you may get some sort of idea what it is all about. Just as I did with The Archivist, I have set up a publicly available Datasift stream for "sqlhelp" at http://datasift.net/stream/1581/sqlhelp. Here is the FSDL query that provides the data:

    twitter.text contains "sqlhelp"

    Pretty simple, eh? At the current time it provides little more than a rudimentary dashboard, but as Datasift is currently an alpha release I think this may be worth keeping an eye on.
The real value though is the ability to consume the output of a stream via Datasift’s RESTful API, observe: http://api.datasift.net/stream.xml?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d&username=jamiet&api_key=XXXXXXX (Note that an api_key is required during the alpha period so, given that I’m not supplying my api_key, this URI will not work for you) Just to prove that a Datasift stream can indeed consume data from another stream I have set up a second stream that further filters the first one for tweets containing “SSIS”. That one is at http://datasift.net/stream/1586/ssis-sqlhelp and here is the FSDL query: rule "414c9845685ff8d2548999cf3162e897" and (interaction.content contains "ssis") When Datasift moves beyond alpha I’ll re-assess how useful this is going to be and post a follow-up blog. @Jamiet
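To make that more concrete, here is a minimal C# sketch of calling that endpoint - illustrative only, using the stream identifier and username from the URI above and a placeholder api_key you would substitute with your own:

using System;
using System.Net;

class DatasiftStreamReader
{
    static void Main()
    {
        // Values taken from the URI above; the api_key is a placeholder.
        const string uri = "http://api.datasift.net/stream.xml"
            + "?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d"
            + "&username=jamiet&api_key=YOUR_API_KEY";

        using (var client = new WebClient())
        {
            // Performs the HTTP GET and returns the XML payload as a string.
            string xml = client.DownloadString(uri);
            Console.WriteLine(xml);
        }
    }
}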

    Read the article

  • The Oracle Retail Week Awards - most exciting awards yet?

    - by sarah.taylor(at)oracle.com
    Last night's annual Oracle Retail Week Awards saw the UK's top retailers come together to celebrate the very best of our industry over the last year. The Grosvenor House Hotel on Park Lane in London was the setting for an exciting ceremony which this year marked several significant milestones in British - and global - retail. Check out our videos about the event at our Oracle Retail YouTube channel, and see if you were snapped by our photographer on our Oracle Retail Facebook page. There were some extremely hot contests for many of this year's awards - and all very deserving winners. The entries have demonstrated beyond doubt that retailers have striven to push their standards up yet again in all areas over the past year. The judging panel includes some of the most prestigious names in the retail industry - to impress the panel enough to win an award is a substantial achievement. This year the panel included the likes of Andy Clarke - Chief Executive of ASDA Group; Mark Newton Jones - CEO of Shop Direct Group; Richard Pennycook - the finance director at Morrisons; Rob Templeman - Chief Executive of Debenhams; and Stephen Sunnucks - the president of Gap Europe. These are retail veterans who have each helped to shape the British High Street over the last decade. It was great to chat with many of them in the Oracle VIP area last night. For me, last night's highlight was honouring both Sir Stuart Rose and Sir Terry Leahy for their contributions to the retail industry. Both have set the standards in retailing over the last twenty years and taken their respective businesses from strength to strength, demonstrating that there is always a need for innovation even in larger businesses, and that a business has to adapt quickly to new technology in order to stay competitive. Sir Terry Leahy's retirement this year marks the end of an era of global expansion for the Tesco group and a milestone in the progression of British retail. Sir Terry has helped steer Tesco through nearly 20 years of change, with 14 years as Chief Executive. During this time he led the drive for international expansion and an aggressive campaign to increase market share. He has led the way for High Street retailers in adapting to the rise of internet retailing and nurtured a very successful home delivery service. More recently he has pioneered the notion of cross-channel retailing with the introduction of Tesco apps for the iPhone and Android mobile phones, allowing customers to scan barcodes of items to add to a shopping list which they can then either refer to in store or order for delivery. John Lewis Partnership was a very deserving winner of The Oracle Retailer of the Year award for their overall dedication to excellent retailing practices. The business also won the American Express Marketing/Advertising Campaign of the Year award for their memorable 'Never Knowingly Undersold' advert series, which included a very successful viral video and radio campaign with Fyfe Dangerfield's cover of Billy Joel's 'She's Always a Woman' used for the adverts. Store Design of the Year was another exciting category, with Topshop taking the accolade for its flagship Oxford Street store in London, which combines boutique concession-style stalls with high fashion displays and exclusive collections from leading designers. The store even has its own hairdressers and food hall, making it a truly all-inclusive fashion retail experience and a global landmark for any self-respecting international fashion shopper. 
Over the next few weeks we'll be exploring some of the winning entries in more detail here on the blog, so keep an eye out for some unique insights into how the winning retailers have made such remarkable achievements. 

    Read the article

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.) The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I can't easily tell that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that is obvious to the human eye. So, the algorithm I need to write would need to be a code with as few user-entered parameters as possible (for replicability and objectivity) that would go through all the particle positions, figure out what particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out. The only thing I was thinking of was doing some sort of N² distance calculation where I'd calculate the distance between every particle and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy and I was hoping that you CS folks and programmers might know of a more elegant solution? Edited with My Solution: What I did was to take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty N² implementation first. So, take every particle, calculate distance to all other particles, and the threshold for being in a cluster or not was whether there were N particles within distance d (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order N search and incrementing a counter for the particles within d, and that sped stuff up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when grid size ≈ 7d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, then each particle N only has to have distances calculated to the other particles in that cell. Theoretically, if this were a real tree, I should get order N*log(N) as opposed to N² speeds. I got somewhere between the two, where for a 50,000-particle sub-set I got a 17x increase in speed, and for a 150,000-particle cell, I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable speeds to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.
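To make the grid idea concrete, here is a minimal C# sketch of the neighbour-count test described above - my own illustration rather than the poster's actual code, assuming 2D positions and a cell size equal to the cutoff distance d:

using System;
using System.Collections.Generic;

// A particle is flagged as a clump member when at least minNeighbours
// other particles lie within distance d of it.
static class ClumpFinder
{
    public static bool[] FindClumpMembers(double[][] pos, double d, int minNeighbours)
    {
        int n = pos.Length;
        var grid = new Dictionary<(long, long), List<int>>();
        Func<double[], (long, long)> cellOf =
            p => ((long)Math.Floor(p[0] / d), (long)Math.Floor(p[1] / d));

        // Bucket every particle into a d-sized grid cell.
        for (int i = 0; i < n; i++)
        {
            var c = cellOf(pos[i]);
            if (!grid.TryGetValue(c, out var list)) grid[c] = list = new List<int>();
            list.Add(i);
        }

        var inClump = new bool[n];
        for (int i = 0; i < n; i++)
        {
            var (cx, cy) = cellOf(pos[i]);
            int count = 0;
            // Any particle within d must sit in one of the 3x3 surrounding
            // cells, so only those cells need explicit distance checks.
            for (long dx = -1; dx <= 1; dx++)
                for (long dy = -1; dy <= 1; dy++)
                {
                    if (!grid.TryGetValue((cx + dx, cy + dy), out var cell)) continue;
                    foreach (int j in cell)
                    {
                        if (j == i) continue;
                        double ex = pos[i][0] - pos[j][0];
                        double ey = pos[i][1] - pos[j][1];
                        if (ex * ex + ey * ey <= d * d) count++;
                    }
                }
            inClump[i] = count >= minNeighbours;
        }
        return inClump;
    }
}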

    Read the article

  • How to create a Global Rule that stores a document’s folder path in a custom metadata field

    - by Nicolas Montoya
    Efficiency purists would argue that redundancy is not necessary. In real life, we are willing to pay a price for performance - i.e. to have information at our fingertips. We have run into customers opting to store a document folder path as a document metadata field. They have their reasons; half of the ECM community will agree with them, and the other half would raise an eyebrow. In the end, they are getting creative to achieve their document management goals. The steps below outline how to create a Global Rule that stores a document's folder path in a custom metadata field:
1. Create a Global Rule via Configuration Manager > Rules Tab > Add.
2. Check "Is global rule with priority".
3. Check "Use rule activation condition".
4. Go to "Edit" and check the actions for this Script Properties, then click OK; the following rule activation condition will appear.
5. Go to the Fields Tab and add a Rule Field.
6. Select the target Custom Metadata Field and click OK, then check "Is derived field", then "Edit", then go to the Custom Tab in the Script Properties window and enter the custom script below:
<$if #active.dCollectionPath$> <$dprDerivedValue=#active.dCollectionPath$> <$else$> <$dprDerivedValue=#active.xCollectionIDPath$> <$endif$>
For more information on the dCollectionPath property, check Section 8.2 Folder Services from the Oracle® Fusion Middleware Services Reference Guide for Oracle Universal Content Management 11g Release 1 (11.1.1): http://docs.oracle.com/cd/E21043_01/doc.1111/e11011/c08_folders002.htm The above rule will keep the Custom Metadata Field updated with the Folder Path information when a document is checked in via the Content Server (CS) Web Interface or the Desktop Integration Suite (DIS).

    Read the article

  • Cutting Edge versus Just Average? Your SOA, Got BPM? by Mala Ramakrishnan

    - by JuergenKress
    Service Oriented Architecture (SOA) has completely transformed IT from the time it was introduced well over a decade ago. Organizations have been re-plumbing their infrastructure for reusability, efficiency and gain, and succeeding with it. Best practices have emerged, and people and technology have matured. We have got better at delivering mission-critical applications and services on a stable platform. Yet there is this one secret that sets some SOA customers apart from the others. These companies grow and revolutionize their business, and not just transform their IT infrastructure. The differences seem subtle to an untrained eye examining these organizations externally. And from within the company, it's a bit like an ant sitting on an elephant: hard to differentiate between the IT trunk and the business tail. What is it that some organizations do differently that makes them succeed beyond SOA? These organizations pull in business people more and more to weigh in on their IT decisions. They insist on understanding processes, not just services. They don't settle easily when bridging business metrics and IT performance. They anguish over business requirements not translating seamlessly and quickly into IT. IT is not just an enabler but a pillar that revolutionizes their business. Okay, I'll give it to you. These organizations layer Business Process Management (BPM) on top of their SOA. Think about the lifeblood business processes in your own organization. If you are FedEx, this would be shipping and handling. If you are Stanford Hospital, this would be patient case-management: from on-boarding through discharge and follow-up care. If you are Wells Fargo, this would be loan origination. Now think about how your SOA ties into your business process. Can you decouple your business processes from your SOA so that the two can transform and change independent of each other? Can you forecast success metrics for your business process, make the changes across the board and then look back over different periods of time to see if you are on track? Are your critical business processes entrenched in the minds of a few experts in your organization, or does everyone from the receptionist to your enterprise architect to your CEO understand what they can do to revolutionize them? Business Process Management is a superset of SOA. It is the process of getting your business to articulate business value and metrics and have them implemented in IT without any loss in translation. It is the act of extracting the business processes from the minds of experts and IT applications in your organization and valuing them as assets for performance and gain. BPM is stepping outside your SOA and moving your organization to the next level of innovation. Oracle is accelerating BPM across industries with the latest launch. Join us to understand how BPM can give your organization a cutting edge over your SOA. SOA & BPM Partner Community For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • C# XNA Handle mouse events?

    - by user406470
    I'm making a 2D game engine called Clixel over on GitHub. The problem I have relates to two classes, ClxMouse and ClxButton. In it I have a mouse class - the code for that can be viewed here. ClxMouse using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; namespace org.clixel { public class ClxMouse : ClxSprite { private MouseState _curmouse, _lastmouse; public int Sensitivity = 3; public bool Lock = true; public Vector2 Change { get { return new Vector2(_curmouse.X - _lastmouse.X, _curmouse.Y - _lastmouse.Y); } } private int _scrollwheel; public int ScrollWheel { get { return _scrollwheel; } } public bool LeftDown { get { if (_curmouse.LeftButton == ButtonState.Pressed) return true; else return false; } } public bool RightDown { get { if (_curmouse.RightButton == ButtonState.Pressed) return true; else return false; } } public bool MiddleDown { get { if (_curmouse.MiddleButton == ButtonState.Pressed) return true; else return false; } } public bool LeftPressed { get { if (_curmouse.LeftButton == ButtonState.Pressed && _lastmouse.LeftButton == ButtonState.Released) return true; else return false; } } public bool RightPressed { get { if (_curmouse.RightButton == ButtonState.Pressed && _lastmouse.RightButton == ButtonState.Released) return true; else return false; } } public bool MiddlePressed { get { if (_curmouse.MiddleButton == ButtonState.Pressed && _lastmouse.MiddleButton == ButtonState.Released) return true; else return false; } } public bool LeftReleased { get { if (_curmouse.LeftButton == ButtonState.Released && _lastmouse.LeftButton == ButtonState.Pressed) return true; else return false; } } public bool RightReleased { get { if (_curmouse.RightButton == ButtonState.Released && _lastmouse.RightButton == ButtonState.Pressed) return true; else return false; } } public bool MiddleReleased { get { if (_curmouse.MiddleButton == ButtonState.Released && _lastmouse.MiddleButton == ButtonState.Pressed) return true; else return false; } } public MouseState CurMouse { get { return _curmouse; } } public MouseState LastMouse { get { return _lastmouse; } } public ClxMouse() : base(ClxG.Textures.Default.Cursor) { _curmouse = Mouse.GetState(); _lastmouse = _curmouse; CollisionBox = new Rectangle(ClxG.Screen.Center.X, ClxG.Screen.Center.Y, Texture.Width, Texture.Height); this.Solid = false; DefaultPosition = new Vector2(CollisionBox.X, CollisionBox.Y); Mouse.SetPosition(CollisionBox.X, CollisionBox.Y); } public ClxMouse(Texture2D _texture) : base(_texture) { _curmouse = Mouse.GetState(); _lastmouse = _curmouse; CollisionBox = new Rectangle(ClxG.Screen.Center.X, ClxG.Screen.Center.Y, Texture.Width, Texture.Height); DefaultPosition = new Vector2(CollisionBox.X, CollisionBox.Y); } public override void Update() { _lastmouse = _curmouse; _curmouse = Mouse.GetState(); if (_curmouse != _lastmouse) { if (ClxG.Game.IsActive) { _scrollwheel = _curmouse.ScrollWheelValue; Velocity = new Vector2(Change.X / Sensitivity, Change.Y / Sensitivity); if (Lock) Mouse.SetPosition(ClxG.Screen.Center.X, ClxG.Screen.Center.Y); _curmouse = Mouse.GetState(); } base.Update(); } } public override void Draw(SpriteBatch _sb) { base.Draw(_sb); } } } ClxButton using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; namespace org.clixel { public class ClxButton : ClxSprite { /// <summary> /// The color when the mouse is over the button /// </summary> public Color HoverColor; /// <summary> /// The color when the color is being clicked /// </summary> public Color 
ClickColor; /// <summary> /// The color when the button is inactive /// </summary> public Color InactiveColor; /// <summary> /// The color when the button is active /// </summary> public Color ActiveColor; /// <summary> /// The color after the button has been clicked. /// </summary> public Color ClickedColor; /// <summary> /// The text to be displayed on the button, set to "" if no text is needed. /// </summary> public string Text; /// <summary> /// The ClxText object to be displayed. /// </summary> public ClxText TextRender; /// <summary> /// The ClxState that should be ResetAndShow() when the button is clicked. /// </summary> public ClxState ClickState; /// <summary> /// Collision check to make sure onCollide() only runs once per frame, /// since only the mouse needs to be collision checked. /// </summary> private bool _runonce = false; /// <summary> /// Gets a value indicating whether this instance is colliding. /// </summary> /// <value> /// <c>true</c> if this instance is colliding; otherwise, <c>false</c>. /// </value> public bool IsColliding { get { return _runonce; } } /// <summary> /// Initializes a new instance of the <see cref="ClxButton"/> class. /// </summary> public ClxButton() : base(ClxG.Textures.Default.Button) { HoverColor = Color.Red; ClickColor = Color.Blue; InactiveColor = Color.Gray; ActiveColor = Color.White; ClickedColor = Color.Yellow; Text = Name + ID + " Unset!"; TextRender = new ClxText(); TextRender.Text = Text; TextRender.TextPadding = new Vector2(5, 5); ClickState = null; CollideObjects(ClxG.Mouse); } /// <summary> /// Initializes a new instance of the <see cref="ClxButton"/> class. /// </summary> /// <param name="_texture">The button texture.</param> public ClxButton(Texture2D _texture) : base(_texture) { HoverColor = Color.Red; ClickColor = Color.Blue; InactiveColor = Color.Gray; ActiveColor = Color.White; ClickedColor = Color.Yellow; Texture = _texture; Text = Name + ID; TextRender = new ClxText(); TextRender.Name = this.Name + ".TextRender"; TextRender.Text = Text; TextRender.TextPadding = new Vector2(5, 5); TextRender.Reset(); ClickState = null; CollideObjects(ClxG.Mouse); } /// <summary> /// Draws the debug information, run from ClxG.DrawDebug unless manual control is assumed. /// </summary> /// <param name="_sb">SpriteBatch used for drawing.</param> public override void DrawDebug(SpriteBatch _sb) { _runonce = false; TextRender.DrawDebug(_sb); _sb.Draw(Texture, ActualRectangle, new Rectangle(0, 0, Texture.Width, Texture.Height), DebugColor, Rotation, Origin, Flip, Layer); _sb.Draw(ClxG.Textures.Default.DebugBG, new Rectangle(ActualRectangle.X - DebugLineWidth, ActualRectangle.Y - DebugLineWidth, ActualRectangle.Width + DebugLineWidth * 2, ActualRectangle.Height + DebugLineWidth * 2), new Rectangle(0, 0, ClxG.Textures.Default.DebugBG.Width, ClxG.Textures.Default.DebugBG.Height), DebugOutline, Rotation, Origin, Flip, Layer - 0.1f); _sb.Draw(ClxG.Textures.Default.DebugBG, ActualRectangle, new Rectangle(0, 0, ClxG.Textures.Default.DebugBG.Width, ClxG.Textures.Default.DebugBG.Height), DebugBGColor, Rotation, Origin, Flip, Layer - 0.01f); } /// <summary> /// Draws using the SpriteBatch, run from ClxG.Draw unless manual control is assumed. 
/// </summary> /// <param name="_sb">SpriteBatch used for drawing.</param> public override void Draw(SpriteBatch _sb) { _runonce = false; TextRender.Draw(_sb); if (Visible) if (Debug) { DrawDebug(_sb); } else _sb.Draw(Texture, ActualRectangle, new Rectangle(0, 0, Texture.Width, Texture.Height), Color, Rotation, Origin, Flip, Layer); } /// <summary> /// Updates this instance. /// </summary> public override void Update() { if (this.Color != ActiveColor) this.Color = ActiveColor; TextRender.Layer = this.Layer + 0.03f; TextRender.Text = Text; TextRender.Scale = .5f; TextRender.Name = this.Name + ".TextRender"; TextRender.Origin = new Vector2(TextRender.CollisionBox.Center.X, TextRender.CollisionBox.Center.Y); TextRender.Center(this); TextRender.Update(); this.CollisionBox.Width = (int)(TextRender.CollisionBox.Width * TextRender.Scale) + (int)(TextRender.TextPadding.X * 2); this.CollisionBox.Height = (int)(TextRender.CollisionBox.Height * TextRender.Scale) + (int)(TextRender.TextPadding.Y * 2); base.Update(); } /// <summary> /// Collide event, takes the colliding object to call it's proper collision code. /// You'd want to use something like if(typeof(collider) == typeof(ClxObject) /// </summary> /// <param name="collider">The colliding object.</param> public override void onCollide(ClxObject collider) { if (!_runonce) { _runonce = true; UpdateEvents(); base.onCollide(collider); } } /// <summary> /// Updates the mouse based events. /// </summary> public void UpdateEvents() { onHover(); if (ClxG.Mouse.LeftReleased) { onLeftReleased(); return; } if (ClxG.Mouse.RightReleased) { onRightReleased(); return; } if (ClxG.Mouse.MiddleReleased) { onMiddleReleased(); return; } if (ClxG.Mouse.LeftPressed) { onLeftClicked(); return; } if (ClxG.Mouse.RightPressed) { onRightClicked(); return; } if (ClxG.Mouse.MiddlePressed) { onMiddleClicked(); return; } if (ClxG.Mouse.LeftDown) { onLeftClick(); return; } if (ClxG.Mouse.RightDown) { onRightClick(); return; } if (ClxG.Mouse.MiddleDown) { onMiddleClick(); return; } } /// <summary> /// Shows the state of the click. /// </summary> public void ShowClickState() { if (ClickState != null) { ClickState.ResetAndShow(); } } /// <summary> /// Hover event /// </summary> virtual public void onHover() { this.Color = HoverColor; } /// <summary> /// Left click event /// </summary> virtual public void onLeftClick() { this.Color = ClickColor; } /// <summary> /// Right click event /// </summary> virtual public void onRightClick() { } /// <summary> /// Middle click event /// </summary> virtual public void onMiddleClick() { } /// <summary> /// Left click event, called once per click /// </summary> virtual public void onLeftClicked() { ShowClickState(); } /// <summary> /// Right click event, called once per click /// </summary> virtual public void onRightClicked() { this.Reset(); } /// <summary> /// Middle click event, called once per click /// </summary> virtual public void onMiddleClicked() { } /// <summary> /// Ons the left released. /// </summary> virtual public void onLeftReleased() { this.Color = ClickedColor; } virtual public void onRightReleased() { } virtual public void onMiddleReleased() { } } } The issue I have is that I have all these have event styled methods, especially in ClxButton with all the onLeftClick, onRightClick, etc, etc. Is there a better way for me to handle these events to be a lot more easier for a programmer to use? I was looking at normal events on some other sites, (I'd post them but I need more rep.) 
and didn't really see a good way to implement delegate events in my framework. I'm not really sure how these events work; could someone possibly lay out how they are processed for me? TL;DR:
* Is there a better way to handle events like this?
* Are events a viable solution to this problem?
Thanks in advance for any help.
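For reference, the standard .NET delegate-based event pattern the question is asking about looks roughly like this - a minimal, self-contained C# sketch with hypothetical names (MouseButtonEventArgs, EventButton), not part of the Clixel framework:

using System;

// Hypothetical event-args type carrying the click position.
public class MouseButtonEventArgs : EventArgs
{
    public int X { get; set; }
    public int Y { get; set; }
}

public class EventButton
{
    // Subscribers attach handlers; the button only raises the event.
    public event EventHandler<MouseButtonEventArgs> LeftClicked;

    protected virtual void OnLeftClicked(MouseButtonEventArgs e)
    {
        var handler = LeftClicked;   // copy to guard against unsubscription races
        if (handler != null) handler(this, e);
    }

    // Called once per frame by the game loop with the current mouse state.
    public void Update(bool leftPressed, int x, int y)
    {
        if (leftPressed)
            OnLeftClicked(new MouseButtonEventArgs { X = x, Y = y });
    }
}

class Demo
{
    static void Main()
    {
        var button = new EventButton();
        // Subscribing replaces overriding virtual onLeftClicked-style methods.
        button.LeftClicked += (sender, e) =>
            Console.WriteLine("Left click at {0},{1}", e.X, e.Y);
        button.Update(true, 42, 17);
    }
}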

    Read the article

  • A quick look at: sys.dm_os_buffer_descriptors

    - by Jonathan Allen
    SQL Server places data into cache as it reads it from disk so as to speed up future queries. This dmv lets you see how much data is cached at any given time, and knowing how this changes over time can help you ensure your servers run smoothly and are adequately resourced to run your systems. This dmv gives the number of cached pages in the buffer pool along with the database id that they relate to: USE [tempdb] GO SELECT COUNT(*) AS cached_pages_count , CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS Database_name FROM sys.dm_os_buffer_descriptors GROUP BY DB_NAME(database_id) , database_id ORDER BY cached_pages_count DESC; This gives you results which are quite useful, but if you add a new column with the code ( COUNT(*) * 8.0 ) / 1024 AS MB - each page is 8KB, so this converts the page count into megabytes - then they become more relevant and meaningful. To see how your server reacts to queries, start up SSMS and connect to a test server and database - mine is called AdventureWorks2008. Make sure you start from a known position by running: -- Only run this on a test server otherwise your production server's -- performance may drop off a cliff and your phone will start ringing. DBCC DROPCLEANBUFFERS GO DBCC FREEPROCCACHE GO Now we can run a query that would normally turn a DBA's hair white: USE [AdventureWorks2008] go SELECT * FROM [Sales].[SalesOrderDetail] AS sod INNER JOIN [Sales].[SalesOrderHeader] AS soh ON [sod].[SalesOrderID] = [soh].[SalesOrderID] ...and then check our cache situation: A nice low figure - not! Almost 2000 pages of data in cache, equating to approximately 15MB. Luckily these tables are quite narrow; if this had been on a table with more columns then this could be even more dramatic. So, let's make our query more efficient. After resetting the cache with the DROPCLEANBUFFERS and FREEPROCCACHE code above, we'll only select the columns we want and implement a WHERE predicate to limit the rows to a specific customer. SELECT [sod].[OrderQty] , [sod].[ProductID] , [soh].[OrderDate] , [soh].[CustomerID] FROM [Sales].[SalesOrderDetail] AS sod INNER JOIN [Sales].[SalesOrderHeader] AS soh ON [sod].[SalesOrderID] = [soh].[SalesOrderID] WHERE [soh].[CustomerID] = 29722 ...and check the effect on our cache: Now that is more sympathetic to our server and the other systems sharing its resources. I can hear you asking: "What has this got to do with logging, Jonathan?" Well, a smart DBA will keep an eye on this metric on their servers so they know how their hardware is coping and be ready to investigate anomalies so that no 'disruptive' code starts to unsettle things. Capturing this information over a period of time can lead you to build a picture of how a database relies on the cache and how it interacts with other databases. This might allow you to decide on appropriate schedules for overnight jobs or otherwise balance the work of your server. 
You could schedule this to run with a SQL Agent job and store the data in your DBA's database by creating a table with: IF OBJECT_ID('CachedPages') IS NOT NULL DROP TABLE CachedPages CREATE TABLE CachedPages ( cached_pages_count INT , MB INT , Database_Name VARCHAR(256) , CollectedOn DATETIME DEFAULT GETDATE() ) ...and then filling it with: INSERT INTO [dbo].[CachedPages] ( [cached_pages_count] , [MB] , [Database_Name] ) SELECT COUNT(*) AS cached_pages_count , ( COUNT(*) * 8.0 ) / 1024 AS MB , CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS Database_name FROM sys.dm_os_buffer_descriptors GROUP BY database_id After this has been left logging your system metrics for a while, you can easily see how your databases use the cache over time and may see some spikes that warrant your attention. This sort of logging can be applied to all sorts of server statistics so that you can gather information that will give you baseline data on how your servers are performing. This means that when you get a problem you can see which statistics are out of their normal range and target your efforts to resolve the issue more rapidly.

    Read the article

  • St. Louis ALT.NET

    - by Brian Schroer
    I'm a huge fan of the St. Louis .NET User Group and a regular attendee of their meetings, but always wished there was a local group that discussed more advanced .NET topics. (That's not a criticism of the group - I appreciate that they want to serve developers with a broad range of skill levels.) That's why I was thrilled when Nicholas Cloud started a St. Louis ALT.NET group in 2010. Here's the "about us" statement from the group's web site: The ALT.NET community is a loosely coupled, highly cohesive group of like-minded individuals who believe that the best developers do not align themselves with platforms and languages, but with principles and ideas. In 2007, David Laribee created the term "ALT.NET" to explain this "alternative" view of the Microsoft development universe--a view that challenged the "Microsoft-only" approach to software development. He distilled his thoughts into four key developer characteristics which form the basis of the ALT.NET philosophy:
You're the type of developer who uses what works while keeping an eye out for a better way.
You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc.
You're not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc.
You know tools are great, but they only take you so far. It's the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles (e.g. Resharper.)
The St. Louis ALT.NET meetup group is a place where .NET developers can learn, share, and critique approaches to software development on the .NET stack. We cater to the highest common denominator, not the lowest, and want to help all St. Louis .NET developers achieve a superior level of software craftsmanship. I don't see a lot of ALT.NET talk in blogs these days. The movement was harmed early on by the negative attitudes of some of its early leaders, including jerk moves like the Entity Framework "vote of no confidence", but I do see occasional mentions of local groups like the St. Louis one. I think ALT.NET has been successful at bringing some of its ideas into the .NET world, including heavily influencing ASP.NET MVC and raising the general level of software craftsmanship for developers working on the Microsoft stack. The ideas and ideals live on, they're just not branded as "this is ALT.NET!" In the past 18 months, St. Louis ALT.NET meetups have discussed topics like:
NHibernate
F# and other functional languages
AOP
CoffeeScript
"How Ruby Is Making Me a Stronger C# Developer"
Using rake for builds
CQRS
.NET dynamic programming
micro web frameworks - Nancy & Jessica
Git
ALT.NET doesn't mean (to me, anyway) "alternatives to .NET", but "alternatives for .NET". We look at how things are done in Ruby and other languages/platforms, but always with the idea "What can I learn from this to take back to my "day job" with .NET?". Meetings are held at 7PM on the fourth Wednesday of each month at the offices of Professional Employment Group. PEG is located at 999 Executive Parkway (Suite 100 - lower level) in Creve Coeur (South of Olive off of Mason Road - here's a map). Food is not supplied (sorry if you're a big fan of the Papa John's Crust-Lovers' Pizza that's a staple of user group meetings), but attendees are encouraged to come early and bring/share beer, so that's cool. Thanks to Nick for organizing, and to Professional Employment Group for lending their offices. 
Please visit the meetup site for more information.

    Read the article

  • Quick Quips on QR Codes

    - by Tim Dexter
    Yes, I'm an alliterating all-star; I missed my calling as a newspaper headline writer. I have recently received questions from several folks on support for QR codes. You know them; they are everywhere you look, even here! How does Publisher handle QR codes then? In theory, exactly the same way we handle any other 2D barcode font. We need the font file, a mapping entry and an encoding class. With those three pieces we can embed QR codes into any output. To test the theory, I went off to IDAutomation; I have worked with them and many customers over the years and their fonts and encoders have worked great and have been very reliable. They kindly provide demo fonts, which has made my life so much easier when writing posts like this. Their QR font and encoder are a little tough to find. I started here and then hit the Demo Now button. On the next page I hit the right hand Demo Now button. In the resulting zip file you'll need two files: AdditionalFonts.zip >> Automation2DFonts >> TrueType >> IDAutomation2D.ttf Java Class Encoder >> IDAutomation_JavaFontEncoder_QRCode.jar - the QRBarcodeExample.java is useful to see how to call the encoder. The font file needs to be installed into the windows/fonts directory; just copy and paste it in using file explorer and Windows will install it for you. Remember, we are using the demo font here; if you point your phone's decoder at the barcode above, you'll see there is a fixed string 'DEMO' at the beginning. You want that removed? Go buy the font from the IDAutomation folks. The Encoder Next you need to create your encoding wrapper class. Publisher does ship a class, but it's compiled and I do not recommend trying to modify it; you can just build your own. I have loaded up my class here. You do not need to be a java guru, it's pretty straightforward. I'd recommend a java IDE like JDeveloper from a convenience point of view. I have annotated my class and added a main method to it so you can test your encoders from JDeveloper without having to deploy them first. You can load up the project from the zip file straight into JDeveloper. Next, take a look at IDAutomation's example java class and you'll see: QRCodeEncoder qre=new QRCodeEncoder(); String DataToEncode = "IDAutmation Inc."; boolean ApplyTilde = false; int EncodingMode = 0; int Version = 0; int ErrorCorrectionLevel = 0; System.out.println( qre.FontEncode(DataToEncode, ApplyTilde, EncodingMode, Version, ErrorCorrectionLevel) ); You'll need to check what settings you need to set for the ApplyTilde, EncodingMode, Version and ErrorCorrectionLevel. They are covered in the user guide from IDAutomation here. If you do not want to hard code the values in the encoder then you can quite easily externalize them and read the values from a text file. I have not covered that scenario here; I'm going with IDAutomation's defaults and my phone app is reading the fonts no problem. Now that you know how to call the encoder, you need to incorporate it into your encoder wrapper class. From my sample class: Class[] clazz = new Class[] { "".getClass() }; ENCODERS.put("code128a",mUtility.getClass().getMethod("code128a", clazz)); ENCODERS.put("code128b",mUtility.getClass().getMethod("code128b", clazz)); ENCODERS.put("code128c",mUtility.getClass().getMethod("code128c", clazz)); ENCODERS.put("qrcode",mUtility.getClass().getMethod("qrcode", clazz)); I just added a new entry, the last line above, to register the encoder method 'qrcode'. 
Then I created a new method inside the class to call the IDAutomation encoder. /** Call to IDAutomation's QR Code encoder. Passes in the data to encode and returns the encoded string to the template for formatting. **/ public static final String qrcode (String DataToEncode) { QRCodeEncoder qre=new QRCodeEncoder(); boolean ApplyTilde = false; int EncodingMode = 0; int Version = 0; int ErrorCorrectionLevel = 0; return qre.FontEncode(DataToEncode, ApplyTilde, EncodingMode, Version, ErrorCorrectionLevel); } Almost the exact same code as in their sample class. The DataToEncode string is passed in rather than hardcoded, of course. With the class done you can now compile it, but you need to ensure that the IDAutomation_JavaFontEncoder_QRCode.jar is in the classpath. In JDeveloper, open the project properties >> Libraries and Classpaths and then add the jar to the list. You'll need the Publisher jars too. You can find those in the jlib directory in your Template Builder for Word directory. Note! In my class I have used package oracle.psbi.barcode; as my package spec. Yours will be different, but you need to note it for later. Once you have it compiling without errors, you will need to generate a jar file to keep it in. In JDeveloper highlight your project node >> New >> Deployment Profile >> JAR file. Once you have created the descriptor, just take the defaults. It will tell you where the jar is located. Go get it, and then it's time to copy it and the IDAutomation jar into the Template Builder for Word directory structure. Deploying the jars On your windows machine locate the jlib directory under the Template Builder for Word install directory. On my machine it's here: F:\Program Files\Oracle\BI Publisher\BI Publisher Desktop\Template Builder for Word\jlib. Copy both of the jar files into the directory. The next step is to get the jars into the classpath for the Word plugin so that Publisher can find your wrapper class, and it can then find the IDAutomation encoder. The most consistent way I have found so far is to open up the RTF2PDF.jar in the same directory and make some mods. First make a backup of the jar file, then open it using winzip or 7zip or similar and get into the META-INF directory. In there is a file, MANIFEST.MF. This contains the classpath for the plugin; open it in an editor and add the jars to the end of the classpath list. In mine I have: Manifest-Version: 1.0 Class-Path: ./activation.jar ./mail.jar ./xdochartstyles.jar ./bicmn.jar ./jewt4.jar ./share.jar ./bipres.jar ./xdoparser.jar ./xdocore.jar ./xmlparserv2.jar ./xmlparserv2-904.jar ./i18nAPI_v3.jar ./versioninfo.jar ./barcodejar.jar ./IDAutomation_JavaFontEncoder_QRCode.jar Main-Class: RTF2PDF I have put in carriage returns above to make the Class-Path: entry more readable; make sure yours is all on one line. Be sure to use the ./ prefix on each jar name. Ensure the file is saved back inside the jar file; 7zip and winzip both have popups asking if you want to update the file in the jar. Now you have the jars on the classpath, and the Publisher plugin will be able to find our classes at run time. Referencing the Font The next step is to reference the font location so that the rendering engine can find it and embed a subset into the PDF output. Remember, the other output formats rely on the font being present on the machine that is opening the document. The PDF is the only truly portable format. 
Inside the config directory under the Template Builder for Word install directory, mine is here: F:\Program Files\Oracle\BI Publisher\BI Publisher Desktop\Template Builder for Word\config. You'll find the file 'xdo example.cfg'. Rename it to xdo.cfg and open it in a text editor. In the fonts section, create a new entry: <font family="IDAutomation2D" style="normal" weight="normal"> <truetype path="C:\windows\fonts\IDAutomation2D.ttf" /> </font> Note, 'IDAutomation2D' is the same name as you can see when you open MSWord and look for the QRCode font. This must match exactly. When Publisher looks at the fonts in the RTF template at runtime, it will see 'IDAutomation2D' and will then look at its font mapping entries to find where that font file resides on the disk. If the names do not match or the font is not present then the font will not get used and it will fall back on Helvetica. Building the Template Now you have the data encoder and the font in place and mapped; you can use it in the template. The two commands you will need to have present are: <?register-barcode-vendor:'ENCODER WRAPPER CLASS'; 'ENCODER NAME'?> for my encoder I have: <?register-barcode-vendor:'oracle.psbi.barcode.BarcodeUtil'; 'MyBarcodeEncoder'?> Notice the two parameters for the command. The first provides the package 'path' and class name (remember I said you would need to note that above). The second is the name of the encoder, in my case 'MyBarcodeEncoder'. Check my full encoder class in the zip linked below to see where I named it. You can change it to something else, no problem. This command needs to be near the top of the template. The second command is the encoding command: <?format-barcode:DATA_TO_ENCODE;'ENCODER_METHOD_NAME';'ENCODER_NAME'?> for my command I have <?format-barcode:DATATEXT;'qrcode';'MyBarcodeEncoder'?> DATATEXT is the XML element that contains the text to be encoded. If you want to hard code a piece of text, just surround it with single quotes. qrcode is the name of my encoder method that calls the IDAutomation encoder. Remember this. MyBarcodeEncoder is the name of my encoder. Repetition? Yes, but it's needed again. Both of these commands are put inside their own form fields. Do not apply the QRCode font to the second field just yet. Let's make sure the encoder is working. Run your template with some data and you should get something like this for your encoded data: AHEEEHAPPJOPMOFADIPFJKDCLPAHEEEHA BNFFFNBPJGMDIDJPFOJGIGBLMPBNFFFNB APIBOHFJCFBNKHGGBMPFJFJLJBKGOMNII OANKPJFFLEPLDNPCLMNGNIJIHFDNLJFEH FPLFLHFHFILKFBLOIGMDFCFLGJGOPJJME CPIACDFJPBGDODOJCHALJOBPECKMOEDDF MFFNFNEPKKKCHAIHCHPCFFLDAHFHAGLMK APBBBPAPLDKNKJKKGIPDLKGMGHDDEPHLN HHHHHHHPHPHHPHPPHPPPPHHPHHPHPHPHP Grooovy huh? If you do not get the encoded text then go back and check that your jars are in the right spot and that you have the MANIFEST.MF file updated correctly. Once you do get the encoded text, highlight the field and apply the IDAutomation2D font to it. Then re-run the report and you will hopefully see the QR code in your output. If not, go back and check the xdo.cfg entry and make sure it's in the right place and the font location is correct. That's it, you now have QR codes in Publisher outputs. Everything I have written above has been tested with the 5.6.3 and 10.1.3.4.2 codelines. I'll be testing the 11g code in the next day or two and will update you with any changes. One thing I have not covered yet, and will do in the next few days, is how to deploy all of this to your server. 
Look out for a follow-up post. One note on the apparent white lines in the font (see the image above): once printed they disappear, and even when viewing the code on a screen with the white lines, my phone app is still able to read and interpret the contents without a problem. I have zipped up my encoder wrapper class as a JDeveloper 11.1.1.6 project here. Just dig into the src directories to find the BarcodeUtil.java file if you just want the code. I have put comments into the file to hopefully help the novice java programmer out. Happy QR'ing!

    Read the article

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, however distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.) I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU based for now until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem. Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. un-rotated. Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Streetview: even though I have a volumetric world representation, it seems -- no matter what I try -- like I would first create a rendering from the "front view", then rotate that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks: Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"! Now of course I'm aware of this: in a simple axis-aligned-no-rotation setup like I had in the beginning, the ray simply traverses in small steps the positive z-direction, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should be simply transformed by the proper rotation matrix, right? So for forward-traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical-divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of the many matrices that I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right. 
Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:
fx and fy: pixel positions x and y
rayPos: vec3 for the ray starting position in world-space (calculated as below)
rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
rayStep: a temporary vec3
camPos: vec3 for the camera position in world space
camRad: vec3 for camera rotation in radians
pmat: typical perspective projection matrix
The algorithm / pseudocode:
// 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0
// 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))
// 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist
// 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
rayStep.MultMat(num.NewDmat4RotationY(CamRad.Y))
// set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector
rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - me.rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z
// perspective projection
rayDir.Normalize()
rayDir.MultMat(pmat)
// before traversal, the ray starting position has to be transformed from origin-relative to campos-relative
rayPos.Add(camPos)
I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
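For comparison, here is a minimal C# sketch of the conventional pinhole-camera formulation, where only the ray direction is rotated and the ray origin is simply the camera position - an illustration of the usual approach under assumed conventions, not a verified fix for the code above:

using System;

static class RayGen
{
    public static void MakeRay(
        double fx, double fy, double width, double height,
        double planeDist, double camYaw,            // rotation about Y, in radians
        double[] camPos, out double[] rayPos, out double[] rayDir)
    {
        // Pixel as an offset on the view plane, in camera space.
        double x = (fx / width) - 0.5;
        double y = (fy / height) - 0.5;

        // Unrotated direction: through the pixel, planeDist "into the screen".
        double dx = x, dy = y, dz = planeDist;

        // Rotate the direction (not the origin) about the Y axis.
        double c = Math.Cos(camYaw), s = Math.Sin(camYaw);
        double rx = c * dx + s * dz;
        double rz = -s * dx + c * dz;

        // Normalize the rotated direction.
        double len = Math.Sqrt(rx * rx + dy * dy + rz * rz);
        rayDir = new[] { rx / len, dy / len, rz / len };

        // The ray origin is just the camera position; no rotation needed.
        rayPos = (double[])camPos.Clone();
    }
}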

    Read the article

  • ORACLE is WEB 2.0

    - by anca.rosu
    You never know what to expect in life, where it can take you and what kind of fulfillment it can offer you. It's just like an amazing lottery with millions of winning tickets. My name is Paula, I am an Online Marketing Specialist at Oracle University and this is my story. Having graduated from a technical college, it seemed almost normal to follow the same career path. But I said no. I wanted to try something else, so I took an Advertising Masters Program and I really fell in love with this entire industry. Advertising and the new impact of the Internet through social networking are my current fascination. I knew I had to work to incorporate both my skills into one dream job. I want to believe that I have come to work at Oracle as part of a great plan that life has for me. It's not the most glamorous job in advertising or in the fashion industry, but it's everything you need to start investing in your development and to build relationships. A normal day at work begins at 9.30 at our Oracle office in Bucharest. After a short chit-chat, coffee and some conference calls, marketing gets to work! Some of the members of my team work beside me, but others are based all over Europe. This is extremely useful when coordinating the EMEA Marketing for Oracle University, because this way it's easier to keep an eye on these various locations. Even though it's team play, you need to speak up and make your mark. I am the kind of person that never stands by and waits to be given directions; I am curious and intuitive. This makes things easier. In Oracle you really need to find your own way and to discover how to organize your time and how to get involved with people. People to people, this is the focus. But everything is up to you and it strongly depends on the type of personality that you have. I try to get involved in various activities, participate in Oracle Days Events, interact and meet all kinds of people. For new graduates or interns, Oracle has lots of trainings and webcasts you can attend to help you shape your career and to better understand the way the business works. You can also be rewarded for ideas and for setting the trends, so that makes it worth it. What I like most about my job is the fact that I can come up with ideas and bring them to life. For example Oracle University has a special seminar program called "Celebrity Seminars" where top industry speakers teach 1-day or 2-day condensed seminars. We thought of creating something exclusive and a video was the best idea. So my colleague and I became reporters for a day and interviewed this well-known speaker regarding his seminar. I think this is a good way to market this business. Live footage is a very good marketing tool, so we are planning to use the video to target our online audiences via Facebook, Twitter or LinkedIn. This can even go in the newsletters that marketing sends regarding the Celebrity Seminars. This is what I meant when I said Oracle is a free-spirited organization and you can surely find your place here among us. The best way to describe my job is WEB 2.0. The modern online approach comes to life while we are trying to sell our business. We need to be out there and we are responsible for spreading the buzz regarding our training offerings and our official courseware materials. There are so many new ways to interact with the target audience nowadays and I am so eager to discover the best online techniques! 
If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com

    Read the article

  • Multidimensional Thinking–24 Hours of Pass: Celebrating Women in Technology

    - by smisner
    It’s Day 1 of #24HOP and it’s been great to participate in this event with so many women from all over the world in one long training-fest. The SQL community has been abuzz on Twitter with running commentary which is fun to watch while listening to the current speaker. If you missed the fun today because you’re busy with all that work you’ve got to do – don’t despair. All sessions are recorded and will be available soon. Keep an eye on the 24 Hours of Pass page for details. And the fun’s not over today. Rather than run 24 hours consecutively, #24HOP is now broken down into 12-hours over two days, so check out the schedule to see if there’s a session that interests you and fits your schedule. I’m pleased to announce that my business colleague Erika Bakse ( Blog | Twitter) will be presenting on Day 2 – her debut presentation for a PASS event. (And I’m also pleased to say she’s my daughter!) Multidimensional Thinking: The Presentation My contribution to this lineup of terrific speakers was Multidimensional Thinking. Here’s the abstract: “Whether you’re developing Analysis Services cubes or creating PowerPivot workbooks, you need to get into a multidimensional frame of mind to produce a model that best enables users to answer their business questions on their own. Many database professionals struggle initially with multidimensional models because the data modeling process is much different than the one they use to produce traditional, third normal form databases. In this session, I’ll introduce you to the terminology of multidimensional modeling and step through the process of translating business requirements into a viable model.” If you watched the presentation and want a copy of the slides, you can download a copy here. And you’re welcome to download the slides even if you didn’t watch the presentation, but they’ll make more sense if you did! Kimball All the Way There’s only so much I can cover in the time allotted, but I hope that I succeeded in my attempt to build a foundation that prepares you for starting out in business intelligence. One of my favorite resources that will get into much more detail about all kinds of scenarios (well beyond the basics!) is The Data Warehouse Toolkit (Second Edition) by Ralph Kimball. Anything from Kimball or the Kimball Group is worth reading. Kimball material might take reading and re-reading a few times before it makes sense. From my own experience, I found that I actually had to just build my first data warehouse using dimensional modeling on faith that I was going the right direction because it just didn’t click with me initially. I’ve had years of practice since then and I can say it does get easier with practice. The most important thing, in my opinion, is that you simply must prototype a lot and solicit user feedback, because ultimately the model needs to make sense to them. They will definitely make sure you get it right! Schema Generation One question came up after the presentation about whether we use SQL Server Management Studio or Business Intelligence Development Studio (BIDS) to build the tables for the dimensional model. My answer? It really doesn’t matter how you create the tables. Use whatever method that you’re comfortable with. But just so happens that it IS possible to set up your design in BIDS as part of an Analysis Services project and to have BIDS generate the relational schema for you. I did a Webcast last year called Building a Data Mart with Integration Services that demonstrated how to do this. 
Yes, the subject was Integration Services, but as part of that presentation, I showed how to leverage Analysis Services to build the tables, and then I showed how to use Integration Services to load those tables. I blogged about this presentation in September 2010 and included downloads of the project that I used. In the blog post, I explained that I missed a step in the demonstration. Oops. Just as an FYI, there were two more Webcasts to finish the story begun with the data – Accelerating Answers with Analysis Services and Delivering Information with Reporting Services. If you want to just cut to the chase and learn how to use Analysis Services to build the tables, you can see the Using the Schema Generation Wizard topic in Books Online.

    Read the article

< Previous Page | 120 121 122 123 124 125 126 127 128 129 130 131  | Next Page >