Search Results

Search found 5642 results on 226 pages for 'coding efficiency'.


  • How do you coordinate with co-workers to give a balanced interview?

    - by goldierox
    My company has been conducting a lot of interviews lately for candidates with various experience levels, ranging from interns to senior candidates. We put our candidates through five 45-minute interview sessions in which we try to ask a range of questions. One person always asks the same questions that test logic and communication. The rest typically split their time between a whiteboard coding question and a discussion of previous projects, the technologies the interviewee has worked with, and what he or she is looking for in a job. Generally, we know the range of questions that other people on the loop will ask. Sometimes we switch things up and end up with redundancies. Today, three interviewers asked tree-related questions. Other times, we've all homed in on the same project on a resume and had the interviewee talk about it with everyone. A smooth interview process would help us learn more about the candidate while giving the candidate the impression that we have our act together as a team. How do you coordinate with others in the interview loop to give a balanced interview?

    Read the article

  • Good quality Secure Software Development Training [closed]

    - by Patrick
    Just had my annual appraisal and found out my company is willing to pay for training, exams, etc.! Woohoo (they kept that one quiet). I'm interested in doing a course on secure development techniques. Has anyone got suggestions for good-quality distance-learning courses in secure development (I could probably get a couple of days off to attend a conference or course if required)? We're mostly an MS .NET house, but I have no particular allegiance to MS or any other programming language (though, obviously, C++ is the best language in the world). I have 12 years of development experience working in (what are now) PCI DSS environments, including designing and developing a key management system, and I have some knowledge of basic attacks (XSS, injection, etc.). I would prefer a hard course I struggle with over a basic course I learn only three things from (though hopefully pitched somewhere near my level). A quick Google search found these two courses, which look good: http://www.sans.org/course/secure-coding-net-developing-defensible-applications https://www.isc2.org/csslpedu/default.aspx I don't really know how to choose between them, and finding other courses isn't going to make that job any easier, so I thought I'd ask those who know. EDIT: Hmm, care to share the reason for your downvote? It will help me learn how to use the site better...

    Read the article

  • Are you satisfied with your programming? [closed]

    - by Richart Bremer
    If you are a programmer, are you satisfied with it? I really love to code. I code all kinds of things. I used to play computer games, but they are not that interesting compared to developing a new search algorithm or similar. But sometimes I look into the future and see myself at 80 years old, sitting in front of a computer, and everything I will have written will have been rewritten because the programming languages no longer exist. I look back on my life and think, "that's it?". Everything I wrote in the past is virtual and ultimately gone. I have tried other things, but coding is the only thing that does it for me. And at the same time I think I am wasting my life. What about you? Disclaimer: I presume this is the best forum for this question. If you don't agree, suggest a better place to migrate the question to; if you can't, please don't close it. Thank you.

    Read the article

  • Learning project: custom C# CMS [closed]

    - by user313378
    I want to start a new project, a custom CMS, because I think it's a good starting point to apply the knowledge I have collected of C#, DDD, NHibernate, MVC3, and JavaScript. It would be great to hear some guidelines from experienced users here. I will use C# and ASP.NET MVC3 with the Razor view engine. I was also thinking of using the NHibernate ORM, though I don't know whether NHibernate will degrade performance. Initially MSSQL 2008 will be used, but an ORM layer means I could switch to some other database with no pain. I was thinking of creating a News entity with the properties Id, Name, Created, Updated, IntroText, Content, Title, Author, and ListPhotos. Every input will be validated with unobtrusive JavaScript in the view, and it will be validated at the database level as well. Maybe the best approach is to create an interface, implemented by my CMS client entities such as NewsEntity, that covers everything my clients might request in the future, at least the things not included in the entity right now: data consumed via RSS feed, WCF, etc. So, basically, I am asking for everything you think is a good idea, from documenting the project to coding it. Everyone is welcome to brainstorm about a custom CMS.

    Read the article

  • Selectively including files in a C# .NET web application [migrated]

    - by segnosaur
    I am attempting to modify an application with the following characteristics: it is written in C# .NET, using Visual Studio 2010, and it uses a master page to maintain commonality. The master page has the following directive:

    <%@ Master Language="C#" AutoEventWireup="true" CodeFile="mysheet.master.cs" Inherits="master_mysheet" %>

    Currently, the master page pulls in a common footer with an include file:

    #include file="inc/my-footer.inc"

    Here's what I want to do: I would like to modify the master page so that it reads in a footer based on the value contained in a session variable, i.e. (not real code, just something to give an idea of what I want):

    if session("x") = "a" then #include file="inc/my-footer1.inc" else #include file="inc/my-footer2.inc"

    My first instinct was to go with some VBScript:

    <script type="text/vbscript" language="vbscript"> document.write("vbscript example.") </script>

    However, the VBScript code doesn't run automatically on page load. Does anyone know the syntax I need to actually get this to work, i.e. to get the VBScript to run automatically on page load AND do the page include? Or is there a better way to go about this (perhaps by doing some coding in C#)? Note: I am experienced in C#; however, I haven't done any VBScript since the days of classic ASP, so my knowledge there is out of date.

    Read the article

  • Problem with understanding how to start

    - by Coolface
    Okay, this might be a little off-topic, but I'll try anyway. Sorry to bother you. I've been working as a sysadmin for at least 5 years now, and I quite enjoy the IT field in general. Somehow I was never much interested in programming, but I've always wanted to learn something, at least for easy, personal use. As a sysadmin I need scripting skills, so I learned shell scripting without much trouble; over time I also tried to learn Pascal, Delphi, and BASIC, and most recently Python. My problem is that when I try to learn programming, I just can't apply what I learn from the books to the real world. What I mean is, I understand that there are data structures, algorithms, variables, libraries, if-then logic, etc., but I just can't see how to apply these things when I want to build something real. Say I want something as simple as parsing a web page. I sketch a quick algorithm: get the web page, find a word on it, and write it to a file. On paper everything looks simple, but when I get to the coding I get stuck almost from the start. I try to read the code of real programs, but it's totally confusing, especially the big parts with many classes, and I quickly lose the trail of what the code does. I think I lack some fundamentals needed to see the big picture, but I don't really know what they might be. Or maybe I just don't have a passion for programming at all... My best skill is shell scripting, so I have no problem writing complex scripts, but that just isn't enough. Recently I read around 5 or 6 Python books, because everyone says it's so easy that even a kid can code something, but still not much luck: Python is good and easy, but I can't make anything harder than procedural, bash-style code; easy things work, but with harder things I'm still stuck. In college I was also not a math and tech guy, and I mostly liked studying non-tech subjects like economics and psychology. Maybe that's my problem? Anyway, any advice would be greatly appreciated.
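    For what it's worth, the exact algorithm sketched above (get a page, find a word, write it to a file) really is only a few lines of Python 3 once each step is written down. The following is a minimal illustrative sketch; the URL, search word, and output filename are placeholders, not anything from the post:

    # Fetch a page, find a word, write matching lines to a file.
    from urllib.request import urlopen

    def find_word_in_page(url, word, out_path):
        # Step 1: get the web page and decode its bytes to text (UTF-8 assumed).
        html = urlopen(url).read().decode("utf-8", errors="replace")
        # Step 2: find the word, keeping every line that mentions it.
        hits = [line for line in html.splitlines() if word in line]
        # Step 3: write the matches to a file.
        with open(out_path, "w", encoding="utf-8") as f:
            f.write("\n".join(hits))
        return len(hits)

    if __name__ == "__main__":
        print(find_word_in_page("https://example.com", "Example", "hits.txt"), "matching lines")

    Mapping each step of the paper algorithm onto one small, named function like this is often the missing bridge between "I understand the concepts" and working code.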

    Read the article

  • Why use Fragments?

    - by ahmed_khan_89
    I have read the documentation and some other questions' threads about this topic, and I don't really feel convinced: I don't see clearly where the limits of this technique lie. Fragments are now presented as a best practice: every Activity should basically be a host for one or more Fragments, and should not use a layout directly. Fragments were created in order to: (1) allow an Activity to use many fragments, to switch between them, and to reuse these units. But a Fragment is totally dependent on the Context of an Activity, so if I need something generic that I can reuse and handle in many Activities, I can create my own custom layouts or Views, and I will not have to care about the additional layer of complexity that fragments add. (2) Better handle different resolutions. OK for tablets versus phones: in a long process we can show two (or more) fragments in the same Activity on tablets, and one at a time on phones. But why should I always use fragments? (3) Handle callbacks to navigate between Fragments (e.g., if the user is logged in I show one fragment, otherwise I show another). Just look at how many bugs the Facebook SDK log-in has because of this to see that it is not really that simple. (4) Considering that an Android application is based on Activities, adding another life cycle inside the Activity would supposedly make it better to design an application: the modules, the scenarios, the data management, and the connectivity would all be better designed that way. But that is the answer of someone who is used to seeing the Android SDK and Android Framework through a Fragments lens; I don't think it's wrong, but I am not sure it gives good results, and it is really abstract. So: why would I complicate my life by coding more and using fragments everywhere? And if they are just a tool for some cases, why are they a best practice? What are those cases?

    Read the article

  • Possible to create a "fake" forum (for prototyping) using HTML, JavaScript, jQuery, CSS? [closed]

    - by htmlNewbie
    I am trying to figure out whether it is possible to create a small forum without any use of a database or PHP coding. I have created a small (local, and it will only ever be local) webpage with a couple of menus. I have a forum button which takes me to another .html page. There I would like to create something that looks like a forum, and that you can somewhat interact with like a forum, without any database or PHP. I would probably want/need a form with heading and text inputs. When I submit some input, I want it displayed as a thread, probably on top of the other threads (which will have to be created beforehand). When I refresh, the forum will obviously reset to its default state, without saving what I just entered, since I'm not using a database to store any data. So new posts will not be saved, just displayed neatly when submitted. I'm doing this webpage with a forum just as a prototype, which is why it doesn't have to work like a professional forum. :) I would be very thankful for tips, tricks, ideas, or links to helpful threads.

    Read the article

  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user12707
    I'm working in an area related to simulation, and I am trying to design a data structure that can include random variables within matrices. I am currently coding in MATLAB. To motivate this, suppose I have the following matrix: [a b; c d]. I want a data structure that allows each of a, b, c, d to be either a real number or a random variable. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 20 and SD 40. The data structure I have in mind would assign no value to d. However, I also want to be able to write a function that takes in the structure, simulates a uniform(0,1) draw, obtains a value for d using an inverse CDF, and then spits out an actual numeric matrix. I have several ideas for doing this (all related to the MATLAB icdf function), but I would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible, since I will be working with very large matrices and memory will be an issue.
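    One general way to picture this (sketched below in Python with NumPy/SciPy rather than MATLAB, purely for illustration; the function and variable names are made up) is a matrix whose entries are either plain numbers or zero-argument samplers, realized on demand through the inverse CDF, exactly as the post proposes. In MATLAB itself, the direct analogue would be a cell array holding numbers and function handles, realized with icdf.

    import numpy as np
    from scipy.stats import norm

    def realize(symbolic, rng=None):
        # Turn a matrix of numbers/callables into a concrete numeric matrix.
        rng = rng or np.random.default_rng()
        out = np.empty((len(symbolic), len(symbolic[0])))
        for i, row in enumerate(symbolic):
            for j, entry in enumerate(row):
                # A callable entry is a random variable: draw u ~ uniform(0,1)
                # and push it through the inverse CDF.
                out[i, j] = entry(rng) if callable(entry) else entry
        return out

    # a = 1, b = -1, c = 2; d ~ Normal(mean 20, SD 40) via the inverse CDF.
    d = lambda rng: norm.ppf(rng.uniform(), loc=20, scale=40)
    M = [[1, -1], [2, d]]
    print(realize(M))

    Storing one function handle per random entry keeps the structure lean: deterministic entries cost one double each, and only the random entries carry the extra closure.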

    Read the article

  • Looking for some advice on the next steps to take [closed]

    - by mopsyd
    I am looking for some advice on the next step to take in developing my programming skills. I was directed here when asking this question on Stack Overflow. What I know already: I have a solid grasp of XHTML, XML, PHP, JavaScript, MySQL, and ActionScript; a working knowledge of VB; and a slight grasp of Java from tinkering with a Minecraft server. I had some brief exposure to the Unreal Engine in college, and I have some skills with SQL Server, MS SQL, Office integration, etc., as well as some knowledge of Asterisk and PBX/VOIP. I have been coding off and on since the age of 8, but I have no computer science education aside from what I have taught myself or learned from work and freelancing. I work in OS X mostly, but can use and troubleshoot Windows and Ubuntu fluently as well, and I'm decent with both the UNIX and DOS CLIs. What I'm considering: I'm looking to learn a scripting language to build web apps, help streamline the home server that I am building, and run shell scripts. Being able to help code games later is a big plus. My question: between Java, Ruby, Perl, and Python, which would be the best investment of my time, considering what I already know and the direction I would like to take my skill set? What are good resources for your suggested direction? Thanks in advance.

    Read the article

  • Licensing a website's code [on hold]

    - by RosiePea
    I just changed to a new contract that I want to use with all my future clients. I love this contract. It's in plain English, very readable, very understandable. It has this statement regarding ownership of the website after it's been paid for: After any outstanding balance for the project is paid, we will assign to you all copyrights in the graphical and visual elements of the design that we will create under the scope of this project. However, we will retain the copyright to all coding elements, but will provide you with a license for you to use these elements in the deliverables of this project. What is this license of which it speaks? I understand the concept: I maintain all rights to my code but allow them to use it in this particular website. That part's new in this contract, and I like it a lot. But now... what? I have to come up with a license to hand the client when the website is paid for. But which license? And do I physically (or electronically) give them something, a document kind of like the contract itself? I've been reading all about licenses all day today and I'm no closer to answering this question. Any words of advice out there?

    Read the article

  • What is the most effective approach to learn an unfamiliar complex program? [closed]

    - by bdroc
    Possible Duplicate: How do you dive into large code bases? I have quite a bit of experience with different programming languages and with writing small, functional programs for a variety of purposes. My coding skills aren't what I have a problem with. In fact, I've written a decent web application from scratch for my startup. However, I have trouble jumping into unfamiliar applications. What's the most effective way to approach learning a new program's structure and/or architecture so that I can start attacking the code effectively? Are there useful tools for the respective languages (Python and Java are my two primary languages)? Should I start by just looking at function names, or at the documentation? How do you veterans approach this problem? I find this has to be done with minimal help from coworkers or contributors who are already familiar with the application and have better things to do than help me. I'd love to practice this skill on an open source project, so any suggestions for starting points (maybe mildly complex) would be great too!

    Read the article

  • SQLite, Python, Unicode, and non-UTF data

    - by Nathan Spears
    I started by trying to store strings in SQLite using Python, and got the message:

    sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings.

    Ok, I switched to Unicode strings. Then I started getting the message:

    sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós'

    when trying to retrieve data from the db. More research, and I started encoding it in utf_8, but then 'Sigur Rós' starts looking like 'Sigur RÃ³s'. Note: my console was set to display in 'latin_1', as @John Machin pointed out. What gives? After reading this, describing exactly the same situation I'm in, it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all. I didn't know much about Unicode and UTF before I started this process. I've learned quite a bit in the last couple of hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 without mangling it. If there isn't, why would sqlite "highly recommend" I switch my application to Unicode strings? I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours, so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way, please tell me and I'll update, or one of you senior guys can update.

    Summary of answers

    Let me first state the goal as I understand it. The goal in processing various encodings, if you are trying to convert between them, is to understand what your source encoding is, then convert it to unicode using that source encoding, then convert it to your desired encoding. Unicode is a base, and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because the characters aren't in the same places as in, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In Python, the process of getting to unicode and into another encoding looks like:

    str.decode('source_encoding').encode('desired_encoding')

    or, if the str is already in unicode:

    str.encode('desired_encoding')

    For SQLite I didn't actually want to encode it again; I wanted to decode it and leave it in unicode format. Here are four things you might need to be aware of as you try to work with unicode and encodings in Python:

    1. The encoding of the string you want to work with, and the encoding you want to get it to.
    2. The system encoding.
    3. The console encoding.
    4. The encoding of the source file.

    Elaboration:

    (1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ASCII (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows. Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on.

    (2) Python has a default system encoding that gets set when Python starts and can't be changed during runtime. See here for details. Dirty summary... well, here's the file I added:

    # sitecustomize.py
    # this file can be anywhere in your Python path,
    # but it usually goes in ${pythondir}/lib/site-packages/
    import sys
    sys.setdefaultencoding('utf_8')

    This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, Python tries to decode "str" to unicode based on the default system encoding.

    (3) If you're using IDLE or the command-line Python, I think that your console will display according to the default system encoding. I am using PyDev with Eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm that what I was doing was working.

    (4) If you want to have some test strings, e.g. test_str = "ó", in your source code, then you will have to tell Python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like so at the top of your source code file:

    # -*- coding: utf_8 -*-

    If you don't have this information, Python attempts to parse your code as ASCII by default, and so:

    SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details

    Once your program is working correctly, or if you aren't using Python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function. I wrote a demo function, pasted at the bottom of this gigantic mess, that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run:

    '?' = original char <type 'str'> repr(char)='\xf3'
    '?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data
    'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
    '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Now I will change the system and console encoding to latin_1, and I get this output for the same input:

    'ó' = original char <type 'str'> repr(char)='\xf3'
    'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3'
    'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
    '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8:

    '?' = original char <type 'str'> repr(char)='\xf3'
    '?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3'
    '?' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3'
    '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data

    Here everything still works the same as last time, but the console can't display the output correctly. Etc. The function below also displays more information than this, and hopefully will help someone figure out where the gap in their understanding is. I know all this information is in other places and more thoroughly dealt with there, but I hope this is a good kickoff point for someone trying to get coding with Python and/or SQLite. Ideas are great, but sometimes source code can save you a day or two of trying to figure out what functions do what.

    Disclaimers: I'm no encoding expert; I put this together to help my own understanding. I kept building on it when I should probably have started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes; they are just the two I was playing around with because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows.

    #!/usr/bin/env python
    # -*- coding: utf_8 -*-

    import os
    import sys

    def encodingDemo(str):
        validStrings = ()
        try:
            print "str =", str, "{0} repr(str) = {1}".format(type(str), repr(str))
            validStrings += ((str, ""),)
        except UnicodeEncodeError as ude:
            print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
            print ude

        try:
            x = unicode(str)
            print "unicode(str) = ", x
            validStrings += ((x, " decoded into unicode by the default system encoding"),)
        except UnicodeDecodeError as ude:
            print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string."
            print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()),
            print ude
        except UnicodeEncodeError as uee:
            print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
            print uee

        try:
            x = str.decode('latin_1')
            print "str.decode('latin_1') =", x
            validStrings += ((x, " decoded with latin_1 into unicode"),)
            try:
                print "str.decode('latin_1').encode('utf_8') =", str.decode('latin_1').encode('utf_8')
                validStrings += ((x, " decoded with latin_1 into unicode and encoded into utf_8"),)
            except UnicodeDecodeError as ude:
                print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t",
                print ude
        except UnicodeDecodeError as ude:
            print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t",
            print ude
        except UnicodeEncodeError as uee:
            print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",
            print uee

        try:
            x = str.decode('utf_8')
            print "str.decode('utf_8') =", x
            validStrings += ((x, " decoded with utf_8 into unicode"),)
            try:
                print "str.decode('utf_8').encode('latin_1') =", str.decode('utf_8').encode('latin_1')
            except UnicodeDecodeError as ude:
                print "str.decode('utf_8').encode('latin_1') didn't work. The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t",
                validStrings += ((x, " decoded with utf_8 into unicode and encoded into latin_1"),)
                print ude
        except UnicodeDecodeError as ude:
            print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t",
            print ude
        except UnicodeEncodeError as uee:
            print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", uee

        print
        print "Printing information about each character in the original string."
        for char in str:
            try:
                print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char))
            except UnicodeDecodeError as ude:
                print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude)
            except UnicodeEncodeError as uee:
                print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee)
                print uee

            try:
                x = unicode(char)
                print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x))
            except UnicodeDecodeError as ude:
                print "\t'?' = unicode(char) ERROR: {0}".format(ude)
            except UnicodeEncodeError as uee:
                print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)

            try:
                x = char.decode('latin_1')
                print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x))
            except UnicodeDecodeError as ude:
                print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude)
            except UnicodeEncodeError as uee:
                print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)

            try:
                x = char.decode('utf_8')
                print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x))
            except UnicodeDecodeError as ude:
                print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude)
            except UnicodeEncodeError as uee:
                print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee)

            print

    x = 'ó'
    encodingDemo(x)

    Much thanks for the answers below, and especially to @John Machin for answering so thoroughly.
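    One footnote to the summary above: the post's code is Python 2, where str is a byte string. On Python 3 the same decode/encode round trip is easier to see, because bytes and text are separate types. A minimal sketch:

    # Python 3 illustration of the round trip described above.
    raw = b'\xf3'                              # the latin_1 byte for 'ó'
    text = raw.decode('latin_1')               # bytes -> str: 'ó'
    utf8_bytes = text.encode('utf_8')          # str -> bytes: b'\xc3\xb3'
    assert utf8_bytes.decode('utf_8') == text  # lossless round trip
    print(text, utf8_bytes)

    Decoding those utf_8 bytes as latin_1 instead reproduces exactly the 'Ã³' mojibake shown above.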

    Read the article

  • Catch Oracle Today and Tomorrow at Forrester’s Customer Experience Forum 2012 East

    - by Christie Flanagan
    Continuing our coverage of the customer experience revolution this week, don't miss a chance to catch up with Oracle at Forrester's Customer Experience Forum 2012 East, today and tomorrow in New York City. The theme for this year's Forum is "Outside In: The Power Of Putting Customers At The Center Of Your Business," and it will take a look at important questions surrounding how to transform your company in order to take best advantage of the customer experience revolution: Why is customer experience the greatest untapped source of cost savings and increased revenue today? What is the key to understanding and taking control of your customer experience ecosystem? What are the six essential customer experience disciplines? Which companies have adopted best-in-class customer experience practices? How do customer experience strategies drive differentiating activities and processes at top companies? Which organizations appoint a chief customer officer to lead their customer experience efforts? What is the future of customer experience? How can you design an enterprise-wide customer experience? How can you measure the results of your customer experience efforts? As a gold sponsor of the event, Oracle offers a number of ways to interact while you're attending the Forum. Here are some of the highlights:

    Oracle Speaking Session
    Tuesday, June 26, 2:10pm – 2:40pm
    The Customer And YOU — Today's Winners Are Defined By Customer Experience
    Anthony Lye, Senior Vice President of Customer Relationship Management, Oracle
    Come hear Anthony Lye, Senior Vice President of Customer Relationship Management at Oracle, explain how leading companies are investing in customer experience solutions to enrich all interactions between a customer and their company. He will discuss Oracle's vision for transforming your customer engagement, insight, and execution into a connected, personalized, and rewarding experience across all touchpoints and interactions. He will demonstrate how great customer experiences generate real business results by attracting more customers, retaining more customers, and generating more sales while improving operational efficiency.

    Solution Showcase
    Tuesday, June 26th
    9:45am – 10:30am – Morning Networking Break in the Solutions Showcase
    11:45am – 1:15pm – Networking Lunch and Dessert in the Solutions Showcase
    2:40pm – 3:25pm – Afternoon Break in the Solutions Showcase
    5:30pm – 7:00pm – Networking Reception in the Solutions Showcase
    Wednesday, June 27th
    9:45am – 10:30am – Morning Networking Break in the Solutions Showcase
    12:20pm – 1:20pm – Networking Lunch and Dessert in the Solutions Showcase
    We hope to see you there!

    Webcast: Learn How Ancestry.com Delivers Exceptional Online Customer Experience with Oracle WebCenter
    Date: Thursday, June 28, 2012
    Time: 10:00 AM PDT / 1:00 PM EDT
    Ancestry.com is the world's largest online family history resource, providing an engaging customer experience to more than 1.7 million members. With a wealth of learning resources and a worldwide community of family history enthusiasts, Ancestry.com helps people discover their roots and tell their family stories. Key to Ancestry.com's success has been the delivery of an online customer experience that converts site visitors into paying subscribers and keeps them coming back. Register now to learn how Ancestry.com delivers an exceptional customer experience using Oracle WebCenter Sites.

    Read the article

  • Developer Training – Importance and Significance – Part 1

    - by pinaldave
    Developer Training – Importance and Significance – Part 1
    Developer Training – Employee Morals and Ethics – Part 2
    Developer Training – Difficult Questions and Alternative Perspective – Part 3
    Developer Training – Various Options for Developer Training – Part 4
    Developer Training – A Conclusive Summary – Part 5

    Can anyone remember their final day of schooling? This is probably a silly question because – of course you can! Many people mark this as the most exciting, happiest day of their life. It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field.

    Beginning in the Real World

    However, many former students will be disappointed to find out that once they become employees, learning is not over. Many companies are discovering the importance and benefits of training their employees. You can breathe a sigh of relief, though, because for this kind of training there are usually no tests! We often think that we go to school in our younger years to do all our learning at once, and that for the rest of our lives we simply use that knowledge. But in so many cases, and especially for developers, the opposite is true. It takes many years of school to learn the basics of a field, and then our careers are spent learning to become experts. For this, and so many other reasons, training is very important. Example one: developer training leads to better employees. A company is only as good as the people it employs, and one way to ensure that you have employed the right candidate is through training. Training can take a regular "stone" and polish it into a "diamond." Employees who have been well trained will be better at their jobs and produce a better product.

    The Most Expensive Resource

    Did you know that one of the most expensive operating costs for any company is not buying goods or advertising, but its employees – especially having to hire new employees? Bringing in new people, getting them up to speed, and providing them with perks to attract them to the company is a huge cost. So employee retention – keeping the employees you already have, and keeping them happy – is incredibly important from a business perspective. And research shows that a well-trained employee is a happy employee: they feel more confident in their job, happier in their position, and more cared about – and are therefore less likely to leave in search of a better job. Employee training leads to better retention.

    Good Morale

    On the subject of keeping employees happy in order to keep them at a company, the complementary research shows that happier employees are more efficient and overall better at their jobs. You don't have to be a scientist to figure out why this is true. An employee who feels that his company cares about him and his educational future will work harder for the company. He or she will put in that extra hour during the busy season that makes all the difference in the end. Good morale is good for the company. And if good morale is better for the company, it goes hand in hand with something even better – efficiency. An employee who is well trained obviously knows more about their job and all its technical aspects. That means that when a problem crops up – and they inevitably do – the employee will be well equipped to deal with it, with fewer issues and no need to go searching for help from higher up. When employees are well trained, companies run more smoothly.

    A Better Product

    Of course, all of these "pros" for employee training lead up to the one thing that companies truly care about – a better product. We have shown that employees who have been trained to be competitive in the market are happier at the company, more efficient, and higher in morale. The overall result is that the company's product – whether it is a database, a piece of equipment, or even a physical good – is better. And a better product will always be more competitive on the market.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • CodePlex Daily Summary for Tuesday, January 11, 2011

    CodePlex Daily Summary for Tuesday, January 11, 2011

    Popular Releases
    • ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OpenStreetMap 1.1 beta2: This is the beta2 release for the ArcGIS Editor for OpenStreetMap version 1.1. Changes from version 1.0: Multi-part geometries are now supported. Homogeneous relations (consisting of only lines or only polygons) are converted into the appropriate multi-part geometry. Mixed relations and super relations are maintained and tracked in a stand-alone relation table. The underlying editing logic has changed. As opposed to tracking the editing changes upon "Save edit" or "Stop edit" the changes a...
    • VSSpeedster - Parallel Builds for VS: VSSpeedster 1.2 (beta): Improved parallel builds; cancel a running parallel build using Ctrl+Break.
    • ASP.NET Comet Ajax Library (Reverse Ajax - Server Push): Multiple server ASP.NET Reverse Ajax: This sample project demonstrates how easy it is to scale your web applications via PokeIn.
    • Hawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5: In case you are running x86 Windows and you installed Release 1.2.4, you should consider upgrading to this release (1.2.5), as it appears Hawkeye is broken on x86 OS. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (see issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile...
    • Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with the latest PHP open-source applications, and several issue fixes. To improve the performance of your application using MySQL, please use the Managed MySQL Extension for Phalanger. Changes made within this release include the following: New features available only in Phalanger. Full support of Multi-Script-Assemblies was implemented; you can build your application into several DLLs now. Deploy them separately t...
    • EnhSim: EnhSim 2.3.0: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84). To use the GUI you must have the .NET 4.0 Framework installed (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992). Changed how flame shoc...
    • AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available. Shows a list of recent mastery files in the Editor Tab (requested by quite a few people). Updater: update information is now scrollable; added a button to launch AutoLoL after updating is finished; updated the UI to match that of AutoLoL. Fix: detects and resolves 'Read Only' state on Version.xml.
    • Extended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 release? BusyIndicator, ButtonSpinner, ChildWindow, ColorPicker - Updated (Breaking Changes), DateTimeUpDown - New Control, Magnifier - New Control, MaskedTextBox - New Control, MessageBox, NumericUpDown, RichTextBox, RichTextBoxFormatBar - Updated, .NET 3.5 binaries and source. Please note: the Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...
    • sNPCedit: sNPCedit v0.9d: Added an elementclient coordinate catcher: to catch coordinates, select a target (in game), i.e. your char, NPC or monster, then click the button and the coordinates + direction will be transferred to the selected row in the table. Corrected labels from Rot to Direction (because it is a vector).
    • Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.7 beta Released: Hi, today Visifire is released along with one new feature: the Inlines property has been implemented in Title, so you can now customize the text content in Title. Please check out the Visifire documentation for more information. This release contains fixes for the following bugs: styles for chart elements were not working as expected; bar chart was not drawn properly if the AxisMinimum property was set to a value above the zero base line; in DateTime axis, AxisLables were no...
    • Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release; there are no new features. Work items 28629, 29172, 28722, 27626, 28074, 29164, 27659, 27900; many documentation updates and fixes; proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.
    • StyleCop for ReSharper: StyleCop for ReSharper 5.1.14980.000: A considerable amount of work has gone into this release. Huge focus on performance around the violation scanning subsystem: caching added to reduce IO operations around reading and merging of settings files; caching added to reduce creation of expensive objects. Users should notice a considerable perf boost and a decrease in memory usage. Bug fixes: StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the correct path ...
    • VivoSocial: VivoSocial 7.4.1: New release with bug fixes and updates for performance.
    • UltimateJB: Ultimate JB 2.03 PL3 KAKAROTO + HERMES + Spoof 3.5: Here is a release that many have been waiting for impatiently: the PL3 KAKAROTO version integrates his latest modifications and now supports firmware 2.43!!! Conclusion: UltimateJB203PSXXXDEFAULTKAKAROTO => no spoof, but available for the following PS3 firmwares: 3.41_kiosk 3.41 3.40 3.30 3.21 3.15 3.10 3.01 2.76 2.70 2.60 2.53 2.43; UltimateJB203PS341_HERMES => no spoof, but hermes v4b; UltimateJB203PS341HERMESSPOOF35X => hermes 4b + spoof of firmwares 3.50 and 3.55 au li...
    • .NET Extensions - Extension Methods Library for C# and VB.NET: Release 2011.03: Added lots of new extensions and new projects for MVC and Entity Framework. object.FindTypeByRecursion, Int32.InRange, String.RemoveAllSpecialCharacters, String.IsEmptyOrWhiteSpace, String.IsNotEmptyOrWhiteSpace, String.IfEmptyOrWhiteSpace, String.ToUpperFirstLetter, String.GetBytes, String.ToTitleCase, String.ToPlural, DateTime.GetDaysInYear, DateTime.GetPeriodOfDay, IEnumberable.RemoveAll, IEnumberable.Distinct, ICollection.RemoveAll, IList.Join, IList.Match, IList.Cast, Array.IsNullOrEmpty, Array.W...
    • EFMVC - ASP.NET MVC 3 and EF Code First: EFMVC 0.5 - ASP.NET MVC 3 and EF Code First: Demo web app using ASP.NET MVC 3, Razor and EF Code First.
    • VidCoder: 0.8.0: Added x64 version. Made the audio output preview more detailed and accurate. If the chosen encoder or mixdown is incompatible with the source, the fallback that will be used is displayed. Added "Auto" to the audio mixdown choices. Reworked non-anamorphic size calculation to work better with non-standard pixel aspect ratios and cropping. Reworked Custom anamorphic to be more intuitive and allow display width to be set automatically (thanks, Statick). Allowing higher bitrates for 6-ch...
    • .NET Voice Recorder: Auto-Tune Release: This is the source code and binaries to accompany the article on the Coding 4 Fun website. It is the Auto Tuner release of the .NET Voice Recorder application.
    • BloodSim: BloodSim - 1.3.2.0: Simulation Log is now automatically disabled and hidden when running 10 or more iterations. Hit and Expertise are now entered by Rating, and include an option for a Racial Expertise bonus. Added an option for the boss to use a periodic magic ability (Dragon Breath). Added an option for the boss to periodically Enrage, gaining a Damage/Attack Speed buff.
    • ASP.NET MVC CMS (Using CommonLibrary.NET): CommonLibrary.NET CMS 0.9.5 Alpha: A simple yet powerful CMS system in ASP.NET MVC 2 using C# 4.0. ActiveRecord-based components for Blogs, Widgets, Pages, Parts, Events, Feedback, BlogRolls, Links. Includes several widgets (tag cloud, archives, recent, user cloud, links, twitter, blog roll and more). Built using the http://commonlibrarynet.codeplex.com framework (uses TDD, DDD, Models/Entities, Code Generation). Can run with In-Memory Repositories or a SQL Server database. See the Documentation tab for ins...

    New Projects
    • .NET Event Spy: Full information available here: http://martincarolan.blogspot.com/2011/01/secret-project.html. A simple development/debugging tool that hooks into and monitors events raised on any .NET object.
    • 3DTweet: 3DTweet is an effort to make tweets appear in an aesthetic manner to users of Windows Phone. It is developed using VS2010 Express.
    • Agile .NET with SCRUM and XP: Source code for the book Apress Professional Agile .NET Development with SCRUM and XP.
    • Beskid Niski Agroturystyka: Travel Poland: tourism in the Low Beskids. Agritourism in the village of Losie by the Klimkowka reservoir.
    • Coding better: A better coding lab for new .NET features.
    • FBApp: A simple Facebook app I was busy with over the holidays as an experiment to try out the Facebook API. It is currently not complete, but I wanted to get some criticism on it for my first web app. It is developed using WPF and C#.
    • Freemium Helper for WebMatrix: Provides an easy way to apply the Freemium model to your WebMatrix site. Using different user groups (or roles), it allows you to easily enable or disable features on your pages depending on the stock-keeping unit the user has paid for.
    • Haversine Distance Calculation: A very small project that implements the Haversine formula, which calculates the great-circle distance between two points on the earth's surface. The points are latitude/longitude coordinates in decimal degrees. The formula is implemented client side with JavaScript and server side with C#. (A minimal sketch of the formula appears after this listing.)
    • Hexa.Core: Hexa.Core is our implementation of the Domain Driven Design architecture. It also provides a set of helper classes for ASP.NET and WCF development.
    • Minecraft NBT reader: A simple Minecraft NBT reader.
    • MobSoft: MobSoft is a Silverlight-based news application designed to test the new functionality in Silverlight 4.
    • netduino Helpers: A C# driver set for common hardware components, with convenient wrappers around complex .NET Micro Framework features such as analog joysticks, real-time clock, 8x8 LED matrix, shift register, runtime assembly & resource loader, bitmaps, etc.
    • NewsGator Social Connectors for SharePoint 2010: Contains social connectors for the NewsGator SharePoint platform; supports sending messages to Twitter and LinkedIn just by putting tags in the text (#li to send to LinkedIn, #tw to send to Twitter).
    • Non Profit Contact Relationship Management: Non-profit contact relationship management software intended to help those in the non-profit arena manage donors, sponsors, and prospects.
    • OpenAGE: OpenAGE, short for Open Advanced Game Engine, aims to develop a new advanced game engine strictly for the PC and Xbox 360 using XNA 4.0 and Visual Studio 2010.
    • OpenAutoPoster: OpenAutoPoster automates some of the boring everyday tasks of aggregating, linking and posting that haunt content creators.
    • Phefer WoodTurning Sketcher: Draw out your own turnings before you hit the wood. Import images and trace around them; print them out with the length and width measurements.
    • Simple Script Interpreter - A simple GPLEX/GPPG (Lex/Yacc) Primer: Simple Script is a simple implementation of an interpreted language built with GPLEX and GPPG (Lex/Yacc). It's developed in C#.
    • SP2010Tutorials: Code for learning SharePoint 2010.
    • The Social Developer: A social developer tool for programmers to create and share projects using the .NET Framework and other technologies, with a social approach to sharing the workload and the resources needed to develop high-level applications.
    • Traffic-sign Classification: Traffic sign shape classification and localization.
    • unnamedyet: Experimental! For systems researchers. Goal: to develop a praxis such that, with a finite and discrete set of descriptive terms, it is possible to self-demonstrate and execute any given proposition.
    • VSSpeedster - Parallel Builds for VS: Improve the performance of your Visual Studio: parallel builds integrated in Visual Studio.
    • Webservice Xslt Transformer WebPart for SharePoint 2010: Makes it much easier for SharePoint developers and administrators to call a web service and transform the results directly to HTML by providing their own custom XSLT. The properties can be set on the web part using the UI.
    • WilWaNet.HASH: An ASP.NET MVC web site designed for tracking nutrition for the purposes of losing weight. Tracks calories, fat calories, fat grams and saturated fat along with daily weight and exercise. Includes daily Basal Metabolic Rate calculation and graphing functions.
    • WP7 Try it 01: A first try at WP7.
    • WPF TryIt 01: A household-registration management WPF application.
    • WX Alerter CAP/XML: An NWS alerter using the CAP 1.1 alerting protocol. The goal of this project is to consume weather alerts from the NWS site. The user selects the city or SAME code/zone to watch. As alerts trigger, notices display and info fills the Alert Tab.
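    The Haversine project above is worth a small digression, since the formula it describes is compact enough to show. Here is a minimal C# sketch of the great-circle calculation (an illustration of the formula, not the project's actual source):

```csharp
using System;

// A minimal sketch of the Haversine great-circle distance described above;
// inputs are latitude/longitude in decimal degrees.
static class Haversine
{
    const double EarthRadiusKm = 6371.0; // mean Earth radius

    public static double DistanceKm(double lat1, double lon1, double lat2, double lon2)
    {
        double dLat = ToRadians(lat2 - lat1);
        double dLon = ToRadians(lon2 - lon1);

        // a = sin^2(dLat/2) + cos(lat1) * cos(lat2) * sin^2(dLon/2)
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);

        double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
        return EarthRadiusKm * c;
    }

    static double ToRadians(double degrees) => degrees * Math.PI / 180.0;
}

// Example: London to Paris is roughly 344 km.
// Console.WriteLine(Haversine.DistanceKm(51.5074, -0.1278, 48.8566, 2.3522));
```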

    Read the article

  • Where Next for Google Translate? And What of Information Quality?

    - by ultan o'broin
    Fascinating article in the UK Guardian newspaper called "Can Google break the computer language barrier?" In it, Andreas Zollman, who works on Google Translate, comments that the quality of Google Translate's output relative to the amount of data required to create that output is clearly now falling foul of the law of diminishing returns. "Each doubling of the amount of translated data input led to about a 0.5% improvement in the quality of the output," he suggests, but the doublings are not infinite. "We are now at this limit where there isn't that much more data in the world that we can use," he admits. "So now it is much more important again to add on different approaches and rules-based models."

    The Translation Guy has a further discussion on this, called "Google Translate is Finished." He says: "And there aren't that many doublings left, if any. I can't say how much text Google has assimilated into their machine translation databases, but it's been reported that they have scanned about 11% of all printed content ever published. So double that, and double it again, and once more, shoveling all that into the translation hopper, and pretty soon you get the sum of all human knowledge, which means a whopping 1.5% improvement in the quality of the engines when everything has been analyzed. That's what we've got to look forward to, at best, since Google spiders regularly surf the Web, which in its vastness dwarfs all previously published content. So to all intents and purposes, the statistical machine translation tools of Google are done. Outstanding job, Googlers. Thanks."

    Surprisingly, all this analysis hasn't raised much comment from the fans of machine translation, or from its detractors either for that matter. Perhaps it's the season of goodwill? What is clear to me, however, is that Google Translate isn't really finished (in any sense of the word). I am sure Google will investigate and come up with new rule-based translation models to enhance what they have already, models that will also scale effectively where others didn't. So too will they harness human input, which really is the way to go to train MT in the quality direction.

    But that aside, what does it say about the quality of the data that is being used for statistical machine translation in the first place? From the Guardian article it's clear that a huge humanly translated corpus drove the gains for Google Translate, and now what's left is the dregs of badly translated and poorly created source materials that just can't deliver quality translations. There's a message about information quality there, surely.

    In the enterprise applications space, where we have some control over content, this whole debate reinforces the relationship between information quality at source and translation efficiency, regardless of the technology used to do the translation. But as more automation comes to the fore, that information quality is even more critical if you want anything approaching a scalable solution. This is important for user experience professionals. Issues like user-generated content translation, multilingual personalization, and scalable language quality are central to a superior global UX; it's a competitive issue we cannot ignore.
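    The arithmetic behind these quotes is simple enough to sketch. The 0.5%-per-doubling figure is Zollman's and the 11% corpus share is the reported figure; the rest of this tiny C# snippet is purely illustrative:

```csharp
using System;

// A back-of-the-envelope sketch of the diminishing-returns arithmetic quoted
// above: starting from ~11% of all printed content, only about three doublings
// remain, and at ~0.5% quality gain per doubling that caps out near 1.5%.
class TranslationReturns
{
    static void Main()
    {
        const double gainPerDoubling = 0.5; // percent improvement per doubling of data

        for (int doublings = 1; doublings <= 3; doublings++)
        {
            double corpusShare = 11.0 * Math.Pow(2, doublings); // percent of printed content
            double totalGain = doublings * gainPerDoubling;
            Console.WriteLine(
                $"After {doublings} doubling(s): ~{corpusShare:F0}% of printed content, ~{totalGain:F1}% total quality gain");
        }
    }
}
```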

    Read the article

  • The Next Wave of PeopleSoft Capabilities for the Staffing Industry Is Here

    - by Mark Rosenberg
    With the release of PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 in January this year, we introduced substantial new capabilities for our Staffing Industry customers. Through a co-development project with Infosys Limited, we have enriched Oracle's PeopleSoft Staffing Solution with new tools aimed at accelerating and improving the quality of job order fulfillment, increasing branch recruiter productivity, and driving profitable growth.

    Staffing industry firms succeed based on their ability to rapidly, cost-effectively, and continually fill their pipelines with new clients and job orders, recruit the best talent, and match orders with talent. Pressure to execute in each of these functional areas is even more acute on staffing firms as contingent labor becomes a more substantial and permanent part of the workforce mix. In an industry that creates value through speedy execution, there is little room for manual, inefficient processes and brittle, custom integrations, which throttle profitability and growth. The latest wave of investment in the PeopleSoft Staffing Solution focuses on generating efficiency and flexibility for our customers.

    Simplicity
    To operate profitably and continue growing, a Staffing enterprise needs its client management, recruiting, order fulfillment, and other processes to function in harmony. Most importantly, they need to be simple for recruiters, branch managers, and applicants to access and understand. The latest PeopleSoft Staffing Solution set of enhancements includes numerous automated defaulting mechanisms and information-rich dashboard pagelets that even a new employee can learn quickly. Pending Applicant, Agenda management, Search, and other pagelets are just a few of the newest, easy-to-use tools that not only aggregate and summarize information, but also provide instant access to applicants, tasks, and key reports for branch staff.

    Productivity
    The leading firms in the Staffing industry are those that can more efficiently orchestrate large numbers of candidates, clients, and orders than their competitors can. PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 delivers productivity boosters that Staffing firms can leverage to streamline tasks and processes for competitive advantage. For example, we enhanced the Recruiting Funnel, which manages the candidate on-boarding process, with a highly interactive user interface. It integrates disparate Staffing business processes and exploits new PeopleTools technologies to offer a superior on-boarding user experience. Automated creation of agenda items and assignment tasks for each candidate minimizes setup and organizes assignment steps for the on-boarding process. Mass updates of tasks and instant access to the candidate overview page (which we also expanded), candidate event status, event counts, and other key data enable recruiters to better serve clients and candidates.

    Lower TCO
    Constructing and maintaining an efficient yet flexible labor supply chain can be complicated, let alone expensive. Traditionally, Staffing firms have been challenged in controlling their technology cost of ownership because connecting candidate and client-facing tools involved building and integrating custom applications and technologies and managing staff turnover, placing heavy demands on IT and support staff. With PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2, there are two major enhancements that aggressively tackle these challenges. First, we added another integration framework to enable cost-effective linking of the Staffing firm's PeopleSoft applications and its job board distributors. (The first PeopleSoft 9.1 Feature Pack, released in March 2011, delivered an integration framework to connect to resume parsing providers.) Second, we introduced the teaming concept to enable work to be partitioned to groups, as well as individuals. These two capabilities, combined with a host of others, position Staffing firms to configure and grow their businesses without growing their IT and overhead expenditures.

    For our Staffing Industry customers, PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 is loaded with high-value tools aimed at enabling and sustaining a flexible labor supply chain. For more information, contact [email protected] or [email protected].

    Read the article

  • Beyond Chatting: What ‘Social’ Means for CRM

    - by Divya Malik
    A guest post by Steve Diamond, Senior Director, Outbound Product Management, Oracle

    In a recent post on the Oracle Applications blog, my colleague Steve Boese asked three questions related to the widespread popularity and incredibly rapid growth of Facebook, Pinterest, and LinkedIn. Steve then addressed the many applications for collaborative solutions in the area of Human Capital Management. So, in turning to a conversation about Customer Relationship Management (CRM) and Sales Force Automation (SFA), let me ask you one simple question: how many sales people, particularly at business-to-business companies, consistently meet or beat their quotas by working alone, with no collaboration among fellow sales people, sales executives, employees in product groups, in service, in Legal, third-party partners, etc.? Hello? Is anybody out there? What's that cricket noise I hear? That's correct. Nobody!

    When it comes to Sales, introverts arguably have a distinct disadvantage. While it's certainly a truism that "success" in most professional endeavors requires working with people, it's a mandatory success factor in Sales. This fact became abundantly clear to me one early morning in the late 1990s when I joined the former Hyperion Solutions (now part of Oracle) and attended a Sales Award Ceremony. The Head of Sales at that time gave out dozens of awards – none of them to individuals and all of them to TEAMS of individuals. That's how it works in Sales. Your colleagues help provide you with product intelligence and competitive intelligence. They help you build the best presentations, pitches, and proposals. They help you develop the most killer RFPs. They align you with the best product people to ensure you're matching the best products for the opportunity and join you in critical meetings. They help knock the socks off your prospects in "bake off" demos. They bring in the best partners to either add complementary products to your opportunity or help you implement a solution. They work with you as a collective team.

    And so how is all this collaboration STILL typically done today? Through email. And yet we all silently or not so silently grimace about email. It's relatively siloed. It's painful to search. It's difficult to align by topic. And it's nearly impossible to re-trace meaningful and helpful conversations that occurred among a group or a team at some point in history.

    This is where social networking for Sales comes into play. It's about PURPOSEFUL social networking versus chattering. What is purposeful social networking? It's collaboration that's built around opportunities, accounts, and contacts. It's collaboration that delivers valuable context – on the target company and on key competitors, just to name two examples. It's collaboration that can scale to provide coaching for larger numbers of sales representatives, both for general purposes and, as we've largely discussed here, for specific 'deals.' And it's collaboration that allows a team of people to collectively edit and iterate on a document like an RFP or a soon-to-be killer presentation that is maintained in a central repository, with no time wasted searching for it or worrying about version control.

    But lest we get carried away, let's remember that collaboration "happens" among sales people whether there is specialized software to support it or not. The human practice of sales has not changed much in the last 80 to 90 years. Collaboration has been a mainstay during this entire time. But what social networking in general, and Oracle Social Networking in particular, delivers is the opportunity for sales teams to dramatically increase their effectiveness and efficiency – to identify and close more high-quality and lucrative opportunities more quickly. For most sales organizations, this is how the game is won. To learn more, please visit Oracle Social Network and Oracle Fusion Customer Relationship Management on oracle.com.

    Read the article

  • Major Analyst Report Chooses Oracle As An ECM Leader

    - by brian.dirking(at)oracle.com
    Oracle announced that Gartner, Inc. has named Oracle as a Leader in its latest "Magic Quadrant for Enterprise Content Management" in a press release issued this morning. Gartner's Magic Quadrant reports position vendors within a particular quadrant based on their completeness of vision and ability to execute.

    According to Gartner, "Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clearly articulated vision. In the context of ECM, they have strong channel partners, presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technology or vertical market. Leaders deliver a suite that addresses market demand for direct delivery of the majority of core components, though these are not necessarily owned by them, tightly integrated, unique or best-of-breed in each area. We place more emphasis this year on demonstrated enterprise deployments; integration with other business applications and content repositories; incorporation of Web 2.0 and XML capabilities; and vertical-process and horizontal-solution focus. Leaders should drive market transformation."

    "To extend content governance and best practices across the enterprise, organizations need an enterprise content management solution that delivers a broad set of functionality and is tightly integrated with business processes," said Andy MacMillan, vice president, Product Management, Oracle. "We believe that Oracle's position as a Leader in this report is recognition of the industry-leading performance, integration and scalability delivered in Oracle Enterprise Content Management Suite 11g."

    With Oracle Enterprise Content Management Suite 11g, Oracle offers a comprehensive, integrated and high-performance content management solution that helps organizations increase efficiency, reduce costs and improve content security. In the report, Oracle is grouped among the top three vendors for execution, and is the furthest to the right, placing Oracle as the most visionary vendor. This vision stems from Oracle's integration of content management right into key business processes, delivering content in context as people need it. Take a PeopleSoft Accounts Payable user as an example: as an employee processes an invoice, Oracle ECM Suite brings that invoice up on the screen so the processor can verify the content right in the process, improving speed and accuracy. Oracle integrates content into business processes such as Human Resources, Travel and Expense, and others, in major enterprise applications such as PeopleSoft, JD Edwards, Siebel, and E-Business Suite.

    As part of Oracle's Enterprise Application Documents strategy, you can see an example of these integrations in this webinar: Managing Customer Documents and Marketing Assets in Siebel. You can also get a white paper on the ROI Embry Riddle achieved using Oracle Content Management integrated with enterprise applications. Embry Riddle moved from a point solution for content management on accounts payable to an infrastructure investment; they are now using Oracle Content Management for accounts payable with Oracle E-Business Suite, and for student on-boarding with PeopleSoft e-Campus. They continue to expand their use of Oracle Content Management to address further use cases from a core infrastructure.

    Oracle also shows its vision in the ability to deliver content optimized for online channels. Marketers can use Oracle ECM Suite to deliver digital assets and offers as part of an integrated campaign that understands website visitors and ensures that they are given the most pertinent information and offers. Oracle also provides full lifecycle management through its built-in records management. Companies are able to manage the lifecycle of content (both records and non-records) through built-in retention management. And with the integration of Oracle ECM Suite and Sun Storage Archive Manager, content can be routed to the appropriate storage media based upon content type, usage data or other business rules. This ensures that the most accessed content is instantly available, and archived content is stored on a more appropriate medium like tape. You can learn more in this webinar: Oracle Content Management and Sun Tiered Storage.

    If you are interested in reading more about why Oracle was chosen as a Leader, view the Gartner Magic Quadrant for Enterprise Content Management.

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to catch code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
    • Bugs are found faster
    • Forces developers to write readable code (code that can be read without explanation or introduction!)
    • Optimization methods/tricks/productive programs spread faster
    • Programmers as specialists "evolve" faster
    • It's fun

    “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” – Wikipedia

    Nowhere does the definition mention whether it is better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the two approaches independently.

    Approach 1 – Code review before check-in
    The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality. (Image 1 – code review before check-in)
    Pros:
    • Everything that gets committed to source control is reviewed.
    • Minimizes the chances of smelly code making its way into the code base.
    • Decreases the cost of fixing bugs; remember, the earlier you find them, the lesser the pain in fixing them.
    Cons:
    • Development code freeze – since the changes aren't in source control yet, further development can only be done offline.
    • The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    • Inconsistent! Cumbersome to track the actual code review process.
    • Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

    Approach 2 – Code review after check-in
    The developer checks in, and random code reviews are performed on the checked-in code. (Image 2 – code review after check-in)
    Pros:
    • The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure zero FxCop, StyleCop and static code analysis issues before check-in; the code is cleaner and smell free even before the code review.
    • No offline development; developers can continue to develop against source control.
    Cons:
    • Bad code can easily make its way into the code base.
    • Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

    Approach 3 – Hybrid approach
    The community advocates a more hybrid approach, a blend of tooling and a human accountability quotient. (Image 3 – hybrid approach)
    1. Code review high-impact check-ins. It is not possible to review everything, and by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build yet.
    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate manual code review for individuals who keep making it onto this list of shame.
    3. During merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is easier because cherry-picking the merges is a possibility and the size of code being reviewed is still limited.
    Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put till the review is complete.

    What do the experts say? I asked a few experts what they thought of a "code review quality gate before checking in code":

    Terje Sandstrom | Microsoft ALM MVP
    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code lying around with no check-ins. Given a requirement that code is checked in in small pieces, 4-8 hours work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time (would be great to have it on CIs), so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

    David V. Corbin | Visual Studio ALM Ranger
    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics). The "pre-checkin" is usually for elements that may impact the project as a whole; think of it as another "gate" along with passing unit tests. The "post-checkin" may very well not be at the changeset level, but correlates to a review at the "user story" level. Again, this depends on the team dynamics in play.

    Robert MacLean | Microsoft ALM MVP
    I do not think there is a right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with reviewing after check-in, while in high-risk areas you need to do it before check-in. For example, those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team.

    Abhimanyu Singhal | Visual Studio ALM Ranger
    It depends on the scenario. We recommend post-check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches, or it can be rejected. Pre-check-in reviews are used when: 1. There is a high risk factor associated. 2. Reviewers generally (most of the time) have immediate availability. 3. The team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

    Thomas Schissler | Visual Studio ALM Ranger
    This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well that I'd like to point out. 1.) If you review every check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed in a later check-in and the dev may have already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes belonging to other PBIs.

    Jakob Leander | Sr. Director, Avanade
    In my experience, manual code review: 1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project). 2. When a project actually does it, they often do not do it right away = errors pile up. 3. Requires a lot of time discussing/defining the standard and for the team to learn it. However, code review is very important, since even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach to code review:
    - Architects up front do "at least one best practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security etc.
    - The dev lead on the project continuously browses the code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to "go bad" even during later code changes.
    - Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard. This has HUUGE benefits: you can easily tweak it to reach the level you desire together with the customer; it is easy to measure for both developers and management; it is 100% consistent across the code base; it gets validated all the time, so you never end up getting hammered by a customer review in the end; and it is easy to tell the developer that you do not want code back unless it has zero errors = minimal communication.
    You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the number of issues to grow uncontrolled. On the project I run, I require code analysis to have been run on code before check-in (a check-in rule). This means you have to have a clean compile (or CA won't run), so as an extra benefit there are very few broken builds. You can also change a few of the rules to compile as errors instead of warnings; I often do this for "missing dispose" issues, which you REALLY do not want in your app. Tip: place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS; a sketch of this setup appears at the end of this round-up). Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the three issues above, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

    Tirthankar Dutta | Director, Avanade
    I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase. Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code; it scales very well with the need to review less, by containment and by imposing architectural restrictions to emphasise the design.

    Bruno Capuano | Microsoft ALM MVP
    For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

    David Jobling | Global Sr. Director, Avanade
    Peer review is the only way to scale, and it's a great practice for everyone in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention.

    Mikkel Toudal Kristiansen | Manager, Avanade
    If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.

    Jesse Houwing | Visual Studio ALM Ranger
    I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html I'd like to add a few more points:
    - Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-check-in policy. The peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
    - Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up.
    - Use tools to catch the most common errors. Code Analysis/FxCop was already mentioned; HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
    - If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
    So my philosophy:
    - Use tooling as much as possible.
    - Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
    - Under stress, tighten the process; it's under stress that the problems of late reviews will really surface.
    - Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in/post-check-in doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them.

    I would love to hear what you think!
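    As a footnote to Jakob's tip above about promoting "missing dispose" warnings to errors and keeping the rule file inside the solution: here is a hedged sketch of what that configuration can look like. The file name and relative path are made up for illustration; CA2000 ("Dispose objects before losing scope") is the rule being alluded to, and the ToolsVersion may differ by VS release.

```xml
<!-- TeamRules.ruleset, checked in alongside the solution (the file name is hypothetical). -->
<RuleSet Name="Team Rules" Description="Shared code analysis rules" ToolsVersion="11.0">
  <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis"
         RuleNamespace="Microsoft.Rules.Managed">
    <!-- Promote "Dispose objects before losing scope" from warning to error. -->
    <Rule Id="CA2000" Action="Error" />
  </Rules>
</RuleSet>

<!-- In each .csproj, reference the ruleset with a *relative* path so the
     setting survives branching, per the tip above. -->
<PropertyGroup>
  <RunCodeAnalysis>true</RunCodeAnalysis>
  <CodeAnalysisRuleSet>..\TeamRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
```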

    Read the article

  • Oracle Fusion Procurement Designed for User Productivity

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience

    Oracle Fusion Procurement Design Goals
    In Oracle Fusion Procurement, we set out to create a streamlined user experience based on the way users do their jobs. Oracle has spent hundreds of hours with customers to get to the heart of what users need to do their jobs. By designing a procurement application around user needs, Oracle has crafted a user experience that puts the tools that people need at their fingertips. In Oracle Fusion Procurement, the user experience is designed to provide the user with information that will drive navigation rather than requiring the user to find information.

    One of our design goals for Oracle Fusion Procurement was to reduce the number of screens and clicks that a user must go through to complete frequently performed tasks. The requisition process in Oracle Fusion Procurement (Figure 1) illustrates how we have streamlined workflows. Oracle Fusion Self-Service Procurement brings together billing metrics, descriptions of the order, justification for the order, a breakdown of the components of the order, and the amount – all in one place. Previous generations of procurement software required the user to navigate to several different pages to gather all of this information. With Oracle Fusion, everything is presented on one page. The result is that users can complete their tasks in less time. The focus is on completing the work, not finding the work.

    Figure 1. Creating a requisition in Oracle Fusion Self-Service Procurement is a consumer-like shopping experience.

    Will Oracle Fusion Procurement Increase Productivity?
    To answer this question, Oracle sought to model how two experts working head to head – one in an existing enterprise application and another in Oracle Fusion Procurement – would perform the same task. We compared Oracle Fusion designs to corresponding existing applications using the keystroke-level modeling (KLM) method. This method is based on years of research at universities such as Carnegie Mellon and research labs like Xerox Palo Alto Research Center.

    The KLM method breaks tasks into a sequence of operations and uses standardized models to evaluate all of the physical and cognitive actions that a person must take to complete a task: what a user would have to click, how long each click would take (not only the physical action of the click or typing of a letter, but also how long someone would have to think about the page when taking the action), and the user interface changes that result from the click. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time is calculated. Task times from the model enable researchers to predict end-user productivity.

    For the study, we focused on modeling procurement business process task flows that were considered business or mission critical: high-frequency tasks and high-value tasks. The designs evaluated encompassed tasks that are currently performed by employees, professional buyers, suppliers, and sourcing professionals in advanced procurement applications. For each of these flows, we created detailed task scenarios that provided the context for each task, conducted task walk-throughs in both the Oracle Fusion design and the existing application, analyzed and documented the steps and actions required to complete each task, and applied standard time estimates to the operators in each task to estimate overall task completion times.
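    As an aside, the arithmetic behind KLM is simple enough to sketch in a few lines of C#. The operator times below are the commonly cited textbook estimates from the KLM literature (Card, Moran and Newell), and the two task flows compared are hypothetical, not taken from Oracle's study:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A minimal sketch of keystroke-level modeling (KLM). The operator times are
// the commonly cited textbook estimates; the flows are illustrative only.
class KlmEstimator
{
    static readonly Dictionary<char, double> OperatorSeconds = new Dictionary<char, double>
    {
        { 'K', 0.28 }, // press a key or button (average typist)
        { 'P', 1.10 }, // point with the mouse at a target on screen
        { 'H', 0.40 }, // "home" hands between keyboard and mouse
        { 'M', 1.35 }, // mental preparation before an action
    };

    // A task is a sequence of operators, e.g. "MHPK" = think, reach for the
    // mouse, point at a field, click.
    static double EstimateSeconds(string operators) =>
        operators.Sum(op => OperatorSeconds[op]);

    static void Main()
    {
        // Hypothetical comparison: the same step modeled as six navigations
        // versus two, mirroring the "six screens vs. two screens" idea.
        string legacyFlow = "MHPK" + "MPK" + "MPK" + "MPK" + "MPK" + "MPK";
        string fusionFlow = "MHPK" + "MPK";

        double legacy = EstimateSeconds(legacyFlow);
        double fusion = EstimateSeconds(fusionFlow);

        Console.WriteLine($"Legacy flow: {legacy:F2}s");
        Console.WriteLine($"Fusion flow: {fusion:F2}s");
        Console.WriteLine($"Predicted gain: {(legacy - fusion) / legacy:P0}");
    }
}
```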
    The Results
    The KLM method predicted that the Oracle Fusion Procurement designs would result in productivity gains in each task, ranging from 13 percent to 38 percent, with an overall productivity gain of 22.5 percent. These performance gains can be attributed to a reduction in the number of clicks and screens needed to complete the tasks. For example, creating a requisition in Oracle Fusion Procurement takes a user through only two screens, while ordering the same item in a previous version requires six screens to complete the task.

    Modeling user productivity has resulted not only in advances in Oracle Fusion applications, but also in advances in other areas. We leveraged lessons learned from the KLM studies to improve products like Oracle E-Business Suite (EBS). New user experience features in EBS 12.1.3, such as navigational improvements to the main menu, a Google-type search using auto-suggest, embedded analytics, and an in-context list of values tool, help to reduce clicks and improve efficiency. For more information about KLM, refer to the Measuring User Productivity blog.

    Read the article

  • CRMIT’s High Value CRM++ Plugins for CRM On Demand

    - by Soumo Das
    With customer satisfaction and experience being the two most important factors these days, businesses are on the lookout for automation tools that are world class, agile, and keep quality at their core. CRMIT has developed such tools using cutting-edge technologies, distilling industry best practices and R&D.

    Self Service Portal
    With customers being so meticulous about regular updates and reliable access to their data, administrators cannot afford to walk a thin line. Surviving without a resource that tracks customer requirements for services available 24 x 7 can severely affect productivity. In such a scenario, CRMIT's Self Service Portal (SSP) is the best solution. It not only tracks the required customer data, but also allows companies to stay in tune with their employees, vendors, and stakeholders. One can directly sign up to become a CRMOD contact and SSP user, and one need not use the database, as operations and interactions are handled at run time. This is a fully configurable solution that tracks results periodically, making it easy for end users, and it offers better security and data visibility that enables users to progress smoothly.

    Quote and Order Management
    When dealing with quotes, contracts, and orders becomes complicated, Quote & Order Management works as a one-stop solution. CRMIT offers this tool for managing all of that information and for taking care of customer orders and service requirements. This CRM On Demand plug-in allows one to create a new quote or copy an existing one. Products can be added directly from the CRMOD product list, and pricing is calculated automatically. Quotes can be generated and mailed to external users in PDF, HTML, and XLS formats. This not only improves how quotes are managed, but also supports various billing and tax calculation features that make the work effortless.

    Report Scheduler
    When it comes to analyzing and providing statistics on the business processes currently running in an organization, one cannot depend on manual updates, which may be inaccurate or delayed. CRMIT provides a SaaS-based solution, Report Scheduler, that allows CRM users to schedule reports at chosen frequencies and then receive them as email attachments at the scheduled time. With this tool, administrators control the report scheduler and assign specific reports to specific users. Users can then log in and schedule any assigned report for viewing at particular intervals on a monthly, weekly, or daily basis. Additionally, users can copy the mail to external users and choose the preferred format. The best part is that sharing business data with third parties becomes easy, and users need not log into their CRMOD account to view reports. (A purely illustrative sketch of this scheduling pattern appears at the end of this post.)

    CRM On Demand Offline Solution
    CRM On Demand Offline is another CRM++ extension; it allows one to work in both online and offline modes, and synchronizing the two modes is straightforward. CRM OD Offline works as an automation tool that not only improves efficiency, but in most cases also doubles as a backup. It is readily available as a Windows application installer and requires users to be online only while validating and synchronizing.
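    To make the Report Scheduler description above concrete, here is a purely illustrative C# sketch of the pattern it describes: assign reports to recipients, then periodically run whatever is due and deliver it by email. Every name in this snippet is hypothetical; none of it is CRMIT's actual API.

```csharp
using System;
using System.Collections.Generic;

// Purely illustrative sketch of the "schedule a report, receive it as an
// email attachment" pattern described above. All names are hypothetical.
enum Frequency { Daily, Weekly, Monthly }

record ScheduledReport(string ReportName, string Recipient, Frequency Frequency, DateTime NextRun);

class ReportScheduler
{
    readonly List<ScheduledReport> schedules = new();

    // An administrator assigns a report to a user at a chosen frequency.
    public void Assign(string report, string recipient, Frequency frequency) =>
        schedules.Add(new ScheduledReport(report, recipient, frequency, DateTime.UtcNow));

    // Called periodically (e.g. by a timer or a cron-style host): run every
    // report that is due, deliver it, and advance its next-run time.
    public void RunPending(DateTime now, Action<ScheduledReport> deliver)
    {
        for (int i = 0; i < schedules.Count; i++)
        {
            if (schedules[i].NextRun > now) continue;
            deliver(schedules[i]); // render the report and email it as an attachment
            schedules[i] = schedules[i] with { NextRun = Next(schedules[i], now) };
        }
    }

    static DateTime Next(ScheduledReport s, DateTime now) => s.Frequency switch
    {
        Frequency.Daily   => now.AddDays(1),
        Frequency.Weekly  => now.AddDays(7),
        _                 => now.AddMonths(1),
    };
}

// Usage (hypothetical):
// var scheduler = new ReportScheduler();
// scheduler.Assign("Pipeline Summary", "manager@example.com", Frequency.Weekly);
// scheduler.RunPending(DateTime.UtcNow,
//     r => Console.WriteLine($"Emailing {r.ReportName} to {r.Recipient}"));
```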

    Read the article

< Previous Page | 136 137 138 139 140 141 142 143 144 145 146 147  | Next Page >