Search Results

  • NHibernate many-to-many mapping

    - by Scozzard
    Hi, I am having an issue with many-to-many mapping using NHibernate. Basically I have two classes in my object model (Scenario and Skill) mapping to three tables in my database (Scenario, Skill and ScenarioSkill). The ScenarioSkill table just holds the IDs of the Skill and Scenario tables (SkillID, ScenarioID). In the object model a Scenario has a couple of general properties and a list of associated skills (IList) that is obtained from the ScenarioSkill table. There is no associated IList of Scenarios for the Skill object. The mapping from Scenario and Skill through ScenarioSkill is a many-to-many relationship:

      Scenario * --- * ScenarioSkill * --- * Skill

    I have mapped out the lists as bags, as I believe this is the best option to use from what I have read. The mappings are as follows. Within the Scenario class:

      <bag name="Skills" table="ScenarioSkills">
        <key column="ScenarioID" foreign-key="FK_ScenarioSkill_ScenarioID" />
        <many-to-many class="Domain.Skill, Domain" column="SkillID" />
      </bag>

    And within the Skill class:

      <bag name="Scenarios" table="ScenarioSkills" inverse="true" access="noop" cascade="all">
        <key column="SkillID" foreign-key="FK_ScenarioSkill_SkillID" />
        <many-to-many class="Domain.Scenario, Domain" column="ScenarioID" />
      </bag>

    Everything works fine, except that when I try to delete a skill it cannot do so, because there is a reference constraint on the SkillID column of the ScenarioSkill table. Can anyone help me? I am using NHibernate 2 on a C# ASP.NET 3.5 web application solution.
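    One common way to unblock the delete (a sketch with assumed names, not a guaranteed fix): the cascade="all" on the inverse, noop-access bag on Skill does not clean up join rows when the Skill itself is deleted, so the association can be removed from the owning side first. Here "session" is an open ISession, and the HQL and method names are illustrative:

      // Remove the ScenarioSkill rows before deleting the Skill itself.
      // Scenario.Skills is the owning (non-inverse) side of the mapping,
      // so removing the skill from each bag deletes the join-table rows.
      public void DeleteSkill(NHibernate.ISession session, Skill skill)
      {
          var scenarios = session
              .CreateQuery("from Scenario s where :skill in elements(s.Skills)")
              .SetEntity("skill", skill)
              .List<Scenario>();

          foreach (var scenario in scenarios)
              scenario.Skills.Remove(skill);

          session.Delete(skill);  // no ScenarioSkill row references it now
          session.Flush();
      }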

  • C# - Fluent interface implementation help

    - by nettguy
    I am implementing the following piece of code using the Fluent Interface design in C# 3.0. The code is working fine.

      public interface ITrainable
      {
          ITrainable AddSkill(string _skill);
      }

      public interface ISearchSkill
      {
          ISearchSkill SearchSkill(SoftwareEngineer emp, string[] _skills);
      }

      public abstract class Person
      {
          public Person() { }
          protected string Name { get; set; }
      }

      public class SoftwareEngineer : Person, ITrainable
      {
          protected internal List<string> skillSet { get; set; }

          public SoftwareEngineer() { }

          public SoftwareEngineer(string name)
          {
              Name = name;
              skillSet = new List<string>();
          }

          public ITrainable AddSkill(string _skill)
          {
              skillSet.Add(_skill);
              return this;
          }
      }

      public class HRExecutive : Person, ISearchSkill
      {
          SoftwareEngineer _employee;

          public HRExecutive()
          {
              _employee = new SoftwareEngineer();
          }

          public ISearchSkill SearchSkill(SoftwareEngineer _employee, string[] skills)
          {
              this._employee = _employee;
              foreach (string _skill in skills)
              {
                  if (_employee.skillSet.Contains(_skill))
                  {
                      Console.WriteLine(Name + " is trained on " + _skill);
                  }
                  else
                  {
                      Console.WriteLine(Name + " is not trained on " + _skill);
                  }
              }
              return this;
          }
      }

    Execution:

      SoftwareEngineer emp1 = new SoftwareEngineer("JonSkeet");
      emp1.AddSkill("java").AddSkill("C#").AddSkill("F#");

      HRExecutive hr = new HRExecutive();
      hr.SearchSkill(emp1, new string[] { "java", "C#" })
        .SearchSkill(emp1, new string[] { "Oracle", "F#" });

    Question: I don't want the skillSet of SoftwareEngineer to be accessible from any arbitrary class; it should only be accessible to a limited set of classes. But protected internal List<string> skillSet { get; set; } is the only option (I think) I can declare in order to access skillSet from HRExecutive, and if I do so, any other class can still access it. How can I rewrite the code to prevent this?
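    One hedged option (a condensed sketch, not the only answer): keep the list private and expose only the narrow, read-only query that HRExecutive actually needs, so the mutable collection never leaves SoftwareEngineer. The HasSkill name below is invented for illustration:

      using System;
      using System.Collections.Generic;

      public interface ITrainable
      {
          ITrainable AddSkill(string skill);
      }

      public class SoftwareEngineer : ITrainable
      {
          // Private: no other class can reach the mutable list.
          private readonly List<string> skillSet = new List<string>();

          public string Name { get; private set; }

          public SoftwareEngineer(string name)
          {
              Name = name;
          }

          public ITrainable AddSkill(string skill)
          {
              skillSet.Add(skill);
              return this;
          }

          // The one narrow, read-only question other classes may ask.
          public bool HasSkill(string skill)
          {
              return skillSet.Contains(skill);
          }
      }

    HRExecutive.SearchSkill would then test emp.HasSkill(skill) instead of reaching into emp.skillSet. If a handful of trusted classes in another assembly genuinely need the raw list, an internal member combined with [InternalsVisibleTo] is the other common route.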

  • I'm graduating with a Computer Science degree but I don't feel like I know how to program.

    - by Wendy Peters
    I'm graduating with a Computer Science degree, but I see websites like Stack Overflow and search engines like Google and don't know where I'd even begin to write something like that. During one summer I worked as an iPhone developer, but I felt like I was mostly gluing together libraries that other people had written, with little understanding of what's happening under the hood. I'm trying to improve my knowledge by studying algorithms, but it is a long and painful process. I find algorithms difficult, and at the rate I am working through my book, a decade will have passed before I finish. Given my current situation, I've spent a month looking for work, but my skills (C, Python, Objective-C) are not so desirable in the local market, where C#, Java, and web development are in much higher demand. My GPA is OK (3.0) but it's not high enough to apply to the large companies or to return for graduate studies, and I don't have a good network of friends. Basically, I'm graduating with a Computer Science degree but I don't feel like I know how to program. I thought that joining a company and programming full-time would give me a chance to develop my skills and learn from those more experienced than myself, but I'm struggling to find work and am starting to get really frustrated. I am going to cast my net wider and look beyond the city I've grown up in, but what have other people in a similar situation tried to do?

  • In the meantime, to be or not to be ... productive!

    - by Jan Kuboschek
    I just moved back to Europe from the US after living there for 7 years. Apart from major adjustment issues, I'm currently looking for a job over here. I'm mainly interested in (IT) consulting, and since these jobs typically require programming knowledge, such as Java, I'm trying to think of something productive to write (perhaps to demo my skills) while I'm waiting for my interviews, which start in two weeks (folks here are a bit slower than in the US, apparently...). I graduated from college about a year and a half ago with a 4-year degree in international management/economics and about 3/4 of a 2-year degree in computer science finished. I've written my fair share of web software over the years, but nothing concrete that I could show, especially not in Java. Now, I've never had the problem of not having any idea what to write. Basic games I could write, but I'm not sure how well that'll come across when I walk into my interview and say "hey, I was bored. Take a look at my multiplayer Space Invaders game! Wanna try beating me??". Any thoughts? I browsed SourceForge the other day to find a nice little project to contribute to, but decided that I don't want to commit to someone else's project at this time. Any ideas, perhaps from someone who has been, or currently is, in a similar position, would be much appreciated. Oh, and lastly: instead of developing a program or two to demonstrate my skills, I could spend my time brushing up on UML and Perl. Any suggestions regarding that? Writing a demo vs. learning something new?

  • Mgmt wants to re-title my position: Any help...? [closed]

    - by JohnFlyTN
    Management here wants to re-title my position, since I'm doing quite a bit of different work than was originally planned. They want my input. After a quick glance over my skill set and job duties, what would we need to describe this position as? I'll just list things I'm at least proficient in; I will not list things I have only a passing knowledge of.

    About me: ~10 years of software development.
    Languages: C, C++, Perl, PHP, C#, Tcl, Unix shell scripting, SQL (T-SQL, PL/SQL)
    Systems: MS-DOS, Windows 3.1 to 7 for client, NT 4 to 2008 for server, OS/2, IBM MVS & z/OS, Linux (multiple distros), AIX

    Current position: I do all sorts of in-house software. The range is single-user apps to large systems spanning multiple OSes. One of the larger projects I've designed and coded is about 100k lines of C#, plus a database where I have been the sole designer and maintainer. I have near-total freedom to design as I see fit; restraints are usually budgetary.

    Skills required to replace me in my current role: Windows and Unix admin, database design, .NET up to 3.5 (C#, ASP.NET), C++, Perl, and good skills in designing large and efficient data processing systems.

    Given this small amount of information, what would you see this as being titled? (Is more information required to render a decision?)

  • complex regular expression task

    - by Don Don
    Hi, what regular expressions do I need to extract the section title(s) in a text file? So, in the following sample text, I'd like to extract "Communication and Leadership", "1.Self-Knowledge", "2. Humility" and "(3) Clear Thinking". Many thanks.

    Communication and Leadership

    True leaders understand that, rather than forcing their followers into a preconceived mold, their job is to motivate and organize followers to collectively accomplish goals that are in everyone's interests. The ability to communicate this to co-workers and followers is critical to the effectiveness of leadership.

    1.Self-Knowledge

    Superior leaders are able to devote their skills and energies to leadership of a group because they have worked through personal issues to the point where they know themselves thoroughly. A high level of self-knowledge is a prerequisite to effective communication skills, because the things that you communicate as a leader are coming from within.

    2. Humility

    This subversion of personal preference requires a certain level of humility. Although popular definitions of leaders do not always see them as humble, the most effective leaders actually are. This humility may not be expressed in self-effacement, but in a total commitment to the goals of the organization. Humility requires an understanding of one's own relative unimportance in comparison to larger systems.

    (3) Clear Thinking

    Clarity of thinking translates into clarity of communication. A leader whose goals or personal analysis is muddled will tend to deliver unclear or ambiguous directions to followers, leading to confusion and dissatisfaction. A leader with a clear mind who is not ambivalent about her purposes will communicate what needs to be done in a straightforward and unmistakable manner.
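    A hedged starting point (assuming each title sits on its own line and body paragraphs are not hard-wrapped into short lines; "sections.txt" is a stand-in file name): treat any short, capitalized line that optionally begins with "1.", "2. " or "(3) " style numbering as a title.

      using System;
      using System.IO;
      using System.Text.RegularExpressions;

      class SectionTitleExtractor
      {
          static void Main()
          {
              string text = File.ReadAllText("sections.txt");

              // Optional numbering ("1.", "2. ", "(3) "), then a capitalized,
              // reasonably short (< ~60 chars) line of words.
              var title = new Regex(
                  @"^(?:\(?\d+\)?\s*\.?\s*)?[A-Z][A-Za-z][A-Za-z '-]{0,58}\r?$",
                  RegexOptions.Multiline);

              foreach (Match m in title.Matches(text))
                  Console.WriteLine(m.Value.Trim());
          }
      }

    On the sample above this prints the four titles. It will misfire on any body line that happens to be short, capitalized and free of end punctuation, so the length cap and the numbering prefix are the knobs to tune.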

  • What choices do I have for a future in software development?

    - by user354892
    After graduating from college with a degree in mathematics and a minor in computer science, I took over a year to find my first programming job. I very much enjoy my work environment, but sometimes I feel like I'm not developing professionally as quickly as I am capable of. My company does mostly web programming. The project that I work on is basically a front-end for a SQL database. The entire project is (AFAIK) coded in VB.NET. I've been working on the back-end logic of the program. After some time on the job, we decided that designing web pages is not my cup of tea. It seems like the thing I spend more time on than anything else is searching the web and SO for information about poorly documented .NET and web APIs. This does not make me happy. Is this normal for a programming job? I don't think it is. It really hit me the other day when I saw a question about math asked here and I didn't know the answer; I feel like my knowledge is going stale! I was previously almost hired by another company where they design graphics-intensive battle simulations for the military. I sometimes wonder what my life might be like if I had that job. I feel like my math and problem-solving skills might have been a better fit at this other company. Within the wide field of software development, there are a number of directions in which to go, as evidenced by the huge variety of topics discussed here on Stack Overflow. I would like to feel like I'm going in the direction where I will make the most of my skills and have a satisfying career. Let me word this question as clearly as possible: given the wide breadth of the field of software development, how does it break down into sub-fields, and what are the considerations for deliberately developing a software development career? I do not want to manage my career by default.

  • Should I be worried if I don't get any internships by 3rd year's end?

    - by karamba
    I am at a mediocre college in some corner of India, about to complete my 3rd year of C.S. in a month and a half. I have no idea how to go about "finding an internship", as everyone seems to put it. Looking at online advice, I find that a primary way is to "use your contacts". I am sad to say that I don't have many friends (those I have, I am trying to get help from for all it's worth), and my family can't help me as they have no idea about the software industry. My college has no official facility for aiding students in this, and the few faculty members who had contacts in "whatever" part of the industry have favoured some students that they have personally come to know. (Though I hear that the "internships" they got involve stocking equipment in some small companies... still, it's something?) I'm getting nervous. I am considering just spending the coming summer refining my skills in C++ and beginning to learn MySQL and C#, both of which I have zero experience in. Maybe I'll work on my own project... like a library management system. Relative to those in my college, I think I am among the best programmers there, but that isn't saying much as a lot of students can barely write basic code. I have experience teaching myself C++ and DirectX 9, having created a Tetris clone, some basic 3D apps (bouncing balls), and a basic console-based, text-file-database-using library management system (which I plan to improve this summer). Is it alright if I spend my summer this way? Will I be able to get a job later on? I know I have to improve my social skills to get anywhere in life, and I will try, but say I am stuck like this till 4th year's end... will such self-study and online learning help me land a decent job? Perhaps, after I have learned a bit more, joining some open source project?

  • What to filter when providing very limited open WiFi to a small conference or meeting?

    - by Tim Farley
    Executive Summary

    The basic question is: if you have very limited-bandwidth WiFi to provide Internet for a small meeting of only a day or two, how do you set the filters on the router to avoid one or two users monopolizing all the available bandwidth? For folks who don't have the time to read the details below, I am NOT looking for any of these answers:

      - Secure the router and only let a few trusted people use it
      - Tell everyone to turn off unused services and generally police themselves
      - Monitor the traffic with a sniffer and add filters as needed

    I am aware of all of that. None are appropriate for reasons that will become clear. ALSO NOTE: there is already a question concerning providing adequate WiFi at large (500 attendees) conferences here. This question concerns SMALL meetings of less than 200 people, typically with less than half that using the WiFi - something that can be handled with a single home or small office router.

    Background

    I've used a 3G/4G router device to provide WiFi to small meetings in the past with some success. By small I mean single-room conferences or meetings on the order of a barcamp or Skepticamp or user group meeting. These meetings sometimes have technical attendees there, but not exclusively. Usually less than half (often only a third) of the attendees will actually use the WiFi. The maximum meeting size I'm talking about is 100 to 200 people. I typically use a Cradlepoint MBR-1000, but many other devices exist, especially all-in-one units supplied by 3G and/or 4G vendors like Verizon, Sprint and Clear. These devices take a 3G or 4G Internet connection and fan it out to multiple users using WiFi.

    One key aspect of providing net access this way is the limited bandwidth available over 3G/4G. Even with something like the Cradlepoint, which can load-balance multiple radios, you are only going to achieve a few megabits of download speed and maybe a megabit or so of upload speed. That's a best-case scenario; often it is considerably slower. The goal in most of these meeting situations is to allow folks access to services like email, web, social media, chat services and so on, so they can live-blog or live-tweet the proceedings, or simply chat online or otherwise stay in touch (with both attendees and non-attendees) while the meeting proceeds. I would like to limit the services provided by the router to just those services that meet those needs.

    Problems

    In particular I have noticed a couple of scenarios where particular users end up abusing most of the bandwidth on the router, to the detriment of everyone. These boil down to two areas:

    1. Intentional use. Folks looking at YouTube videos, downloading podcasts to their iPod, and otherwise using the bandwidth for things that really aren't appropriate in a meeting room where you should be paying attention to the speaker and/or interacting. At one meeting that we were live-streaming (over a separate, dedicated connection) via UStream, I noticed several folks in the room that had the UStream page up so they could interact with the meeting chat - apparently oblivious that they were wasting bandwidth streaming back video of something that was taking place right in front of them.

    2. Unintentional use. There are a variety of software utilities that will make extensive use of bandwidth in the background, which folks often have installed on their laptops and smartphones, perhaps without realizing. Examples:

      - Peer-to-peer downloading programs such as BitTorrent that run in the background
      - Automatic software update services. These are legion, as every major software vendor has their own, so one can easily have Microsoft, Apple, Mozilla, Adobe, Google and others all trying to download updates in the background.
      - Security software that downloads new signatures, such as anti-virus, anti-malware, etc.
      - Backup software and other software that "syncs" in the background to cloud services.

    For some numbers on how much network bandwidth gets sucked up by these non-web, non-email type services, check out this recent Wired article. Apparently web, email and chat all together are less than one quarter of Internet traffic now. If the numbers in that article are correct, by filtering out all the other stuff I should be able to increase the usefulness of the WiFi four-fold.

    Now, in some situations I've been able to control access using security on the router to limit it to a very small group of people (typically the organizers of the meeting). But that's not always appropriate. At an upcoming meeting I would like to run the WiFi without security and let anyone use it, because, as it happens, the 4G coverage at the meeting location is particularly excellent; in a recent test I got 10 megabits down at the meeting site. The "tell people to police themselves" solution mentioned at top is not appropriate because of (a) a largely non-technical audience and (b) the unintentional nature of much of the usage as described above. The "run a sniffer and filter as needed" solution is not useful because these meetings typically only last a couple of days, often only one day, and have a very small volunteer staff. I don't have a person to dedicate to network monitoring, and by the time we got the rules tweaked completely the meeting would be over.

    What I've Got

    First thing, I figured I would use OpenDNS's domain filtering rules to filter out whole classes of sites. A number of video and peer-to-peer sites can be wiped out using this. (Yes, I am aware that filtering via DNS technically leaves the services accessible - remember, these are largely non-technical users attending a 2 day meeting. It's enough.) I figured I would start with a set of category selections in OpenDNS's UI (screenshot omitted). I figure I will probably also block DNS (port 53) to anything other than the router itself, so that folks can't bypass my DNS configuration. A savvy user could get around this, because I'm not going to put a lot of elaborate filters on the firewall, but I don't care too much. Because these meetings don't last very long, it's probably not going to be worth the trouble. This should cover the bulk of the non-web traffic, i.e. peer-to-peer and video, if that Wired article is correct. Please advise if you think there are severe limitations to the OpenDNS approach.

    What I Need

    Note that OpenDNS focuses on things that are "objectionable" in some context or another. Video, music, radio and peer-to-peer all get covered. I still need to cover a number of perfectly reasonable things that we just want to block because they aren't needed in a meeting. Most of these are utilities that upload or download legit things in the background. Specifically, I'd like to know port numbers or DNS names to filter in order to effectively disable the following services:

      - Microsoft automatic updates
      - Apple automatic updates
      - Adobe automatic updates
      - Google automatic updates
      - Other major software update services
      - Major virus/malware/security signature updates
      - Major background backup services
      - Other services that run in the background and can eat lots of bandwidth

    I would also welcome any other suggestions you might have that would be applicable. Sorry to be so verbose, but I find it helps to be very, very clear on questions of this nature, and I already have half a solution with the OpenDNS thing.

  • Ajax Control Toolkit and Superexpert

    - by Stephen Walther
    Microsoft has asked my company, Superexpert Consulting, to take ownership of the development and maintenance of the Ajax Control Toolkit moving forward. In this blog entry, I discuss our strategy for improving the Ajax Control Toolkit.

    Why the Ajax Control Toolkit?

    The Ajax Control Toolkit is one of the most popular projects on CodePlex. In fact, some have argued that it is among the most successful open-source projects of all time. It consistently receives over 3,500 downloads a day (not weekends -- workdays). A mind-boggling number of developers use the Ajax Control Toolkit in their ASP.NET Web Forms applications.

    Why does the Ajax Control Toolkit continue to be such a popular project? The Ajax Control Toolkit fills a strong need in the ASP.NET Web Forms world. The Toolkit enables Web Forms developers to build richly interactive JavaScript applications without writing any JavaScript. For example, by taking advantage of the Ajax Control Toolkit, a Web Forms developer can add modal dialogs, popup calendars, and client tabs to a web application simply by dragging web controls onto a page. The Ajax Control Toolkit is not for everyone. If you are comfortable writing JavaScript then I recommend that you investigate using jQuery plugins instead of the Ajax Control Toolkit. However, if you are a Web Forms developer and you don't want to get your hands dirty writing JavaScript, then the Ajax Control Toolkit is a great solution.

    The Ajax Control Toolkit is Vast

    The Ajax Control Toolkit consists of 40 controls. That's a lot of controls (for the sake of comparison, jQuery UI consists of only 8 controls – those slackers!). Furthermore, developers expect the Ajax Control Toolkit to work on browsers both old and new. For example, people expect the Ajax Control Toolkit to work with Internet Explorer 6 and Internet Explorer 9 and every version of Internet Explorer in between. People also expect the Ajax Control Toolkit to work on the latest versions of Mozilla Firefox, Apple Safari, and Google Chrome. And, people expect the Ajax Control Toolkit to work with different operating systems. Yikes, that is a lot of combinations. The biggest challenge which my company faces in supporting the Ajax Control Toolkit is ensuring that it works across all of these different browsers and operating systems.

    Testing, Testing, Testing

    Because we wanted to ensure that we could easily test the Ajax Control Toolkit with different browsers, the very first thing that we did was to set up a dedicated testing server. The dedicated server -- named Schizo -- hosts 4 virtual machines so that we can run Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, and Internet Explorer 9 at the same time (we also use the virtual machines to host the latest versions of Firefox, Chrome, Opera, and Safari). The five developers on our team (plus me) can each publish to a separate FTP website on the testing server. That way, we can quickly test how changes to the Ajax Control Toolkit affect different browsers.

    QUnit Tests for the Ajax Control Toolkit

    Introducing regressions – introducing new bugs when trying to fix existing bugs – is the concern which prevents me from sleeping well at night. There are so many people using the Ajax Control Toolkit in so many unique scenarios that it is difficult to make improvements to the Ajax Control Toolkit without introducing regressions. In order to avoid regressions, we decided early on that it was extremely important to build good test coverage for the 40 controls in the Ajax Control Toolkit. We've been focusing a lot of energy on building automated JavaScript unit tests which we can use to help us discover regressions.

    We decided to write the unit tests with the QUnit test framework. We picked QUnit because it is quickly becoming the standard unit testing framework in the JavaScript world. For example, it is the unit testing framework used by the jQuery team, the jQuery UI team, and many jQuery UI plugin developers. We had to make several enhancements to the QUnit framework in order to test the Ajax Control Toolkit. For example, QUnit does not support tests which include postbacks. We modified the QUnit framework so that it works with IFrames so we could perform postbacks in our automated tests. At this point, we have written hundreds of QUnit tests. For example, we have written 135 QUnit tests for the Accordion control. The QUnit tests are included with the Ajax Control Toolkit source code in a project named AjaxControlToolkit.Tests. You can run all of the QUnit tests contained in the project by opening the Default.aspx page.

    Automating the QUnit Tests across Multiple Browsers

    Automated tests are useless if no one ever runs them. In order for the QUnit tests to be useful, we needed an easy way to run the tests automatically against a matrix of browsers. We wanted to run the unit tests against Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, Firefox, Chrome, and Safari automatically. Expecting a developer to run QUnit tests against every browser after every check-in is just too much to expect. It takes 20 seconds to run the Accordion QUnit tests. We are testing against 8 browsers. That would require the developer to open 8 browsers and wait for the results after each change in code. Too much work. Therefore, we built a JavaScript Test Server. Our JavaScript Test Server project was inspired by John Resig's TestSwarm project. The JavaScript Test Server runs our QUnit tests in a swarm of browsers (running on different operating systems) automatically. Here's how the JavaScript Test Server works:

      1. We created an ASP.NET page named RunTest.aspx that constantly polls the JavaScript Test Server for a new set of QUnit tests to run. After the RunTest.aspx page runs the QUnit tests, RunTest.aspx records the test results back to the JavaScript Test Server.
      2. We opened the RunTest.aspx page on instances of Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, FireFox, Chrome, Opera, Google, and Safari.

    Now that we have the JavaScript Test Server set up, we can run all of our QUnit tests against all of the browsers which we need to support with a single click of a button.

    A New Release of the Ajax Control Toolkit Each Month

    The Ajax Control Toolkit Issue Tracker contains over one thousand five hundred open issues and feature requests, so we have plenty of work on our plates! At CodePlex, anyone can vote for an issue to be fixed. Originally, we planned to fix issues in order of their votes. However, we quickly discovered that this approach was inefficient. Constantly switching back and forth between different controls was too time-consuming. It takes time to re-familiarize yourself with a control. Instead, we decided to focus on two or three controls each month and really focus on fixing the issues with those controls. This way, we can fix sets of related issues and avoid the randomization caused by context switching. Our team works in monthly sprints. We plan to do another release of the Ajax Control Toolkit each and every month. So far, we have completed one release of the Ajax Control Toolkit, which was released on April 1, 2011. We plan to release a new version in early May.

    Conclusion

    Fortunately, I work with a team of smart developers. We currently have 5 developers working on the Ajax Control Toolkit (not full-time; they are also building two very cool ASP.NET MVC applications). All the developers who work on our team are required to have strong JavaScript, jQuery, and ASP.NET MVC skills. In the interest of being as transparent as possible about our work on the Ajax Control Toolkit, I plan to blog frequently about our team's ongoing work. In my next blog entry, I plan to write about the two Ajax Control Toolkit controls which are the focus of our work for the next release.

  • BNF – how to read syntax?

    - by Piotr Rodak
    A few days ago I read a post by Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it's not exciting or sexy. I noticed that many of the questions that appear on forums and other media arise from the pure fact that the people asking didn't bother to read and understand the manual – Books Online. Jen came up with a brilliant, concise acronym that describes very well the category of posts about Books Online – RTFM365. I take the liberty of tagging this post with the same acronym.

    I often come across questions of the type – 'Hey, I am trying to create a table, but I am getting an error'. The error often says that the syntax is invalid.

      1 CREATE TABLE dbo.Employees
      2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL,
      3 Employee_Name varchar(60)
      4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    The answer is usually (1), 'Ok, let me check it out.. Ah yes – you have to put the name of the DEFAULT constraint before the type of constraint:

      1 CREATE TABLE dbo.Employees
      2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
      3 Employee_Name varchar(60)
      4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    Why do many people stumble on syntax errors? Is the syntax poorly documented? No, the issue is that the correct syntax of the CREATE TABLE statement is documented very well in Books Online and is.. intimidating. Many people can be taken aback by the rather complex block of code that describes all the intricacies of the statement. However, I don't know a better way of defining the syntax of a statement or command. The notation that is used to describe syntax in Books Online is a form of Backus-Naur notation, sometimes called BNF for short. This is a notation that was invented around 50 years ago, and some say even earlier, around 400 BC – would you believe? Originally it was used to define the syntax of the, rather ancient now, ALGOL programming language (in the 1950s, not in ancient India). If you look closer at the definition of BNF, it turns out that the principles of this syntax are pretty simple. Here are a few bullet points:

      - italic_text is a placeholder for your identifier
      - <italic_text_in_angle_brackets> is a definition which is described further
      - [everything in square brackets] is optional
      - {everything in curly brackets} is obligatory
      - everything | separated | by | the pipe operator is an alternative
      - ::= "assigns" a definition to an identifier

    Yes, it looks like these six simple points give you the key to understanding even the most complicated syntax definitions in Books Online. Books Online contains an article about syntax conventions – have you ever read it? Let's have a look at a fragment of the CREATE TABLE statement:

      1 CREATE TABLE
      2 [ database_name . [ schema_name ] . | schema_name . ] table_name
      3 ( { <column_definition> | <computed_column_definition>
      4 | <column_set_definition> }
      5 [ <table_constraint> ] [ ,...n ] )
      6 [ ON { partition_scheme_name ( partition_column_name ) | filegroup
      7 | "default" } ]
      8 [ { TEXTIMAGE_ON { filegroup | "default" } ]
      9 [ FILESTREAM_ON { partition_scheme_name | filegroup
      10 | "default" } ]
      11 [ WITH ( <table_option> [ ,...n ] ) ]
      12 [ ; ]

    Let's look at line 2 of the above snippet. This line uses rules 3 and 5 from the list, so you know that you can create a table specifying one of the following:

      - just the table name – the table will be created in the default user schema
      - schema name and table name – the table will be created in the specified schema
      - database name, schema name and table name – the table will be created in the specified database, in the specified schema
      - database name, .., table name – the table will be created in the specified database, in the default schema of the user.

    Note that this single line of the notation describes each of the naming schemes in a deterministic way. The 'optionality' of the schema_name element is nested within the database_name.. section. You can use either database_name and an optional schema name, or just the schema name – this is specified by the pipe character '|'. The error that the user gets with the execution of the first script fragment in this post is as follows:

      Msg 156, Level 15, State 1, Line 2
      Incorrect syntax near the keyword 'DEFAULT'.

    Ok, let's have a look at how to find out the correct syntax. Line number 3 of the BNF fragment above contains a reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to a notion described further in the code. And indeed, the very next fragment of BNF contains the syntax of the column definition:

      1 <column_definition> ::=
      2 column_name <data_type>
      3 [ FILESTREAM ]
      4 [ COLLATE collation_name ]
      5 [ NULL | NOT NULL ]
      6 [
      7 [ CONSTRAINT constraint_name ] DEFAULT constant_expression ]
      8 | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ]
      9 ]
      10 [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ]
      11 [ SPARSE ]

    Look at line 7 in the above fragment. It says that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with the [CONSTRAINT constraint_name] sequence. The name of the constraint is optional, but I strongly recommend you make the effort of coming up with a meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is like this:

      1 CREATE TABLE dbo.Employees
      2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
      3 Employee_Name varchar(60)
      4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    That is practically everything you should know about BNF. I encourage you to study the syntax definitions for various statements and commands in Books Online; you can find really interesting things hidden there.

    Technorati Tags: SQL Server, t-sql, BNF, syntax

    (1) No, my answer usually is a question – 'What error message? What does it say?'. You'd be surprised to know how many people think I can go through time and space and look at their screen at the moment they received the error.

  • 2 Days of Share & Point

    - by Mark Rackley
    Groovy man… SharePoint Saturday Ozarks is back for 2010, bigger and better than before. Join us for a far out time and learn more about SharePoint in one day than you could in a year from the man… Yes! SharePoint Saturday Ozarks is back! SharePoint Saturday Ozarks is the largest SharePoint conference in Arkansas, Southern Missouri, and the very north east tip of Oklahoma. Last year we had a great turn out with 20 speakers, 5 MVPs, and attendees coming from Arkansas, Texas, Oklahoma, Missouri, Kansas, Nebraska, Indiana, Ohio, Alabama, Michigan, and Washington.

    Hey Man… what's SharePoint Saturday anyway? Sounds like a conspiracy man…

    Not to worry, SharePoint Saturday is not an arm of the government bent on mind control or any attempt what-so-ever to bring you down man. SharePoint Saturday is a grass roots effort started by Michael Lotter (http://www.sharepointsaturday.org/pages/about.aspx). It is a FREE one day event where the best SharePoint speakers gather to present their love, hatred, and frustrations of SharePoint to those lucky individuals who attend. Lessons are learned, contacts are made, prizes are won, food is eaten, assorted beverages are consumed until the wee hours of the morning. SharePoint Saturday started with just a few sporadic one day events here and there. However, over the past year SharePoint Saturday has exploded and it's hard to find a weekend where there is NOT a SharePoint Saturday event happening in some corner of the globe. There are even occasions where there are two SharePoint Saturdays on the same day! Many people are pleasantly surprised at the caliber of speakers at these SharePoint Saturday events. For the most part, these speakers are more eloquent, practiced, and practical than the speakers you find at the major multi-day conferences. These guys aren't even paid to speak.. they do it out of love man…

    SharePoint Saturday Ozarks 2009 Alumni

    We had a star studded cast last year with many returning this year! Just check out the fun that they had… (Photo captions: John Ferringer – admin rockstar, I can still sense the awesomeness; SharePoint poster children Mike Watson & Laura Rogers; Lori Gowin spreading the SharePoint love; Eric Shupps is a little bit country and a little bit rock and roll; Cathy Dew, Sean McDonough, and JD Wade relaxing between gigs.) You can see real photos from last year's SharePoint Saturday Ozarks here: picasaweb.google.com/mrackley/SharePointSaturdayOzarks#

    What's new for SharePoint Saturday Ozarks 2010

    SharePoint Saturday Ozarks 2010 will totally blow your mind man. We're getting the band back together with many returning speakers and a few new faces. Joel Oleson will be speaking this year; maybe he'll grace us with his song stylings. Sadly, once again, Andrew Connell will not be able to attend SharePoint Saturday Ozarks, however he did feel the need to show his support in his own way. Prizes this year currently include books, software, a Zune HD, and much more!

    Wait Man… You said 2 days? I thought it was a one day event?

    Correct you are my herbal smelling friend… SharePoint Saturday Ozarks 2010 will spread the love an additional day this year. The first day will be all about the SharePoint love; on day 2 we will be taking a leisurely float down the Buffalo National River for those interested in a truly unique experience (no banjos allowed please). Here are the details:

    WHAT
    4 – 5 hour float down the Buffalo National River

    WHEN & WHERE
    Sunday June 13th. We will be leaving at 10am from the parking lot of:
    Gordon's Motel & Canoe Rental
    Old Highway 7
    Jasper, AR 72641
    (870) 446-5252
    Jasper is about 30 minutes south of Harrison, AR on Highway 7 South. You are responsible for bumming a ride to/from Gordon's Motel, but they will be shuttling us to/from the river and providing canoes and a boxed lunch.

    WHAT ELSE?
    The float trip is dependent on the weather of course; we won't be floating down the river in a thunderstorm. However, I planned SPS Ozarks around a time of year ideal for floating. We aren't talking class 5 rapids here, so you don't need any real skill, but you need to be okay with possibly tipping your canoe over once or twice. You can bring your own assorted beverages with you, but glass containers are not allowed on the river. I suggest a small cooler with extra snacks and drinks. Also bring clothing you can get wet in (these SharePoint people can get ornery).

    HOW DO I SIGN UP?
    When you register for SharePoint Saturday Ozarks, you will have the option to also sign up for the float trip. Seats are limited though! If you do not intend to go, please do not take someone else's place. The cost for the float trip will be about $35 per person (which you are responsible for unless we find a sponsor). The price includes shuttle to/from the river, canoe, life jackets, paddles, and boxed lunch.

    Far out man… how do I register???

    You can register for SharePoint Saturday Ozarks by going to http://spsozarks.eventbrite.com/. We are limited to 200 people for the conference and 50 people for the float trip, so register today before we are sold out. Lodging for SharePoint Saturday Ozarks will once again take place at the Hotel Seville: Annex Suites are available for $103.20.

    This is So Groovy.. How can I help?

    I'm glad you asked! We are still looking for a few sponsors and one or two more speakers. If you are interested please let me know! You can find out more information at http://www.sharepointsaturday.org/ozarks

    Hey… wait a minute…. what exactly IS SharePoint man???

    Come to SharePoint Saturday Ozarks and find out!! See you guys there!

  • SQLAuthority News – Memories at Anniversary of SQL Wait Stats Book

    - by pinaldave
    About a year ago, I had a very proud moment: I published my second book, SQL Server Wait Stats, with me as the primary author. It has been a long journey since then. The book got a great response and was widely accepted in the community. It was the first book of its kind written specifically on wait stats and performance. The book was based on my earlier month-long series written on the same subject, SQL Server Wait Stats. Today, on the anniversary of the book, lots of things come to mind; let me share a few here.

    Idea behind Blog Series

    A very common question I receive is why I wrote a 30 day series on wait stats. There were two reasons for it. 1) I have been working with SQL Server for a long time and have troubleshot hundreds of SQL Servers for performance tuning issues. It was a great experience and it taught me a lot of new things. I always documented my experience. After a while I found that I was able to completely rely on my own notes when I was troubleshooting any server. Right then I decided to document my experience for the community. 2) While working with wait stats there were a few things which I thought I knew well, as they seemed to be working. However, there was always a fear in the back of my mind: what happens if what I believed was incorrect and I had been on the wrong path all the time? There was only one way to get it validated: put my understanding out in front of the community and request further help to improve it. It worked, and it worked beautifully. I received plenty of conversations, emails and comments. I refined my content based on various conversations to make it more relevant and near-accurate. I guess the above two are the major reasons for beginning my journey on writing the wait stats blog series.

    Idea behind Book

    After writing the blog series I kept receiving a good number of requests to convert it into an eBook or a proper book, as reading blog posts is great but does not give a comprehensive understanding of the subject. The very common feedback from users who were beginning the subject was that they would prefer to read it in a structured form. After hearing this feedback for more than 4 months, I decided to write a book based on the blog posts. When I envisioned the book, I wanted to make sure it addressed the wait stats concepts from the fundamentals and filled the gaps in the blogs I wrote earlier.

    Rick Morelan and Joes 2 Pros Team

    I must acknowledge my co-author Rick Morelan for his unconditional support in writing this book. I had already authored one book before I published this one, and the experience of writing it was out of this world. Writing blog posts is much, much easier than writing books. The effort it takes to write a book is 100 times more, even when the content is ready. I could not have done it myself without the tremendous support of my co-author and the editor's team. We spent days and days researching and discussing various concepts covered in the book. When we were in doubt we reached out to experts, and we also did practical reproductions of the scenarios to validate the concepts and claims. After 3 months of continuous hard work we were able to get this book out in the community.

    September 1st – the lucky day

    Well, we had to select a day to publish the book. When the book was completed in the last week of August we felt very glad. We had all worked hard, and having a sample draft book in hand felt like holding a newborn baby. Every time one of my books is published I feel the same joy I had when my daughter was born; the feeling of holding a new book in hand is (almost) the same feeling as holding a newborn baby. I am excited. For me, September 1st has been the luckiest day in my life. My daughter Shaivi was born on September 1st. Since then every September 1st has been an excellent day and has taken me to the next step in life. I believe anything and everything I do on September 1st turns out to be successful and blessed. Rick and I had finished the book in the last week of August. We sent it to the publisher (printer) and asked him to take the book live as soon as possible. We did not decide on any date, as we wanted the book to get out as fast as it could. Interestingly enough, the publisher/printer selected September 1st for publishing the book. He published the book on September 1st, and I knew at that moment that this book would go to the next level.

    Book Model – The Most Beautiful Girl

    We were done with the book, but we had no budget left for marketing. Rick and I had a long conversation about how to spread the word so the book could reach many people. While we were talking about marketing, Rick came up with the idea that we should find the most beautiful girl around who would acknowledge our book and genuinely care for it. Rick asked me to find her, and it was not a difficult task: I am a father, and the most beautiful girl to me is my daughter. I just could not think of anyone other than my own daughter. I still do not know what Rick thought about this idea, but I had already made up my mind. You can see the detailed blog post here.

    The Fun Experiments – Book Signing Event

    We had lots of fun moments with this book. We have given away more books for free than we have actually sold. We held book signing events, contests, and just plain giveaways when we found people who could benefit from this book. There was never an intention to make money and get rich; we just wanted more and more people to know about this new concept and learn from it. Today when I look back at the earnings, there is nothing much we have earned if you talk about dollars. However, the best reward we have received is the satisfaction and love of the community. The number of emails and conversations we have received so far about this book is over a thousand. We had fun writing this book; it was indeed a very satisfying journey, and I have earned lots of friends while learning and exploring.

    Availability

    The book is one year old but still very relevant when it comes to performance tuning. It is available at various online book stores. If you have read the book, do let me know what you think of it. Amazon | Kindle | Flipkart | Indiaplaza

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority, SQLAuthority Book Review, T SQL, Technology

  • TFS, G.I. Joe and Under-doing

    If I were to rank the most consistently irritating parts of my work day, using TFS would come in first by a wide margin. Even repeated network outages this week seem like a pleasant reprieve from this monolithic beast. This is not a reflexive anti-Microsoft feeling; that attitude just wouldn't work for a consultant who does .NET development. It is also not an utter dismissal of TFS as worthless; I've seen people use it effectively on several projects. So why?

    I'll start with a laundry list of shortcomings: an out-of-the-box UI for work items that is insultingly bad, a source control system that is confoundingly fragile when handling merges, folder renames and long file names, the arcane XML wizardry necessary to customize a template, and a build system that adds an extra layer of oddness on top of msbuild. I'm sure my legion of readers will soon point out how I can work around all these issues, how this is fixed in TFS 2010 or with this add-in, and how once you have everything set up, you're fine. And they'd be right; any one of these problems could be worked around. If not dirty laundry, what else?

    I thought about it for a while, and came to the conclusion that TFS is so irritating to me because it represents a vision of software development that I find unappealing. To expand upon this, let's start with some wisdom from those great PSAs at the end of the G.I. Joe cartoons of the 80s: "Now you know, and knowing is half the battle." In software development, I'd go further and say knowing is more than half the battle. Understanding the dimensions of the problem you are trying to solve, the needs of the users, and the value that your software can provide is more than half the battle. Implementation of this understanding is not easy, but it is not even possible without this knowledge.

    Assuming we have a fixed amount of time and mental energy for any project, why does this spell trouble for TFS? If you think about what TFS is doing, it's offering you a huge array of options to track the day-to-day implementation of your project, from tasks, to code churn, to test coverage. All valuable metrics, but only in exchange for valuable time to get it all working. In addition, when you have a shiny toy like TFS, the temptation is to feel obligated to use it. So the push from TFS is to encourage a project manager and team to focus on process and metrics around process. You can get great visibility, and graphs to show your project stakeholders, but none of that is important if you are not implementing the right product. Not just unimportant: these activities can be harmful, as they drain your time and sap your creativity away from the rest of the project.

    To be more concrete, let's suppose your organization has invested the time to create a template for your projects and trained people in how to use it, so there is no longer a big investment of time for each project to get up and running. First, I'd challenge whether that template could be specific enough to be full-featured and still applicable to any project. Second, the very existence of this template would be an indication to a project manager that the success of their project was somehow directly related to fitting its management into this format. Again, while the capabilities are wonderful, the mirage is there: just get everything into TFS and your project will run smoothly.

    I'll close the loop on this first topic by proposing a thought experiment. Think of the projects you've worked on. How many times have you been chagrined to discover you've implemented the wrong feature, misunderstood how a feature should work, or just plain spent too much time on a screen that nobody uses? That sounds like a really worthwhile area to invest time in improving. How about going back to those projects and thinking about how many times you wished you had optimized the state-change flow of your tasks, or been embarrassed not to have a code churn report linked back to the latest changeset?

    With thanks to the Real American Heroes, I'll move on to a more current influence: the developers at 37signals and their philosophy towards software development. This philosophy, fully detailed in the books Getting Real and Rework, is a vision of software that under-does the competition. This is software that is deliberately limited in functionality in order to concentrate fully on making sure every feature that is there is awesome and needed. Why is this relevant? Well, in one of those fun-seeming paradoxes in life, constraints can be a spark for creativity. Think of Twitter, the small screen of an iPhone, the limitations of HTML for applications, the low memory limits of older or embedded systems. As long as there is some freedom within those constraints, amazing things emerge. For project management, some of the most respected people in the industry recommend using just index cards, pens and tape. They argue that with change the constant in software development, your process should be as limited (yet rigorous) as possible.

    Looking at TFS, this is not a system designed to under-do anybody. It is a big jumble of components and options, with every feature you could think of. Predictably, this means many basic functions are hard to use. For task management, many people just use an Excel spreadsheet linked up to TFS. Not a stirring endorsement of the tooling there. TFS as a whole would be far more appealing to me if there was less of it, but better. I'd cut 50% of the features to make the other half really amaze and inspire me. And that's really the heart of the matter. TFS has great promise and I want to believe it can work better. But ultimately it focuses your attention on a lot of stuff that doesn't really matter and then clamps down your creativity in a mess of forms and dialogs, obscuring what does.

    --- Relevant Links ---

    All those great G.I. Joe PSAs are on YouTube, including lots of mashed up versions. A simple Google search will get you on the right track.

  • South Florida Code Camp 2010 – VI – 2010-02-27

    - by Dave Noderer
    Catching up after our sixth code camp here in the Ft Lauderdale, FL area. Website at: http://www.fladotnet.com/codecamp. For the 5th time, DeVry University hosted the event which makes everything else really easy! Statistics from 2010 South Florida Code Camp: 848 registered (we use Microsoft Group Events) ~ 600 attended (516 took name badges) 64 speakers (including speaker idol) 72 sessions 12 parallel tracks Food 400 waters 600 sodas 900 cups of coffee (it was cold!) 200 pounds of ice 200 pizza's 10 large salad trays 900 mouse pads Photos on facebook Dave Noderer: http://www.facebook.com/home.php#!/album.php?aid=190812&id=693530361 Joe Healy: http://www.facebook.com/devfish?ref=mf#!/album.php?aid=202787&id=720054950 Will Strohl:http://www.facebook.com/home.php#!/album.php?aid=2045553&id=1046966128&ref=mf Veronica Gonzalez: http://www.facebook.com/home.php#!/album.php?aid=150954&id=672439484 Florida Speaker Idol One of the sessions at code camp was the South Florida Regional speaker idol competition. After user group level competitions there are five competitors. I acted as MC and score keeper while Ed Hill, Bob O’Connell, John Dunagan and Shervin Shakibi were judges. This statewide competition is being run by Roy Lawsen in Lakeland and the winner, Jeff Truman from Naples will move on to the state finals to be held at the Orlando Code Camp on 3/27/2010: http://www.orlandocodecamp.com/. Each speaker has 10 minutes. The participants were: Alex Koval Jeff Truman Jared Nielsen Chris Catto Venkat Narayanasamy They all did a great job and I’m working with each to make sure they don’t stop there and start speaking at meetings. Thanks to everyone involved! Volunteers As always events like this don’t happen without a lot of help! The key people were: Ed Hill, Bob O’Connell – DeVry For the months leading up to the event, Ed collects all of the swag, books, etc and stores them. He holds meeting with various DeVry departments to coordinate the day, he works with the students in the days  before code camp to stuff bags, print signs, arrange tables and visit BJ’s for our supplies (I go and pay but have a small car!). And of course the day of the event he is there at 5:30 am!! We took two SUV’s to BJ’s, i was really worried that the 36 cases of water were going to break his rear axle! He also helps with the students and works very hard before and after the event. Rainer Haberman – Speakers and Volunteer of the Year Rainer has helped over the past couple of years but this time he took full control of arranging the tracks. I did some preliminary work solicitation speakers but he took over all communications after that. We have tried various organizations around speakers, chair per track, central team but having someone paying attention to the details is definitely the way to go! This was the first year I did not have to jump in at the last minute and re-arrange everything. There were lots of kudo’s from the speakers too saying they felt it was more organized than they have experienced in the past from any code camp. Thanks Rainer! Ray Alamonte – Book Swap We saw the idea of a book swap from the Alabama Code Camp and thought we would give it a try. Ray jumped in and took control. The idea was to get people to bring their old technical books to swap or for others to buy. You got a ticket for each book you brought that you could then turn in to buy another book. If you did not have a ticket you could buy a book for $1. Net proceeds were $153 which I rounded up and donated to the Red Cross. 
    There is plenty going on in Haiti and Chile! I don't think we really got a count of how many books came in. In many cases the books barely hit the table before being picked up again. At the end we were left with a dozen books, which we donated to the DeVry library. A great success we will definitely do again!

    Jace Weiss / Rachelen Hut – Coffee and Snacks
    Wow, this was an eye opener. In past years a few of us would struggle to give some attention to coffee, snacks, etc. But it was always tenuous and we always ended up running out of coffee. In the past we have tried buying Dunkin Donuts coffee, renting urns, borrowing urns, etc. This year I actually purchased two 100-cup Westbend commercial brewers plus a couple of small urns (a 30 and a 60 cup, which we used for decaf). We got them both started early (although I forgot to push the on button on one!) and primed them with 10 boxes of Joe from Dunkin. Then Jace and Rachelen took over.. once a batch was brewed they would refill the boxes, keep the area clean, and at one point were filling cups. We never ran out of coffee and served a few hundred more than last year. We did look, but next year I'll get a large insulated (Gatorade-style) dispensing container. It all went very smoothly, and having help focused on that one area was a big win. Thanks Jace and Rachelen!

    Ken & Shirley Golding / Roberta Barbosa – Registration
    Ken & Shirley showed up and took over registration. This year we printed small name tags for everyone registered, which was great because it is much easier to remember someone's name when they are labeled! In any case it went the smoothest it has ever gone. All three were actively pulling people through the registration, answering questions, and directing them to bags and information very quickly. I did not see too big a line at any time. Thanks!!

    Scott Katarincic / Vishal Shukla – Website
    For the 3rd?? year in a row, Scott was in charge of the website, starting in August or September when I start on code camp. He handles all the requests and makes changes to the site and its administration. I think two years ago he wrote all of the backend administration; he tunes it and the website a bit, but things are pretty stable. The only thing I do is put up the sponsors. It is a big pressure off of me!! Thanks Scott! Vishal jumped into the web end this year and created a new Silverlight agenda page to replace the old Ajax page. We will continue to enhance this but it is definitely a good step forward! Thanks!

    Alex Funkhouser – T-shirts/Mouse pads/tables/sponsors
    Alex helps in many areas. He helps me bring in sponsors and handles all the logistics for t-shirts, sponsor tables and, this year, the mouse pads. He is also a key person in promoting the event, not to mention the after-after-party, which I did not attend and don't want to know much about!

    Students
    There were a number of student volunteers, but I don't have all of their names. Thanks to them – they stuffed bags, patrolled pizza and helped with moving things around.

    Sponsors
    We had a bunch of great sponsors which allowed us to feed people and give away a lot of great swag. Our major sponsors – DeVry, Microsoft (both DPE and UGSS), Infragistics, Telerik, SQL Share (End to End, SQL Saturdays), and Interclick – are very much appreciated. 
    The other sponsors were Applied Innovations (who also supply the code camp hosting), Ultimate Software (a great local SW company), Linxter (reliable cloud messaging we are lucky to have here!), Mediascend (a media startup), SoftwareFX (another local SW company we are happy to have back participating in CC), CozyRoc (if you do SSIS, check them out), Arrow Design (local DNN and Silverlight experts), Boxes and Arrows (a local SW consulting company) and Robert Half. One thing we did this year besides a t-shirt was a mouse pad. I like it because it will be around for a long time on many desks. After much investigation and years of using mouse pads, I've determined that the 1/8” fabric top is the best, and that is what we got!   So now I get a break for a few months before starting again!

    Read the article

  • Finding a person in the forest

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Finding a person in the forest, or: Limiting the AD results in the SharePoint People Picker

    There are times when we need to limit the SharePoint audience of certain farms, servers or site collections to a particular group of people. One of my experiences involved limiting access to US citizens, another to a particular location. Now, most of us – your humble servant included – are not Active Directory experts, but we must be able to handle the "audience restrictions" as required. So here is how it's done, in a nutshell.

    Important note: not all of this can be done in PowerShell (at least not yet)! There are no Windows PowerShell commands to configure People Picker. The stsadm command is:

    stsadm -o setproperty -pn peoplepicker-searchadcustomquery -pv ADQuery -url http://somethingOrOther

    Note the long-hyphenated property name. Now to filling in the ADQuery.

    LDAP queries in a nutshell

    Syntax
    LDAP is no older than SQL, and an LDAP query is actually a query against the LDAP database. LDAP attributes are the equivalent of database columns, so why do we have to learn a new query language? Beats me! But we must, so here it is. An LDAP query string is made of individual statements built with relational operators, including:

    = Equal
    <= Lower than or equal
    >= Greater than or equal
    ! Not
    * Wildcard
    memberOf – a group membership test

    Equal and memberOf are the most commonly used. Checking for absence uses the ! (not) and the * (wildcard).

    Example: (SN=Grant) – all whose last name (SurName) is Grant
    Example: (!(SN=Grant)) – all except Grant
    Example: (!(SN=*)) – all where there is no SurName, i.e. the SurName is absent (probably rappers)
    Example: (CN=MyGroup) – the Common Name is MyGroup
    Example: (GN=J*) – all the Given Names that start with J (JJ, Jane, Jon, John, etc.)

    The cryptic SN, CN, GN, etc. are attributes; more about them later. All queries are enclosed in parentheses: (Query). Complex queries are comprised of sets of conditions in AND or OR relations. AND is denoted by the ampersand (&) and OR is denoted by the vertical pipe (|). The general syntax is that of prefix (Polish) notation, where the operator precedes its operands, e.g. +ab is the sum of a and b. In an LDAP query, (&(A)(B)) will garner the objects for which both A and B are true, and (&(A)(B)(C)) the objects for which A, B and C are all true; there's no limit to the number of conditions. Likewise, (|(A)(B)) will garner the objects for which either A or B is true, and (|(A)(B)(C)) the objects for which at least one of A, B and C is true; again there's no limit to the number of conditions. More complex queries have both types of conditions, and the parentheses determine the order of operations.

    Attributes
    Now let's get into the SN, CN, GN, and other attributes of the query:

    SN – the SurName (last name)
    GN – the Given Name (first name)
    CN – the Common Name, usually the GN followed by the SN
    OU – an Organizational Unit such as a division, department, etc.
    DC – a Domain Component in the AD forest
    l – lower case 'L'; it stands for location. Jerusalem anybody? Or Katmandu.
    UPN – the User Principal Name, usually the first part of an email address. By nature it is unique in the forest. Most systems set the UPN to be the first initial followed by the SN of the person involved; some limit the total to 8 characters. If we have many a 'jsmith' we have to somehow distinguish them from each other.
    DN – the Distinguished Name, a name unique to the AD forest in which it lives. Usually it's a CN with some domain or group distinguishers. The DN is important in conjunction with the memberOf relation. Groups have a stricter requirement: each group has to have a unique name – its CN – regardless of its place. See more below.

    All of the attributes are case insensitive: CN, cn, Cn, and cN are identical.

    objectCategory is an attribute that requires special consideration. AD contains many different objects, like computers, printers, and of course people and groups. In the queries below, we're limiting our search to people (person).

    Putting it all together
    Let's get a list of all the Johns in the SPAdmin group of the Jerusalem OU in the local domain:

    (&(objectCategory=person)(GN=John)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))

    The value given to memberOf is the DN of the SPAdmin group, of which 'SPAdmin' is the CN – this is how the memberOf relation is used. Note that the memberOf relation does not allow wildcards (*) in the group name, and that you are limited to at most one 'OU' entry.

    Let's add Marvin Minsky to the search above:

    (|(&(objectCategory=person)(GN=John)(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))(CN=Marvin Minsky))

    Here I added the OR pipe at the beginning of the query and put the CN requirement for Minsky at the end. Note that if Marvin was already in the prior result, he's not going to be listed twice.

    One last note: you may see a dryer but more complete list of attributes, rules and examples in: http://www.tek-tips.com/faqs.cfm?fid=5667

    And finally (thus negating the claim that my previous note was last), to the best of my knowledge there are 3 more ways to limit the audience. One is to use the peoplepicker-searchadcustomfilter property with the same ADQuery; this works only in SP1 and above. The second is to limit the search to users within this particular site collection – the property name is peoplepicker-onlysearchwithinsitecollection and the value is yes (-pv yes). And the third is -pn peoplepicker-serviceaccountdirectorypaths -pv "OU=ou1,DC=dc1….."; again you are limited to at most one 'OU' phrase – no OU=ou1,OU=ou2…

    And now the real end. The main property discussed in this sprawling and seemingly endless monograph – peoplepicker-searchadcustomquery – is the most general way of getting the job done.
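    If you want to sanity-check a query string before burning it into a farm property, a quick console probe saves a lot of stsadm round trips. Here is a minimal C# sketch, assuming a machine joined to the (illustrative) dc=local domain and a reference to System.DirectoryServices; the filter and paths are this article's examples, not production values, and note that real AD schemas usually spell the attributes givenName/sn rather than the GN/SN shorthand used here:

        using System;
        using System.DirectoryServices;  // add a reference to System.DirectoryServices.dll

        class LdapFilterProbe
        {
            static void Main()
            {
                // The article's example filter - adjust to your own forest.
                string filter = "(&(objectCategory=person)(GN=John)" +
                                "(memberOf=cn=SPAdmin,ou=Jerusalem,dc=local))";

                using (DirectoryEntry root = new DirectoryEntry("LDAP://dc=local"))
                using (DirectorySearcher searcher = new DirectorySearcher(root, filter))
                {
                    // Print the common name of every match the filter returns.
                    foreach (SearchResult result in searcher.FindAll())
                    {
                        Console.WriteLine(result.Properties["cn"][0]);
                    }
                }
            }
        }

    If the probe returns the people you expect, the same string can go straight into the -pv argument of the property.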
    Here are a few examples of command lines that worked and some that didn't. Can you see why?

    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (Title=David)
    Operation completed successfully.

    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (!Title=David)
    Operation completed successfully.

    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName,OU=OUMid,OU=OUTop,DC=TopDC,DC=MidDC,DC=BottomDC)
    Command line error. Too many OUs

    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName)
    Operation completed successfully.

    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (DC=TopDC,DC=MidDC,DC=BottomDC)
    Operation completed successfully.

    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\BIN>stsadm -o setproperty -url http://somethingOrOther -pn peoplepicker-searchadcustomfilter -pv (OU=OURealName,DC=TopDC,DC=MidDC,DC=BottomDC)
    Operation completed successfully.

    That’s all folks!

    Read the article

  • Rockmelt, the technology adoption model, and Facebook's spare internet

    - by Roger Hart
    Regardless of how good it is, you'd have to have a heart of stone not to make snide remarks about Rockmelt. After all, on the surface it looks a lot like some people spent two years building a browser instead of just bashing out a Chrome extension over a wet weekend. It probably does some more stuff. I don't know for sure because artificial scarcity is cool, apparently, so the "invitation" is still in the post*. I may in fact never know for sure, because I'm not wild about Facebook sign-in as a prerequisite for anything. From the video, and some initial reviews, my early reaction was: I have a browser, I have a Twitter client; what on earth is this for? The answer, of course, is "not me". Rockmelt is, in a way, quite audacious. Oh, sure, on launch day it's Bay Area bar-chat for the kids with no lenses in their retro specs and trousers that give you deep-vein thrombosis, but it's not really about them. Likewise,  Facebook just launched Google Wave, or something. And all the tech snobbery and scorn packed into describing it that way is irrelevant next to what they're doing with their platform. Here's something I drew in MS Paint** because I don't want to get sued: (see: The technology adoption lifecycle) A while ago in the Guardian, John Lanchester dusted off the idiom that "technology is stuff that doesn't work yet". The rest of the article would be quite interesting if it wasn't largely about MySpace, and he's sort of got a point. If you bolt on the sentiment that risk-averse businessmen like things that work, you've got the essence of Crossing the Chasm. Products for the mainstream market don't look much like technology. Think for  a second about early (1980s ish) hi-fi systems, with all the knobs and fiddly bits, their ostentatious technophile aesthetic. Then consider their sleeker and less (or at least less conspicuously) functional successors in the 1990s. The theory goes that innovators and early adopters like technology, it's a hobby in itself. The rest of the humans seem to like magic boxes with very few buttons that make stuff happen and never trouble them about why. Personally, I consider Apple's maddening insistence that iTunes is an acceptable way to move files around to be more or less morally unacceptable. Most people couldn't care less. Hence Rockmelt, and hence Facebook's continued growth. Rockmelt looks pointless to me, because I aggregate my social gubbins with Digsby, or use TweetDeck. But my use case is different and so are my enthusiasms. If I want to share photos, I'll use Flickr - but Facebook has photo sharing. If I want a short broadcast message, I'll use Twitter - Facebook has status updates. If I want to sell something with relatively little hassle, there's eBay - or Facebook marketplace. YouTube - check, FB Video. Email - messaging. Calendaring apps, yeah there are loads, or FB Events. What if I want to host a simple web page? Sure, they've got pages. Also Notes for blogging, and more games than I can count. This stuff is right there, where millions and millions of users are already, and for what they need it just works. It's not about me, because I'm not in the big juicy area under the curve. It's what 1990s portal sites could never have dreamed of achieving. Facebook is AOL on speed, crack, and some designer drugs it had specially imported from the future. It's a n00b-friendly gateway to the internet that just happens to serve up all the things you want to do online, right where you are. Oh, and everybody else is there too. 
The price of having all this and the social graph too is that you have all of this, and the social graph too. But plenty of folks have more incisive things to say than me about the whole privacy shebang, and it's not really what I'm talking about. Facebook is maintaining a vast, and fairly fully-featured training-wheels internet. And it makes up a large proportion of the online experience for a lot of people***. It's the entire web (2.0?) experience for the early and late majority. And sure, no individual bit of it is quite as slick or as fully-realised as something like Flickr (which wows me a bit every time I use it. Those guys are good at the web), but it doesn't have to be. It has to be unobtrusively good enough for the regular humans. It has to not feel like technology. This is what Rockmelt sort of is. You're online, you want something nebulously social, and you don't want to faff about with, say, Twitter clients. Wow! There it is on a really distracting sidebar, right in your browser. No effort! Yeah - fish nor fowl, much? It might work, I guess. There may be a demographic who want their social web experience more simply than tech tinkering, and who aren't just getting it from Facebook (or, for that matter, mobile devices). But I'd be surprised. Rockmelt feels like an attempt to grab a slice of Facebook-style "Look! It's right here, where you already are!", but it's still asking the mature market to install a new browser. Presumably this is where that Facebook sign-in predicate comes in handy, though it'll take some potent awareness marketing to make it fly. Meanwhile, Facebook quietly has the entire rest of the internet as a product management resource, and can continue to give most of the people most of what they want. Something that has not gone un-noticed in its potential to look a little sinister. But heck, they might even make Google Wave popular.     *This was true last week when I drafted this post. I got an invite subsequently, hence the screenshot. **MS Paint is no fun any more. It's actually good in Windows 7. Farewell ironically-shonky diagrams. *** It's also behind a single sign-in, lending a veneer of confidence, and partially solving the problem of usernames being crummy unique identifiers. I'll be blogging about that at some point.

    Read the article

  • What Makes a Good Design Critic? CHI 2010 Panel Review

    - by jatin.thaker
    Author: Daniel Schwartz, Senior Interaction Designer, Oracle Applications User Experience Oracle Applications UX Chief Evangelist Patanjali Venkatacharya organized and moderated an innovative and stimulating panel discussion titled "What Makes a Good Design Critic? Food Design vs. Product Design Criticism" at CHI 2010, the annual ACM Conference on Human Factors in Computing Systems. The panelists included Janice Rohn, VP of User Experience at Experian; Tami Hardeman, a food stylist; Ed Seiber, a restaurant architect and designer; John Kessler, a food critic and writer at the Atlanta Journal-Constitution; and Larry Powers, Chef de Cuisine at Shaun's restaurant in Atlanta, Georgia. Building off the momentum of his highly acclaimed panel at CHI 2009 on what interaction design can learn from food design (for which I was on the other side as a panelist), Venkatacharya brought together new people with different roles in the restaurant and software interaction design fields. The session was also quite delicious -- but more on that later. Criticism, as it applies to food and product or interaction design, was the tasty topic for this forum and showed that strong parallels exist between food and interaction design criticism. Figure 1. The panelists in discussion: (left to right) Janice Rohn, Ed Seiber, Tami Hardeman, and John Kessler. The panelists had great insights to share from their respective fields, and they enthusiastically discussed as if they were at a casual collegial dinner. John Kessler stated that he prefers to have one professional critic's opinion in general than a large sampling of customers, however, "Web sites like Yelp get users excited by the collective approach. People are attracted to things desired by so many." Janice Rohn added that this collective desire was especially true for users of consumer products. Ed Seiber remarked that while people looked to the popular view for their target tastes and product choices, "professional critics like John [Kessler] still hold a big weight on public opinion." Chef Powers indicated that chefs take in feedback from all sources, adding, "word of mouth is very powerful. We also look heavily at the sales of the dishes to see what's moving; what's selling and thus successful." Hearing this discussion validates our design work at Oracle in that we listen to our users (our diners) and industry feedback (our critics) to ensure an optimal user experience of our products. Rohn considers that restaurateur Danny Meyer's book, Setting the Table: The Transforming Power of Hospitality in Business, which is about creating successful restaurant experiences, has many applicable parallels to user experience design. Meyer actually argues that the customer is not always right, but that "they must always feel heard." Seiber agreed, but noted "customers are not designers," and while designers need to listen to customer feedback, it is the designer's job to synthesize it. Seiber feels it's the critic's job to point out when something is missing or not well-prioritized. In interaction design, our challenges are quite similar, if not parallel. Software tasks are like puzzles that are in search of a solution on how to be best completed. As a food stylist, Tami Hardeman has the demanding and challenging task of presenting food to be as delectable as can be. To present food in its best light requires a lot of creativity and insight into consumer tastes. 
    It's no doubt then that this former fashion stylist came up with the ultimate catch phrase to capture the emotion that clients want to draw from their users: "craveability." The phrase was a hit with the audience and panelists alike. Sometime later in the discussion, Seiber remarked, "designers strive to apply craveability to products, and I do so for restaurants in my case." Craveability is also very applicable to interaction design. Creating straightforward and smooth workflows for users of Oracle Applications is a primary goal for my colleagues. We want our users to really enjoy working with our products, which make them more efficient and better at their jobs. That's our "craveability." Patanjali Venkatacharya asked the panel, "if a design's "craveability" appeals to some cultures but not to others, then what is the impact to the food or product design process?" Rohn stated that "taste is part nature and part nurture" and that the design must take the full context of a product's usage into consideration. Kessler added, "good design is about understanding the context" that the experience necessitates. Seiber remarked how important seat comfort is for diners and how the quality of seating will add so much to the complete dining experience. Sometimes if these non-food factors are not well executed, they can also take away from an otherwise pleasant dining experience. Kessler recounted a time when he was dining at a restaurant that actually had very good food, but the photographs hanging on all the walls did not fit in with the overall décor and created a negative overall dining experience. While the tastiness of the food is critical to a restaurant's success, it is a captivating complete user experience, as in interaction design, which will keep customers coming back and ultimately make the restaurant a hit. Figure 2. Patanjali Venkatacharya enjoyed the Sardinian flatbread salad. As a surprise, Chef Powers brought out a signature dish from Shaun's restaurant for all the panelists to sample and critique. The Sardinian flatbread dish showcased Atlanta's taste for fresh and local produce and cheese at its finest as a salad served on a crispy, flavorful flatbread. Hardeman said it could be photographed from any angle, a high compliment coming from a food stylist. Seiber really enjoyed the colors that the dish brought together and thought it would be served very well in a casual restaurant on a summer's day. The panel really appreciated the taste and quality of the different components and how the rosemary brought all the flavors together. Seiber remarked that "a lot of effort goes into the appearance of simplicity." Rohn indicated that the same notion holds true with software user interface design. A tremendous amount of work goes into crafting straightforward interfaces, including user research, prototyping, design iterations, and usability studies. Design criticism for food and software interfaces clearly shares many similarities. Both areas value expert opinions and user feedback. Both areas understand the importance of great design needing to work well in its context. Last but not least, both food and interaction design criticism value "craveability" and how having users excited about experiencing and enjoying the designs is an important goal. Now if we can just improve the taste of software user interfaces, people may choose to dine on their enterprise applications over a fresh organic salad.

    Read the article

  • Reg Gets a Job at Red Gate (and what happens behind the scenes)

    - by red(at)work
    Mr Reg Gater works at one of Cambridge’s many high-tech companies. He doesn’t love his job, but he puts up with it because... well, it could be worse. Every day he drives to work around the Red Gate roundabout, wondering what his boss is going to blame him for today, and wondering if there could be a better job out there for him. By late morning he already feels like handing his notice in. He got the hacky look from his boss for being 5 minutes late, and then they ran out of tea. Again. He goes to the local sandwich shop for lunch, and picks up a Red Gate job menu and a Book of Red Gate while he’s waiting for his order. That night, he goes along to Cambridge Geek Nights and sees some very enthusiastic Red Gaters talking about the work they do; it sounds interesting and, of all things, fun. He takes a quick look at the job vacancies on the Red Gate website, and an hour later realises he’s still there – looking at videos, photos and people profiles. He especially likes the Red Gate’s Got Talent page, and is very impressed with Simon Johnson’s marathon time. He thinks that he’d quite like to work with such awesome people. It just so happens that Red Gate recently decided that they wanted to hire another hot shot team member. Behind the scenes, the wheels were set in motion: the recruitment team met with the hiring manager to understand exactly what they’re looking for, and to decide what interview tests to do, who will do the interviews, and to kick-start any interview training those people might need. Next up, a job description and job advert were written, and the job was put on the market. Reg applies, and his CV lands in the Recruitment team’s inbox and they open it up with eager anticipation that Reg could be the next awesome new starter. He looks good, and in a jiffy they’ve arranged an interview. Reg arrives for his interview, and is greeted by a smiley receptionist. She offers him a selection of drinks and he feels instantly relaxed. A couple of interviews and an assessment later, he gets a job offer. We make his day and he makes ours by accepting, and becoming one of the 60 new starters so far this year. Behind the scenes, things start moving all over again. The HR team arranges for a “Welcome” goodie box to be whisked out to him, prepares his contract, sends an email to Information Services (Or IS for short - we’ll come back to them), keeps in touch with Reg to make sure he knows what to expect on his first day, and of course asks him to fill in the all-important wiki questionnaire so his new colleagues can start to get to know him before he even joins. Meanwhile, the IS team see an email in SupportWorks from HR. They see that Reg will be starting in the sales team in a few days’ time, and they know exactly what to do. They pull out a new machine, and within minutes have used their automated deployment software to install every piece of software that a new recruit could ever need. They also check with Reg’s new manager to see if he has any special requirements that they could help with. Reg starts and is amazed to find a fully configured machine sitting on his desk, complete with stationery and all the other tools he’ll need to do his job. He feels even more cared for after he gets a workstation assessment, and realises he’d be comfier with an ergonomic keyboard and a footstool. They arrive minutes later, just like that. His manager starts him off on his induction and sales training. 
Along with job-specific training, he’ll also have a buddy to help him find his feet, and loads of pre-arranged demos and introductions. Reg settles in nicely, and is great at his job. He enjoys the canteen, and regularly eats one of the 40,000 meals provided each year. He gets used to the selection of teas that are available, develops a taste for champagne launch parties, and has his fair share of the 25,000 cups of coffee downed at Red Gate towers each year. He goes along to some Feel Good Fund events, and donates a little something to charity in exchange for a turn on the chocolate fountain. He’s looking a little scruffy, so he decides to get his hair cut in between meetings, just in time for the Red Gate birthday company photo. Reg starts a new project: identifying existing customers to up-sell to new bundles. He talks with the web team to generate lists of qualifying customers who haven’t recently been sent marketing emails, and sends emails out, using a new in-house developed tool to schedule follow-up calls in CRM for the same group. The customer responds, saying they’d like to upgrade but are having a licensing problem – Reg sends the issue to Support, and it gets routed to the web team. The team identifies a workaround, and the bug gets scheduled into the next maintenance release in a fortnight’s time (hey; they got lucky). With all the new stuff Reg is working on, he realises that he’d be way more efficient if he had a third monitor. He speaks to IS and they get him one - no argument. He also needs a test machine and then some extra memory. Done. He then thinks he needs an iPad, and goes to ask for one. He gets told to stop pushing his luck. Some time later, Reg’s wife has a baby, so Reg gets 2 weeks of paid paternity leave and a bunch of flowers sent to his house. He signs up to the childcare scheme so that he doesn’t have to pay National Insurance on the first £243 of his childcare. The accounts team makes it all happen seamlessly, as they did with his Give As You Earn payments, which come out of his wages and go straight to his favorite charity. Reg’s sales career is going well. He’s grateful for the help that he gets from the product support team. How do they answer all those 900-ish support calls so effortlessly each month? He’s impressed with the patches that are sent out to customers who find “interesting behavior” in their tools, and to the customers who just must have that new feature. A little later in his career at Red Gate, Reg decides that he’d like to learn about management. He goes on some management training specially customised for Red Gate, joins the Management Book Club, and gets together with other new managers to brainstorm how to get the most out of one to one meetings with his team. Reg decides to go for a game of Foosball to celebrate his good fortune with his team, and has to wait for Finance to finish. While he’s waiting, he reflects on the wonderful time he’s had at Red Gate. He can’t put his finger on what it is exactly, but he knows he’s on to a good thing. All of the stuff that happened to Reg didn’t just happen magically. We’ve got teams of people working relentlessly behind the scenes to make sure that everyone here is comfortable, safe, well fed and caffeinated to the max.

    Read the article

  • Pet Peeves with the Windows Phone 7 Marketplace

    - by Bil Simser
    Have you ever noticed how some things just gnaw at your very being? This is the case with the WP7 marketplace, the Zune software, and the things that drive me batshit crazy with a side of fries. To go. I wanted to share.

    XBox Live is Not the Centre of the Universe
    Okay, it's fine that the Zune software has an XBox Live tag for games so you can see them clearly, but do we really need to have it shoved down our throats? On every click? Click on Games in the marketplace: the first thing that it defaults to on the filters on the right is XBox Live. Okay. Fine. However, if you change it (say to Paid) and then click onto a title, when you come back from that title is the filter still set to Paid? No. It's back to XBox Live again. Really? Give us a break. If you change to any filter on any other genre then click on the selected title, it doesn't revert back to anything. It stays on the selection you picked. Let's be fair here. The Games genre should behave just like every other one. If I pick Paid, then when I come back to the list please remember that.

    Double Dipping
    On the subject of XBox Live titles, Microsoft (and developers who have an agreement with Microsoft to produce Live titles, which generally rules out indie game developers) is double dipping with regards to exposure of their titles. Here's the Puzzle and Trivia Game section on the Marketplace for XBox Live titles, and here's the same category filtered on Paid titles: see the problem? Two indie titles while the rest are XBox Live ones. So while XBL has its filter, they also get to showcase their wares in the Paid and Free filters as well. If you're going to have an XBox Live filter then use it, and stop pushing down indie titles until they're off the screen (on some genres this is already the case). Free and Paid titles should be just that, and not include XBox Live ones. If you're really stoked that people can't find the Free XBox Live titles vs. the paid ones, then create a Free XBox Live filter and a Paid XBox Live filter. I don't think we would mind much.

    Whose Trial is it Anyways?
    You might notice apps in the marketplace with titles like "My Fart App Professional Lite" or "Silicon Lamb Spleen Builder Free". When you submit an app to the marketplace it can either be free or paid. If it's a paid app, you also have the option to submit it with Trial capabilities. It's up to you to decide what you offer in the trial version, but trial versions can be purchased from within the app, so after someone tries out your app (for free) and wants to unlock the Super Secret Obama Spy Ring Level, they can just go to the marketplace from your app (if you built that functionality in) and upgrade to the paid version.
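    For what it's worth, the plumbing for that upgrade path is only a few lines on the phone. Here is a minimal, hedged sketch using the standard Microsoft.Phone.Marketplace and Microsoft.Phone.Tasks types; the wrapper class and method names are invented for illustration:

        using Microsoft.Phone.Marketplace;  // LicenseInformation
        using Microsoft.Phone.Tasks;        // MarketplaceDetailTask

        public static class TrialGate
        {
            private static readonly LicenseInformation License = new LicenseInformation();

            // True while the user is running the trial version of the app.
            public static bool IsTrial()
            {
                return License.IsTrial();
            }

            // Call from the "Buy now" prompt on a locked feature or level.
            public static void ShowUpgradePage()
            {
                if (License.IsTrial())
                {
                    // With no ContentIdentifier set, this opens the current app's
                    // own marketplace page, where the user can purchase it.
                    new MarketplaceDetailTask().Show();
                }
            }
        }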
    However, it creates a rift of sorts when it comes to visibility. Some developers go the route of the paid app with a trial version; others decide to submit two apps instead of one. One app is the "Free" or "Lite" version and the other is the paid version. Why go to the hassle of submitting two apps when you can just create a trial version in the same app? Again, visibility. There's no way to tell Paid apps with Trial versions from ones without (it's an option, you don't have to provide trial versions, although I think it's a good idea). However, there is a way to see the Free apps separately from the Paid ones, so some developers submit the two apps and have the Free version link to the paid one (again through the Marketplace tasks in the API). What we as developers need for visibility is a new filter: Trial. That's it. It would simply filter on Paid apps that have trial capabilities and surface those apps just like the free ones. If Microsoft added this filter to the marketplace, it would eliminate the need for people to submit their "Free" and "Lite" versions and make it easier for the developer, who would not have to maintain two systems. I mean, is it really that hard? It can't be any more difficult than the XBox Live filter that's already there.

    Location is Everything
    The last thing on my bucket list is about location. When I launch Zune I'm running in my native location setting, Canada. What's great is that I navigate to the Travel Tools section where I have one of my apps and behold the splendour that I see: there are my apps in the number 1 and number 4 slots for top selling in that category. I show it to my wife to make up for the sleepless nights writing this stuff, and we dance around and celebrate. Then I change my location on my operating system to United States and re-launch Zune. WTF? My flight app has slipped to the 10th spot (I'm only showing 4 across here out of the 7 in Zune) and my border check app that was #1 is now in the 32nd spot! End of celebration. It's not only relevance that's at stake here: I value the comments people make on my apps, as do most developers. I want to respond to them and show them that I'm listening. The next version of my border app will provide multiple camera angles. However, when I'm running in my native Canada location, I only see two reviews. Changing over to United States, I see fourteen! While there are tools out there to provide you with a unified view, I shouldn't have to rely on them. My own Zune desktop software should allow me to see everything. I realize that some developers will submit an app and only target it for some locations, and that's their choice. However, I shouldn't have to jump through hoops to see what apps are ahead of mine, or to see people's comments and ratings. Another proposal: either unify the marketplace (i.e. when I'm looking at it, show me everything combined) or let me choose a filter. I think the first option might be difficult, as you're trying to average out top selling apps across all markets and have to deal with some apps that have been omitted from some markets. Although I think you could come up with a set of use cases that would handle that, maybe that's too much work. At the very least, let us developers view the markets in a drop down or something from within the Zune desktop. Having to shut down Zune, change our location, and re-launch Zune to see other perspectives is just too onerous.

    A Call to Action
    This is just one man's opinion. Do you agree? Disagree? Feel hungry for a bacon sandwich? Let everyone know via the comments below. Perhaps someone from Microsoft will be reading and will take some of these ideas under advisement. Maybe not, but at least let's get the word out that we really want to see some change. Egypt can do it, why not WP7 developers!

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is correct, but implementing it is not as easy as it seems. Similarly, most people treat the lookup component as just a component that looks up additional information; they pay little attention to it and eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) explains how to configure the lookup component's cache mode well.

    In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast. Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection will determine whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages, and in some cases, incorrect results.

    Full Cache
    The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. As the name implies, full cache mode will cause the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS. The most commonly used cache mode is the full cache setting, and for good reason. The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution.

    There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues – memory pressure in particular – when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there's going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case sensitive, depending on collation).
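    To make that gotcha concrete, here is a small, hedged C# illustration of the ordinal comparison SSIS effectively performs against its full cache; the values are invented:

        using System;

        class CacheComparisonDemo
        {
            static void Main()
            {
                string pipelineValue  = "acme corp ";  // lower case, trailing space
                string referenceValue = "ACME CORP";   // as cached from the lookup query

                // Full cache mode behaves like a .NET ordinal comparison: no match here.
                Console.WriteLine(pipelineValue == referenceValue);  // False

                // What a case-insensitive collation would have matched, once trimmed.
                Console.WriteLine(string.Equals(pipelineValue.Trim(), referenceValue,
                                                StringComparison.OrdinalIgnoreCase));  // True
            }
        }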
    There's a relatively easy workaround in which you can use the UPPER() or LOWER() function in the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation. Again, neither of these presents a reason to avoid full cache mode, but they should be used to determine whether full cache mode should be used in a given situation. Full cache mode is ideally useful when one or all of the following conditions exist:

    The size of the reference data set is small to moderately sized
    The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable
    Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data

    Partial Cache
    When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value will be retrieved individually from the specified source, and then cached. To be clear, this is a row-by-row lookup for each distinct key value(s). This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode. Using partial cache mode is ideally suited for the conditions below:

    The size of the data in the pipeline (more specifically, the number of distinct key column values) is relatively small
    The size of the lookup data is too large to effectively store in cache
    The lookup source is well indexed to allow for fast retrieval of row-by-row values

    No Cache
    As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set will require a query against the lookup source. Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused. In the real world, I don't see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful. As such, it's critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline. I would recommend considering the no cache mode only when all of the below conditions are true:

    The reference data set is too large to reasonably be loaded into SSIS memory
    The pipeline data set is small and is not expected to grow
    There are expected to be very few or no duplicates of the key value(s) in the pipeline data set (i.e., there would be no benefit from caching these values)

    Conclusion
    The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.
    Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I'm yet to find someone who doesn't have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don't consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It's wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I'm not suggesting that people don't tune their queries – there's plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it's not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let's look at a very simple query as an example, using the AdventureWorks database: We're picking the different Weight values out of the Product table, and it's doing this by scanning the table and doing a Sort. It's a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you've investigated how to tune an SSIS flow, then you'll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I'm using a larger data set for this, because my small list of Weights won't demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It's also blocking. It won't let data through until it's seen all of it. If you weren't okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it's being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I'm taking some liberties here, because the two queries aren't the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this.

    OPTION (FAST 10000)

    This hint tells the optimizer to choose a plan that is optimised for returning the first 10,000 rows quickly – the default SSIS buffer size. And the effect can be quite significant. 
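    As a concrete illustration of a source query carrying the hint, here is a hedged C# sketch of the kind of statement you would put behind an OLE DB or ADO.NET source; the connection string is invented, while the table and hint come from the article's AdventureWorks example:

        using System;
        using System.Data.SqlClient;

        class FastHintSource
        {
            static void Main()
            {
                // Illustrative connection string - point it at your own AdventureWorks.
                const string connectionString =
                    "Server=.;Database=AdventureWorks;Integrated Security=true";

                // DISTINCT normally costs a blocking Distinct Sort; FAST 10000 asks the
                // optimizer for a plan that streams the first buffer-load of rows quickly.
                const string query =
                    "SELECT DISTINCT Weight FROM Production.Product OPTION (FAST 10000);";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (SqlDataReader reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine(reader.IsDBNull(0) ? "(null)" : reader[0].ToString());
                        }
                    }
                }
            }
        }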
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
    You look for the numbers on the Data Flow, and watch the operators turn Yellow and Green. Without the hint, I'd sit there for a minute; with the hint, just ten seconds. You can imagine which one I preferred. Adding the hint felt like a magic wand had been waved across the query to make it run several times faster. That wasn't the case at all – but to SSIS, it felt like it.
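    To show the shape of the pattern, here's a hedged sketch of attaching the hint to a larger, sorted extraction query; the table and column names are hypothetical stand-ins, not the ones from the system described above:

        SELECT t.TransactionID, t.ProductID, t.TransactionDate, t.Quantity
        FROM dbo.BigTransactionTable AS t      -- hypothetical table
        ORDER BY t.TransactionDate             -- SSIS needed the data sorted
        OPTION (FAST 10000);                   -- optimise for the first SSIS buffer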


  • Routes on a sphere surface - Find geodesic?

    - by CaNNaDaRk
    I'm working with some friends on a browser-based game where people can move on a 2D map. It's been almost 7 years and people still play the game, so we are thinking of a way to give them something new. From the beginning, the game map has been a limited plane, and people could move from (0, 0) to (MAX_X, MAX_Y) in quantized X and Y increments (just imagine it as a big chessboard). We believe it's time to give it another dimension, so a couple of weeks ago we began to wonder how the game could look with other mappings:

    - Unlimited plane with continuous movement: this could be a step forward, but I'm still not convinced.
    - Toroidal world (continuous or quantized movement): honestly, I've worked with a torus before, but this time I want something more...
    - Spherical world with continuous movement: this would be great!

    What we want: users' browsers are given a list of coordinates like (latitude, longitude) for each object on the spherical surface map; browsers must then render them on the user's screen inside a web element (canvas, maybe? this is not a problem). When people click on the plane, we convert the (mouseX, mouseY) to (lat, lng) and send it to the server, which has to compute a route from the user's current position to the clicked point.

    What we have: we began writing a Java library with many useful maths routines for working with rotation matrices, quaternions, Euler angles, translations, etc. We put it all together and created a program that generates sphere points, renders them, and shows them to the user inside a JPanel. We managed to catch clicks and translate them to spherical coordinates, and to provide some other useful features like view rotation, scaling, translation, etc. What we have now is like a little (very little indeed) engine that simulates client and server interaction: the client side shows points on the screen and catches other interactions, while the server side renders the view and does other calculations, like interpolating the route between the current position and the clicked point.

    Where is the problem? Obviously, we want the shortest path when interpolating between the two route points. We use quaternions to interpolate between two points on the surface of the sphere, and this seemed to work fine until I noticed that we weren't getting the shortest path on the sphere's surface. We thought the problem was that the route was calculated as the sum of two rotations, about the X and Y axes, so we changed the way we calculate the destination quaternion: we get a third angle (the first is latitude, the second is longitude, the third is the rotation about the vector which points toward our current position), which we called orientation. Now that we have the "orientation" angle, we rotate the Z axis and then use the resulting vector as the rotation axis for the destination quaternion (you can see the rotation axis in grey).

    What we get is the correct route (you can see it lies on a great circle), but we get this ONLY if the starting route point is at latitude, longitude (0, 0) – which means the starting vector is (sphereRadius, 0, 0). With the previous version (image 1) we don't get a good result even when the starting point is (0, 0), so I think we're moving towards a solution, but the procedure we follow to get this route seems a little "strange", maybe?

    In the following image you get a view of the problem we have when the starting point is not (0, 0). As you can see, the starting point is not the (sphereRadius, 0, 0) vector, and the destination point (which is correctly drawn!) is not on the route.
    The magenta point (the one which lies on the route) is the route's ending point rotated about the center of the sphere by (-startLatitude, 0, -startLongitude). This means that if I calculate a rotation matrix and apply it to every point on the route, maybe I'll get the real route – but I'm starting to think there's a better way to do this. Maybe I should try to get the plane through the center of the sphere and the route points, intersect it with the sphere, and get the geodesic? But how?

    Sorry for being way too verbose, and maybe for incorrect English, but this thing is blowing my mind!

    EDIT: This code version is related to the first image:

    public void setRouteStart(double lat, double lng) {
        EulerAngles tmp = new EulerAngles(Math.toRadians(lat), 0, -Math.toRadians(lng));
        // set the route start quaternion
        qtStart.setInertialToObject(tmp);
        // do other stuff like drawing start point...
    }

    // sets the route destination ("imposta destinazione")
    public void impostaDestinazione(double lat, double lng) {
        // (was "AngoliEulero" – the same Euler-angles class, half-translated)
        EulerAngles tmp = new EulerAngles(Math.toRadians(lat), 0, -Math.toRadians(lng));
        qtEnd.setInertialToObject(tmp);
        // do other stuff like drawing dest point...
    }

    public V3D interpolate(double totalTime, double t) {
        double _t = t / totalTime;
        Quaternion q = Quaternion.Slerp(qtStart, qtEnd, _t);
        // library call – presumably fills the matInt rotation matrix from the quaternion
        RotationMatrix.inertialQuatToIObject(q);
        V3D p = matInt.inertialToObject(V3D.Xaxis.scale(sphereRadius));
        // other stuff, like drawing point...
        return p;
    }

    // mostly taken from a book!
    public static Quaternion Slerp(Quaternion q0, Quaternion q1, double t) {
        double cosO = q0.dot(q1);
        double q1w = q1.w;
        double q1x = q1.x;
        double q1y = q1.y;
        double q1z = q1.z;
        // take the shorter arc: flip one quaternion if the dot product is negative
        if (cosO < 0.0f) {
            q1w = -q1w;
            q1x = -q1x;
            q1y = -q1y;
            q1z = -q1z;
            cosO = -cosO;
        }
        double sinO = Math.sqrt(1.0f - cosO * cosO);
        double O = Math.atan2(sinO, cosO);
        double oneOverSinO = 1.0f / sinO; // was "senoOmega" – an untranslated leftover name
        double k0 = Math.sin((1.0f - t) * O) * oneOverSinO; // k0, k1 were undeclared
        double k1 = Math.sin(t * O) * oneOverSinO;
        // Interpolate
        return new Quaternion(
            k0 * q0.w + k1 * q1w,
            k0 * q0.x + k1 * q1x,
            k0 * q0.y + k1 * q1y,
            k0 * q0.z + k1 * q1z);
    }

    A little dump of what I get (again, check image 1):

    Route info: Sphere radius and center: 200,000, (0.0, 0.0, 0.0)
    Route start: lat 0,000°, lng 0,000° @v: (200,000, 0,000, 0,000), |v| = 200,000
    Route end: lat 30,000°, lng 30,000° @v: (150,000, 86,603, 100,000), |v| = 200,000
    Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
    Qt start: (1,000, 0,000, -0,000, 0,000); 0,000°; (1,000, 0,000, 0,000)
    Qt end: (0,933, 0,067, -0,250, 0,250); 42,181°; (0,186, -0,695, 0,695)

    Route start: lat 30,000°, lng 10,000° @v: (170,574, 30,077, 100,000), |v| = 200,000
    Route end: lat 80,000°, lng -50,000° @v: (22,324, -26,604, 196,962), |v| = 200,000
    Qt dump: (w, x, y, z), rot. angle°, (x, y, z) rot. axis
    Qt start: (0,962, 0,023, -0,258, 0,084); 31,586°; (0,083, -0,947, 0,309)
    Qt end: (0,694, -0,272, -0,583, -0,324); 92,062°; (-0,377, -0,809, -0,450)
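    One possible direction for the plane-through-the-center idea: the great circle through two position vectors can be interpolated directly on the vectors themselves, with no quaternions involved, and it works for any starting point. This is only a hedged sketch of that alternative (standard vector slerp; the method name and standalone style are mine, not part of the library above):

    // Great-circle (geodesic) interpolation between two points on a sphere.
    // a and b are 3-element position vectors of equal length; t runs from 0 to 1.
    public static double[] greatCircleLerp(double[] a, double[] b, double t) {
        double r2 = a[0] * a[0] + a[1] * a[1] + a[2] * a[2];
        // Angle between the two position vectors, clamped to avoid NaN from rounding
        double cosOmega = (a[0] * b[0] + a[1] * b[1] + a[2] * b[2]) / r2;
        double omega = Math.acos(Math.max(-1.0, Math.min(1.0, cosOmega)));
        double sinOmega = Math.sin(omega);
        if (sinOmega < 1e-9) {
            // Coincident points (or antipodal ones, where the great circle is ambiguous)
            return a.clone();
        }
        double k0 = Math.sin((1.0 - t) * omega) / sinOmega;
        double k1 = Math.sin(t * omega) / sinOmega;
        // The result stays in the plane through the sphere's center containing a and b,
        // i.e. on the great circle – the geodesic
        return new double[] {
            k0 * a[0] + k1 * b[0],
            k0 * a[1] + k1 * b[1],
            k0 * a[2] + k1 * b[2]
        };
    }

    Sampling t over [0, 1] then traces the shortest route between the two points, wherever the start sits on the sphere.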


  • The challenge of communicating externally with IRM secured content

    - by Simon Thorpe
    I am often asked by customers how they should handle sending IRM-secured documents to external parties. Their concern is that when they use IRM to secure sensitive information they need to share outside their business, third parties may be unable to install the software that gives them access to the information. It is a very legitimate question, and one I've had to answer many times in the past 10 years whilst helping customers plan successful IRM deployments.

    The operating system does not provide the required level of content security

    The problem arises from what IRM delivers: persistent security for your sensitive information, wherever it resides and whenever it is in use. Oracle IRM gives customers an array of features that help ensure sensitive information in an IRM document or email is always protected and only accessed by authorized users using legitimate applications. Examples of such functionality are:

    - Control of the clipboard, either by disabling it completely in the opened document or by allowing the cutting and pasting of information between secured IRM documents but not into insecure applications.
    - Protection against programmatic access to the document. Office documents and PDF documents can be accessed by other applications and scripts. With Oracle IRM we have to protect against this, to ensure content cannot be leaked by someone writing a simple program.
    - Securing of decrypted content in memory. At some point during the process of opening and presenting a sealed document to an end user, we must decrypt it and give it to the application (Adobe Reader, Microsoft Word, Excel etc.). This process must be secure, so that someone cannot simply get access to the decrypted information.

    The operating system alone just doesn't have the functionality to deliver these types of features. This is why, for every IRM technology, there must be some extra software installed, and typically this software requires administrative rights to install. The fact is that if you want very strong security and access control over a document you are going to send to someone beyond your network infrastructure, there must be some software to provide that functionality.

    Simple installation with Oracle IRM

    The software used to control access to Oracle IRM sealed content is called the Oracle IRM Desktop. It is a small, free piece of software, roughly 12MB in size. It delivers the functionality for everything a user needs to work with an Oracle IRM solution: support for all the formats we seal, the storage and transparent synchronization of user rights and, unique to Oracle, the ability to search inside sealed files stored on the local computer.

    In Oracle we've made every technical effort to ensure that installing this software is as simple as possible. In situations where the user's computer is part of the enterprise, the software is typically deployed using existing technologies such as Microsoft Systems Management Server or Active Directory Group Policies. However, when sending sealed content externally, you cannot automatically install software on the end user's machine; you need to rely on them to download and install it themselves. Again, we've made every effort for this manual install process to be as simple as we can, from the small download size of the software itself through to the simple installation process, and most end users are able to install it and access sealed content very quickly.
    You can see for yourself how easily this is done by walking through our free and easy self-service demonstration of using sealed content.

    How to handle objections and ensure there is value

    However, the fact still remains that end users may object to installing the software, or may simply be unable to install it themselves due to a lack of permissions. This is often a problem with any technology that requires specialized software to access a new type of document. In Oracle, over the past 10 years, we've learned many ways to get over this barrier of getting software deployed by external users.

    First, and I would say of most importance: the content MUST have some value to the person you are asking to install software. Without some type of value proposition, you are going to find it very difficult to get past objections to installing the IRM Desktop. Imagine if you were going to secure the weekly campus restaurant menu and send it to contractors. Their initial response would be, "why on earth are you asking me to download some software just to access your menu!?" A valid objection... there is no value to the user in doing this.

    Now consider the scenario where you are sending a contractor their employment contract, which contains their address, social security number and bank account details. Are they likely to take 5 minutes to install the IRM Desktop? You bet they are, because there is real value in doing so and they understand why you are doing it. They want their personal information to be handled securely, and a quick download and install of some software is a small task in comparison to dealing with the loss of this information.

    Be clear in communicating this value

    So when sending sealed content to people externally, you must be clear in communicating why you are using an IRM technology and why they need to install some software to access the content. Do not try to avoid the issue; be clear and upfront about it. In doing so you will significantly reduce the "I didn't know I needed to do this..." responses, and also gain respect for being straightforward.

    One customer I worked with called me 6 months after their initial deployment of Oracle IRM, panicking that a partner they had started to share their engineering documents with refused to install any software to access this highly confidential intellectual property. I explained that they had to communicate to the partner why they were doing this. I told them to go back with the statement that "the company takes protecting its intellectual property seriously and has decided to use IRM to control access to engineering documents", and that if the partner didn't respect this decision, they would find another company that would. The result? A few days later the partner had made the Oracle IRM Desktop part of their company's approved list of software.

    Companies are successful when sending sealed content to third parties

    We have many, many customers who send sensitive content to third parties. Some customers actually sell access to Oracle IRM protected content, and therefore 99% of their users are external to their business; one in particular has sold content to hundreds of thousands of external users. Oracle themselves use the technology to secure M&A documents, payroll data and security assessments which go beyond the traditional enterprise security perimeter.
    Pretty much every company that deploys Oracle IRM will at some point send those documents to people outside of the company; these customers must be successful, otherwise Oracle IRM wouldn't be successful. Because our software is used by a wide variety of companies, some of whom use it to sell content, I've often run into people I'm sharing a sealed document with who already have the IRM Desktop installed, from accessing content from another company.

    The future

    In summary, I would say that yes, this is a hurdle that many customers are concerned about, but we see much evidence that in practice people leap that hurdle with relative ease, as long as you are good at communicating the value of using IRM and take measures to ensure end users can easily go through the installation process. We are constantly developing new ideas to reduce this hurdle, and maybe one day the operating systems will give us rich enough security functionality to need no software installation at all. Until then, Oracle IRM is by far the easiest solution to balance security and usability for your business. If you would like to evaluate it for yourselves, please contact us.

