Search Results

Search found 4596 results on 184 pages for 'moderately interesting'.


  • C# 'could not found' existing method

    - by shybovycha
    Greetings! I've been fooling around (a bit) with C# and its assemblies, and so I've found such an interesting feature as dynamically loading assemblies and invoking their class members. A bit of Google and here I am, writing some kind of 'assembly explorer'. (I've used some portions of code from here, here and here, and none of them gave the expected results.) But I've found a small bug: when I tried to invoke a class method from the assembly I'd loaded, the application raised a MissingMethodException. I'm sure the DLL I'm loading contains the class and method I'm trying to invoke (my app ensures me, as does RedGate's .NET Reflector). The main application code seems to be okay, and I'm starting to wonder if I was wrong with my DLL... Ah, and I've put both projects into one solution, but I don't think that should cause any trouble. And yes, the DLL project has the 'class library' target while the main application has the 'console application' target. So, the question is: what's wrong with them? Here is some source code.

    DLL source:

        using System;

        namespace ClassLibrary1
        {
            public class Class1
            {
                public void Main()
                {
                    System.Console.WriteLine("Hello, World!");
                }
            }
        }

    Main application source:

        using System;
        using System.Reflection;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Assembly asm = Assembly.LoadFrom(@"a\long\long\path\ClassLibrary1.dll");
                    try
                    {
                        foreach (Type t in asm.GetTypes())
                        {
                            if (t.IsClass && t.FullName.EndsWith(".Class1"))
                            {
                                object obj = Activator.CreateInstance(t);
                                object res = t.InvokeMember("Main",
                                    BindingFlags.Default | BindingFlags.InvokeMethod,
                                    null, obj, null); // exception is raised from here
                            }
                        }
                    }
                    catch (Exception e)
                    {
                        System.Console.WriteLine("Error: {0}", e.Message);
                    }
                    System.Console.ReadKey();
                }
            }
        }

    UPD: worked for one case: when the DLL method takes no arguments. DLL class (also works if the method is not static):

        public class Class1
        {
            public static void Main()
            {
                System.Console.WriteLine("Hello, World!");
            }
        }

    Method invoke code:

        object res = t.InvokeMember("Main",
            BindingFlags.Default | BindingFlags.InvokeMethod,
            null, null, null);
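
    A hedged sketch of the failing case, a method that takes arguments (the Greet method and its string parameter below are made up for illustration): the object[] passed as the last argument to InvokeMember must match the target method's parameter list, and for a non-static method the instance must be passed as the target; a mismatch in either tends to surface as MissingMethodException.

        using System;
        using System.Reflection;

        class InvokeSketch
        {
            static void Run(Assembly asm)
            {
                Type t = asm.GetType("ClassLibrary1.Class1");
                object obj = Activator.CreateInstance(t);

                // instance method taking one string argument (hypothetical "Greet")
                object res = t.InvokeMember("Greet",
                    BindingFlags.Public | BindingFlags.Instance | BindingFlags.InvokeMethod,
                    null, obj, new object[] { "World" });

                // the MethodInfo route surfaces signature mismatches more clearly
                MethodInfo mi = t.GetMethod("Greet", new[] { typeof(string) });
                if (mi != null)
                {
                    res = mi.Invoke(obj, new object[] { "World" });
                }
            }
        }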

    Read the article

  • Hibernate/Spring: getHibernateTemplate().save(...) Freezes/Hangs

    - by ashes999
    I'm using Hibernate and Spring with the DAO pattern (all Hibernate dependencies in a *DAO.java class). I have nine unit tests (JUnit) which create some business objects, save them, and perform operations on them; the objects are kept in a hash (so I'm reusing the same objects all the time). My JUnit setup method calls my DAO.deleteAllObjects() method, which calls getSession().createSQLQuery("DELETE FROM <tablename>").executeUpdate() for my business object table (just one). One of my unit tests (#8/9) freezes. I presumed it was a database deadlock, because the Hibernate log file shows my delete statement last. However, debugging showed that it's simply HibernateTemplate.save(someObject) that's freezing. (Eclipse shows that it's freezing on HibernateTemplate.save(Object), line 694.) Also interesting to note is that running this test by itself (not in the suite of 9 tests) doesn't cause any problems. How on earth do I troubleshoot and fix this? Also, I'm using @Entity annotations, if that matters.

    Edit: I removed reuse of my business objects (using unique objects in every method); it didn't make a difference (still freezes).

    Edit: This started trickling into other tests, too (I can't run more than one test class without something freezing).

    Transaction configuration:

        <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
            <property name="dataSource" ref="dataSource" />
        </bean>

        <tx:advice id="txAdvice" transaction-manager="txManager">
            <!-- the transactional semantics... -->
            <tx:attributes>
                <!-- all methods starting with 'get' are read-only -->
                <tx:method name="get*" read-only="true" />
                <tx:method name="find*" read-only="true" />
                <!-- other methods use the default transaction settings (see below) -->
                <tx:method name="*" />
            </tx:attributes>
        </tx:advice>

        <!-- my bean which is exhibiting the hanging behavior -->
        <aop:config>
            <aop:pointcut id="beanNameHere" expression="execution(* com.blah.blah.IMyDAO.*(..))" />
            <aop:advisor advice-ref="txAdvice" pointcut-ref="beanNameHere" />
        </aop:config>
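
    One hedged observation, not a confirmed diagnosis: the txManager above is a JDBC DataSourceTransactionManager, while the DAO works through Hibernate sessions, so the advice never participates in Hibernate's session and flush handling. The usual pairing for HibernateTemplate is HibernateTransactionManager; a minimal sketch, assuming a sessionFactory bean is already defined elsewhere in the context:

        <!-- sketch: Hibernate-aware transaction manager (assumes a "sessionFactory" bean) -->
        <bean id="txManager"
              class="org.springframework.orm.hibernate3.HibernateTransactionManager">
            <property name="sessionFactory" ref="sessionFactory" />
        </bean>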

    Read the article

  • Is it possible to create a C++ factory system that can create an instance of any "registered" object

    - by chrensli
    Hello, I've spent my entire day researching this topic, so it is with some scattered knowledge on the topic that I come to you with this inquiry. Please allow me to describe what I am attempting to accomplish, and maybe you can either suggest a solution to the immediate question or another way to tackle the problem entirely. I am trying to mimic something related to how XAML files work in WPF, where you essentially instantiate an object tree from an XML definition. If this is incorrect, please inform. This issue is otherwise unrelated to WPF, C#, or anything managed; I solely mention it because it is a similar concept. So, I've created an XML parser class already, and generated a node tree based on ObjectNode objects. ObjectNode objects hold a string value called type, and they have an std::vector of child ObjectNode objects. The next step is to instantiate a tree of objects based on the data in the ObjectNode tree. This intermediate ObjectNode tree is needed because the same ObjectNode tree might be instantiated multiple times or delayed as needed. The tree of objects being created is such that the nodes in the tree are descendants of a common base class, which for now we can refer to as MyBase. Leaf nodes can be of any type, not necessarily derived from MyBase. To make this more challenging, I will not know what types of MyBase-derived objects might be involved, so I need to allow for new types to be registered with the factory. I am aware of boost's factory. Their docs have an interesting little design paragraph on this page:

        o We may want a factory that takes some arguments that are forwarded to the constructor,
        o we will probably want to use smart pointers,
        o we may want several member functions to create different kinds of objects,
        o we might not necessarily need a polymorphic base class for the objects,
        o as we will see, we do not need a factory base class at all,
        o we might want to just call the constructor - without new - to create an object on the stack, and
        o finally we might want to use customized memory management.

    I might not be understanding this all correctly, but that seems to state that what I'm trying to do can be accomplished with boost's factory. But all the examples I've located seem to describe factories where all objects are derived from a base type. Any guidance on this would be greatly appreciated. Thanks for your time!
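
    For what it's worth, the registration machinery itself can be small. A minimal sketch of a string-keyed factory registry, shown in C# for brevity; in C++ the analogous shape is a std::map<std::string, std::function<MyBase*()>> that each new type registers into:

        // Hedged concept sketch: a registry of named constructors. Types register
        // a creation delegate under a string key; the factory never needs to know
        // the concrete types at compile time.
        using System;
        using System.Collections.Generic;

        public abstract class MyBase { }

        public static class Factory
        {
            private static readonly Dictionary<string, Func<MyBase>> creators =
                new Dictionary<string, Func<MyBase>>();

            public static void Register(string typeName, Func<MyBase> creator)
            {
                creators[typeName] = creator;
            }

            public static MyBase Create(string typeName)
            {
                return creators[typeName](); // throws if the type was never registered
            }
        }

        // usage: each module registers its own types at startup, e.g.
        //   Factory.Register("obj1_t", () => new Obj1());
        //   MyBase node = Factory.Create(objectNode.Type);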

    Read the article

  • Why is Dictionary.First() so slow?

    - by Rotsor
    Not a real question because I already found out the answer, but still an interesting thing. I always thought that a hash table is the fastest associative container if you hash properly. However, the following code is terribly slow. It executes only about 1 million iterations and takes more than 2 minutes on a Core 2 CPU. The code does the following: it maintains the collection todo of items it needs to process. At each iteration it takes an item from this collection (doesn't matter which item), deletes it, processes it if it wasn't processed already (possibly adding more items to process), and repeats until there are no items to process. The culprit seems to be the Dictionary.Keys.First() operation. The question is: why is it slow?

        Stopwatch watch = new Stopwatch();
        watch.Start();

        HashSet<int> processed = new HashSet<int>();
        Dictionary<int, int> todo = new Dictionary<int, int>();
        todo.Add(1, 1);
        int iterations = 0;
        int limit = 500000;
        while (todo.Count > 0)
        {
            iterations++;
            var key = todo.Keys.First();
            var value = todo[key];
            todo.Remove(key);
            if (!processed.Contains(key))
            {
                processed.Add(key);
                // process item here
                if (key < limit)
                {
                    todo[key + 13] = value + 1;
                    todo[key + 7] = value + 1;
                } // doesn't matter much how
            }
        }
        Console.WriteLine("Iterations: {0}; Time: {1}.", iterations, watch.Elapsed);

    This results in:

        Iterations: 923007; Time: 00:02:09.8414388.

    Simply changing Dictionary to SortedDictionary yields:

        Iterations: 499976; Time: 00:00:00.4451514.

    That's 300 times faster while having only 2 times fewer iterations. The same happens in Java, using HashMap instead of Dictionary and keySet().iterator().next() instead of Keys.First().
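
    A hedged note on the likely mechanism, since the question asks "why": Keys.First() spins up a fresh enumerator on every call, and the Dictionary enumerator scans the internal entry array from slot 0, skipping slots freed by Remove(). As the low-numbered slots empty out over the run, each First() call scans further, so the loop degrades toward quadratic time overall. If any item will do, an explicit worklist sidesteps the scan entirely; a minimal sketch of the same traversal:

        // Minimal sketch: same traversal, but "todo" is an explicit stack,
        // so taking "some item" is O(1) instead of scanning a Dictionary.
        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class WorklistSketch
        {
            static void Main()
            {
                var watch = Stopwatch.StartNew();
                var processed = new HashSet<int>();
                var todo = new Stack<KeyValuePair<int, int>>();
                todo.Push(new KeyValuePair<int, int>(1, 1));

                int iterations = 0, limit = 500000;
                while (todo.Count > 0)
                {
                    iterations++;
                    var item = todo.Pop();           // O(1), no scan
                    if (processed.Add(item.Key))     // Add returns false if already seen
                    {
                        if (item.Key < limit)
                        {
                            todo.Push(new KeyValuePair<int, int>(item.Key + 13, item.Value + 1));
                            todo.Push(new KeyValuePair<int, int>(item.Key + 7, item.Value + 1));
                        }
                    }
                }
                Console.WriteLine("Iterations: {0}; Time: {1}.", iterations, watch.Elapsed);
            }
        }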

    Read the article

  • C++ dynamic type construction and detection

    - by KneLL
    There was an interesting problem in C++, but it concerns architecture more than anything. There are many (10, 20, 40, etc.) classes that describe some characteristics (mix-in classes), for example:

        struct Base { virtual ~Base() {} };
        struct A : virtual public Base { int size; };
        struct B : virtual public Base { float x, y; };
        struct C : virtual public Base { bool some_bool_state; };
        struct D : virtual public Base { string str; };
        // ....

    The primary module declares and exports a function (for simplicity, just function declarations without classes):

        // .h file
        void operate(Base *pBase);

        // .cpp file
        void operate(Base *pBase) {
            // ....
        }

    Any other module can have code like this:

        #include "mixins.h"
        #include "primary.h"

        class obj1_t : public A, public C, public D {};
        class obj2_t : public B, public D {};
        // ...

        void Pass() {
            obj1_t obj1;
            obj2_t obj2;
            operate(&obj1);
            operate(&obj2);
        }

    The question is how to know the real type of a given object in operate() without dynamic_cast and without type information in the classes (constants, etc.). operate() is used with big arrays of objects in small time periods, and dynamic_cast is too slow for it. And I don't want to include constants (enum obj_type { ... }) because this is not the OOP way.

        // module operate.cpp
        void some_operate(Base *pBase) {
            processA(pBase);
            processB(pBase);
        }
        void processA(A *pA) { }
        void processB(B *pB) { }

    I cannot directly pass a pBase to these functions. And it's impossible to enumerate all possible combinations of classes, because I can add new classes just by including new .h files. One solution that came to mind: in the editor application I can use a composite container:

        struct CompositeObject {
            vector<Base *> parts;
        };

    But the editor does not need time optimization and can use dynamic_cast on the parts to determine the exact type. In operate() I cannot use this solution. So, is it possible to solve this problem without dynamic_cast and type information? Or should I use another architecture?
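
    A concept sketch of the usual alternative, double dispatch (the visitor pattern), where one virtual Accept call replaces all runtime type probing. Shown in C# and deliberately flattened to single inheritance, so it ignores the multiple-inheritance wrinkle in the question; in C++ the same shape is a virtual accept(Visitor&) on Base:

        // Sketch of double dispatch: each concrete type overrides Accept once,
        // and operate() never inspects types or casts at all.
        using System;

        interface IVisitor
        {
            void VisitA(A a);
            void VisitB(B b);
        }

        abstract class Base
        {
            public abstract void Accept(IVisitor v); // one virtual call, no casts
        }

        class A : Base
        {
            public int Size;
            public override void Accept(IVisitor v) { v.VisitA(this); }
        }

        class B : Base
        {
            public float X, Y;
            public override void Accept(IVisitor v) { v.VisitB(this); }
        }

        class Operate : IVisitor
        {
            public void Run(Base[] objects)
            {
                foreach (var o in objects) o.Accept(this); // no dynamic_cast equivalent
            }
            public void VisitA(A a) { /* processA */ }
            public void VisitB(B b) { /* processB */ }
        }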

    Read the article

  • Weird seg fault problem

    - by bluedaemon
    Greetings, I'm having a weird seg fault problem. My application dumps a core file at runtime. After digging into it, I found it died in this block:

        #include <lib1/c.h>
        ...
        x::c obj;
        obj.func1();

    I defined class c in a library lib1:

        namespace x {
            struct c {
                c();
                ~c();
                void func1();
                vector<char *> _data;
            };
        }

        x::c::c() { }

        x::c::~c() {
            for ( int i = 0; i < _data.size(); ++i )
                delete _data[i];
        }

    I could not figure it out for some time, till I ran nm on the lib1.so file: there are more function definitions than I defined:

        x::c::c()
        x::c::c()
        x::c::~c()
        x::c::~c()
        x::c::func1()
        x::c::func2()

    After searching in the code base, I found that someone else defined a class with the same name in the same namespace, but in another library, lib2, as follows:

        namespace x {
            struct c {
                c();
                ~c();
                void func2();
                vector<string> strs_;
            };
        }

        x::c::c() { }
        x::c::~c() { }

    My application links to lib2, which has a dependency on lib1. This interesting behavior brings several questions:

    1. Why would it even work? I would expect a "multiple definitions" error while linking against lib2 (which depends upon lib1), but never got one. The application seems to be doing what's defined in func1, except that it dumps a core at runtime.
    2. After attaching a debugger, I found that my application calls the ctor of class c in lib2, then calls func1 (defined in lib1). When going out of scope it calls the dtor of class c in lib2, where the seg fault occurs. Can anybody teach me how this could even occur?
    3. How can I prevent such problems from happening again? Is there any C++ syntax I can use?

    Forgot to mention: I'm using g++ 4.1 on RHEL4. Thank you very much!

    Read the article

  • PHP/MySQL Interview - How would you have answered?

    - by martincarlin87
    I was asked this interview question so thought I would post it here to see how other users would answer: Please write some code which connects to a MySQL database (any host/user/pass), retrieves the current date & time from the database, compares it to the current date & time on the local server (i.e. where the application is running), and reports on the difference. The reporting aspect should be a simple HTML page, so that in theory this script can be put on a web server, set to point to a particular database server, and it would tell us whether the two servers’ times are in sync (or close to being in sync). This is what I put:

        // Connect to database server
        $dbhost = 'localhost';
        $dbuser = 'xxx';
        $dbpass = 'xxx';
        $dbname = 'xxx';
        $conn = mysql_connect($dbhost, $dbuser, $dbpass) or die (mysql_error());

        // Select database
        mysql_select_db($dbname) or die(mysql_error());

        // Retrieve the current time from the database server
        $sql = 'SELECT NOW() AS db_server_time';

        // Execute the query
        $result = mysql_query($sql) or die(mysql_error());

        // Since query has now completed, get the time of the web server
        $php_server_time = date("Y-m-d h:m:s");

        // Store query results in an array
        $row = mysql_fetch_array($result);

        // Retrieve time result from the array
        $db_server_time = $row['db_server_time'];

        echo $db_server_time . '<br />';
        echo $php_server_time;

        if ($php_server_time != $db_server_time) {
            // Server times are not identical
            echo '<p>Database server and web server are not in sync!</p>';

            // Convert the time stamps into seconds since 01/01/1970
            $php_seconds = strtotime($php_server_time);
            $sql_seconds = strtotime($db_server_time);

            // Subtract smaller number from biggest number to avoid getting a negative result
            if ($php_seconds > $sql_seconds) {
                $time_difference = $php_seconds - $sql_seconds;
            } else {
                $time_difference = $sql_seconds - $php_seconds;
            }

            // convert the time difference in seconds to a formatted string displaying hours, minutes and seconds
            $nice_time_difference = gmdate("H:i:s", $time_difference);
            echo '<p>Time difference between the servers is ' . $nice_time_difference;
        } else {
            // Timestamps are exactly the same
            echo '<p>Database server and web server are in sync with each other!</p>';
        }

    Yes, I know that I have used the deprecated mysql_* functions, but that aside, how would you have answered? That is, what changes would you make and why? Are there any factors I have omitted which I should take into consideration? The interesting thing is that my results always seem to be an exact number of minutes apart when executed on my hosting account:

        2012-12-06 11:47:07
        2012-12-06 11:12:07

    Read the article

  • Users and roles in context

    - by Eric W.
    I'm trying to get a sense of how to implement the user/role relationships for an application I'm writing. The persistence layer is Google App Engine's datastore, which places some interesting (but generally beneficial) constraints on what can be done. Any thoughts are appreciated. It might be helpful to keep things very concrete.

    I would like there to be organizations, users, test content, and test administrations (records of tests that have been taken). A user can have the role of participant (test-taker), contributor of test material, or both. A user can also be a member of zero or more organizations.

    In the role of participant, the user can see the previous administrations of tests he or she has taken. The user can also see a test administration of another participant if that participant has given the user authorization. The user can see test material that has been made public, and he or she can see restricted content as a participant during a specific administration of a test for which that user has been authorized by an organization.

    As a member of an organization, the user can see restricted content in the role of contributor, and he or she might or might not also be able to edit the content. Each organization should have one or more administrators that can determine whether a member can see and edit content and determine who has admin privileges. There should also be one or more application-wide superusers that can troubleshoot and solve problems. Members of organizations can see the administrations of tests that the participants concerned have authorized them to see, and they can see anonymous data if no authorization has been given. A user cannot see the test results of another user in any other circumstances.

    Since there are no joins in the App Engine datastore, it might be necessary to have things less normalized than usual for the typical SQL database in order to ensure that queries that check permissions are fast (e.g., ones that determine whether a link is to be displayed).

    My questions are: How do I move forward on this? Should I spend a lot of time up front in order to get the model right, or can I iterate several times and gradually roll in additional complexity? Does anyone have some general ideas about how to break things up in this instance? Are there any GAE libraries that handle roles in a way that is compatible with this arrangement?

    Read the article

  • How To Start Your Own Professional Blog with WordPress

    - by Matthew Guay
    Would you like to start your own blog or website?  With a free WordPress.com account, it’s easy to get started creating your own professional quality blog site. This is the first part in a series on how to create your own professional quality blog site. No, we’re not talking about some cheapo looking blog from Blogger or something on Facebook, but creating a quality blog you can be proud of and present to millions of readers online. WordPress is one of the most popular blogging platforms, powering hundreds of high-profile websites and blogs around the world.  It’s both powerful and easy to use, which makes it great whether you’re just starting out or are a blogging pro.  To start out with your blogging project, WordPress is completely free, and you can use the online interface or install the WordPress software on your own server and blog from there.

    Getting Started

    You can start a blog in just a few minutes.  Head over to WordPress.com and click Sign up now on the right-hand side of the main page. Enter a username and password, check that you agree with the legal terms, select the “Gimme a blog” bullet, and click Next. WordPress may inform you that your username is already taken; simply choose a new one and try again. Next, choose a domain for your blog.  This will be the address for your site, and cannot be changed, so be sure to choose exactly what you want.  If you’d prefer your address to be yourname.com instead of yourname.wordpress.com, you can add your own domain for a fee after your blog is set up…but we’ll cover that later. Once you click signup, you will be sent a confirmation email.  While you wait for the email to arrive, you can go ahead and enter your name and a short bio about yourself. When you receive your confirmation email, click the link.  Congratulations; you now have your own blog! You can view your new blog immediately, though the default theme isn’t very interesting without your content and pictures. Back on the page you opened from the email, click Login to access your blog’s administration page and start adding stuff to your blog.  You can also access your blog’s admin page anytime from yourname.wordpress.com/admin, substituting your own blog name for yourname. Enter your username and password, then click Log in to get started.

    Adding Content to your WordPress.com Blog

    When you sign in to your WordPress blog, you’ll first see the WordPress Admin page.  Here you can see recent posts and comments, and you can see stats of how many people have visited your site.  You can also access all of your blog tools and settings right from this page. To add a new post to your blog, click the Posts link on the left, then click “Add New” either on the left menu or at the top of the Edit Posts page.  Or, if you want to edit the default first post, hover over it and select Edit. Or click the New Post button at the top of the page.  This menu bar is always visible whenever you’re logged in, so it’s an easy way to add a post. The editor lets you easily write anything you want in a Microsoft Word-style editor.  You can format your text, add lists, links, quotes, and more.  When you’re ready to share your content with the world, click Publish on the right side. To add pictures or other files, click the picture icon beside “Upload/Insert”.  Your free blog account can store up to 3GB of pictures and documents, which will definitely give you a good start. Click Select Files, and then choose the pictures or documents you want to add to your post.
    When the pictures have uploaded, you can add a caption and choose how to position the picture.  When you’re finished, select “Insert into Post”.  Or, if you want to add a video, click the video button.  You have to add a paid upgrade to upload videos directly, but you can add YouTube and other online videos for free. Click the “From URL” tab, then paste the link to the YouTube video and click Insert into post. If you’re a code geek, click the HTML tab in the editor and edit the HTML of your blog post the geeky way. Once you’ve added all your content and edited it the way you want, click the Publish button on the right of the editor.  Or, you can click Preview to make sure it looks right, and then click Publish. Here’s our blog with the new blog post containing a picture and video.  While you’re getting to know your way around the controls in WordPress, the Preview feature will be your best friend while you try to organize the content to your liking.

    Conclusion

    It only takes a couple of minutes to get started blogging at WordPress.com. Whether you want to write about your daily life, share pictures of your children, or review the latest books and gadgets, WordPress.com is a great place to get started for free.  We’ve only covered a small portion of the WordPress features, but this should get you started. Check back for more WordPress and blogging coverage coming up soon!

    Links

    Signup for a free WordPress.com account

    Read the article

  • Week in Geek: USDA Chooses Microsoft for Cloud Services Edition

    - by Asian Angel
    This week we learned how to create geeky LED holiday lights with old bottles, dig deeper into Windows Defrag via the command prompt, use Google Chrome’s drag/drop feature to upload files more easily, find great gift recommendations by looking through the How-To Geek holiday gift guide, and have fun adding Merry Christmas fonts to our computers. Photo by ntr23.

    Random Geek Links

    It has been a busy week, so we have extra news link goodness with information that is good for you to know.

    - USDA making the move to Microsoft: The U.S. Department of Agriculture has announced that it has chosen Microsoft to host things like e-mail, instant messaging, and collaboration through the software giant’s Business Productivity Online Suite.
    - Google says it was cut off from USDA project bid: Google is claiming that it was not given a chance to bid on a cloud-computing project for the U.S. Department of Agriculture, for which the contract was awarded to rival Microsoft.
    - Apache is being forced into a Java fork: When Oracle rolled over Apache and Google’s objections to its Java plans in December, the scene was set for Apache to leave and, eventually, force a Java code fork.
    - Tumblr explains daylong outage: After experiencing an outage that started on Sunday afternoon and stretched through most of the day yesterday, Tumblr has explained what happened.
    - Google demos Chrome OS, launches pilot program: During a press briefing this week in San Francisco, Google launched the Chrome application store and demonstrated Chrome OS, its browser-centric netbook operating system.
    - Don’t expect Spotify in U.S. this holiday season: As of last week, Spotify had yet to sign a single licensing deal with a major label, after spending more than a year negotiating, multiple music sources told CNET.
    - December 2010 Patch Tuesday will come with most bulletins ever: According to the Microsoft Security Response Center, Microsoft will issue 17 Security Bulletins addressing 40 vulnerabilities on Tuesday, December 14. It will also host a webcast to address customer questions the following day.
    - Hacker plants back door in Symbian firmware: Indian hacker Atul Alex has had a look at the firmware for Symbian S60 smartphones and come up with a back door for it.
    - PC quarantines raise tough complexities: The concept of quarantining PCs to prevent widespread infection is “interesting, but difficult to implement, with far too many problems”, said security experts.
    - Symantec: DDoS attacks hard to defend: It has surfaced that the distributed denial of service (DDoS) attacks on Visa and MasterCard Web sites on Wednesday were carried out by a toolkit known as low orbit ion cannon (LOIC).
    - Web Sockets and the risks of unfinished standards: Enthusiasm for a promising new standard called Web Sockets has quickly cooled in some quarters as a potential security problem led some browser makers to hastily postpone support.
    - Internet Explorer 9 to get tracking protection: Microsoft is making changes to Internet Explorer 9’s security features that will better enable users to keep sites from tracking their activity across browsing sessions.
    - NASA sold PCs with sensitive data: NASA failed to remove sensitive data from computers that it sold, according to an audit report released this week.
    - Cybercrooks create fake Amazon receipts: The bad guys have created yet another online scam, this one involving fake Amazon receipts.
    - World of Warcraft character move fees waived: Until December 22, Blizzard will allow free realm transfers from 25 highly populated servers to alleviate log-in queues or performance issues. (The free transfers are one-way and one-time only.)
    - SpaceX Dragon reaches orbit atop a Falcon with a fiery tail: The Space Exploration Technologies corporation has become the first nongovernmental entity to put a vehicle into low Earth orbit.

    Geek Video of the Week

    If birds have wings, then why are the Angry Birds using slingshots? Photo by Dorkly Bits. Wait… Birds have Wings, Why are the Angry Ones Using Slingshots?

    Sysadmin Geek Tips

    How To Setup Email Alerts on Linux Using Gmail or SMTP: Linux machines may require administrative intervention in countless ways, but without manually logging into them, how would you know about it? Here’s how to set up email notifications so you know when your machines want some tender love and attention.

    Random TinyHacker Links

    - Red Panda Webcam: Support Firefox and the Knoxville Zoo’s Red Panda program.
    - Christmas Icons (Icons we like): Superb set of holiday icons by lgp85 at deviantArt. Download the .zip and use as .png, or convert to .ico at Convertico.com or with the tiny app Imagicon.

    Super User Questions

    Enjoy reading the great answers to this week’s popular questions from Super User:

    - Useful USB boot disks?
    - DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder?
    - What are other ways to backup my files if I do not have an external drive?
    - Anti virus: what is the difference between these all?
    - How can I block all Facebook elements/content?

    How-To Geek Weekly Article Recap

    Have you had a busy week between work and preparing for the holidays? Get caught up on your HTG reading with our hottest articles of the week.

    - 20 Windows Keyboard Shortcuts You Might Not Know
    - The 50 Best Registry Hacks that Make Windows Better
    - LCD? LED? Plasma? The How-To Geek Guide to HDTV Technology
    - HTG Explains: Which Linux File System Should You Choose?
    - How to Use and Customize Google Chrome Web Apps

    One Year Ago on How-To Geek

    This week’s batch of retro geeky goodness is all about customizing Windows 7.

    - ClassicShell Adds Classic Start Menu and Explorer Features to Windows 7
    - Get an Aero-Styled Classic Start Menu in Windows 7
    - Customize the Windows 7 Logon Screen
    - Get the Classic Style Network Activity Indicator Back in Windows 7
    - How To Enable Check Boxes for Items In Windows 7

    The Geek Note

    We would like you to join us in welcoming Jason Fitzpatrick to the writing staff here at How-To Geek. He started with us this past week, so take some time to read through his articles about the Wii, Kindle, & PlayStation 2 peripherals and leave a friendly comment to say “Hi”! Got a great tip to share? Make sure to send it in to us at [email protected]. Photo by real00.

    Read the article

  • Embedding ADF UI Components into OAF regions

    - by Juan Camilo Ruiz
    Having finished the two Webcasts on ADF integration with Oracle E-Business Suite, Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team, and I are going to continue adding entries to the series on this topic, trying to cover as many use cases as possible. In this entry, Sara created an overview of how Oracle ADF pages can be embedded into an Oracle Application Framework region. This is a very interesting approach that will enable those of you who are exploring ADF as a technology stack to enhance some of the Oracle E-Business Suite flows and leverage your skills with Oracle Applications Framework (OAF). In upcoming entries we will start unveiling the internals needed to achieve session sharing between the regions. Stay tuned for more entries and enjoy this new post.

    Document Scope

    This document only covers information that is specific to embedding an Oracle ADF page in an Oracle Application Framework–based page. It assumes knowledge of Oracle ADF and Oracle Application Framework development. It also assumes knowledge of the material in My Oracle Support Note 974949.1, “Oracle E-Business Suite SDK for Java” and My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications".

    Prerequisite Patch

    Download Patch 12726556:R12.FND.B from My Oracle Support and install it. The implementation described below requires Patch 12726556:R12.FND.B to provide the accessors for the ADF page. This patch is required in addition to the Oracle E-Business Suite SDK for Java patch described in My Oracle Support Note 974949.1.

    Development Environments

    You need two different JDeveloper environments: Oracle ADF and OA Framework.

    Oracle ADF Development Environment

    You build your Oracle ADF page using JDeveloper 11g. You should use JDeveloper 11g R1 (the latest is 11.1.1.6.0) if you need to use other products in the Oracle Fusion Middleware stack, such as Oracle WebCenter, Oracle SOA Suite, or BI. You should use JDeveloper 11g R2 (the latest is 11.1.2.3.0) if you do not need other Oracle Fusion Middleware products. JDeveloper 11g R2 is an Oracle ADF-specific release that supports the latest Java EE standards and has various core improvements.

    Oracle Application Framework Development Environment

    Build your OA Framework page using a development environment corresponding to your Oracle E-Business Suite version. You must use Release 12.1.2 or later because the rich content container was introduced in Release 12.1.2. See “OA Framework - How to find the correct version of JDeveloper to use with eBusiness Suite 11i or Release 12.x” (My Oracle Support Doc ID 416708.1).

    Building your Oracle ADF Page

    Typically you build your ADF page using the session management feature of the Oracle E-Business Suite SDK for Java as described in My Oracle Support Note 974949.1. Also see My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications".

    Building an ADF Page with the Hierarchy Viewer

    If you are using the ADF hierarchy viewer, you should set up the structure and settings of the ADF page as follows, or the hierarchy viewer may not fill the entire area it is supposed to fill (especially a problem in Firefox). Create a stretchable component as the parent component for the hierarchy viewer, such as af:panelStretchLayout (underneath the af:form component in the structure).
    Use af:panelStretchLayout for Oracle ADF 11.1.1.6 and earlier. For later versions of Oracle ADF, use af:panelGridLayout. Create your hierarchy viewer component inside the stretchable component.

    Create Function in Oracle E-Business Suite Instance

    In your Oracle E-Business Suite instance, create a function for your ADF page with the following parameters. You can use either the Functions window in the System Administrator responsibility or the Functions page in the Functional Administrator responsibility.

    - Function Name
    - Type=External ADF Function (ADFX)
    - HTML Call=GWY.jsp?targetPage=faces/<your ADF page>

    You must also add your function to an Oracle E-Business Suite menu or permission set and set up function security or role-based access control (RBAC) so that the user has authorization to access the function. If you do not want the function to appear on the navigation menu, add the function without a menu prompt. See the Oracle E-Business Suite System Administrator's Guide Documentation Set for more information.

    Testing the Function from the Oracle E-Business Suite Home Page

    It’s a good idea to test launching your ADF page from the Oracle E-Business Suite Home Page. Add your function to the navigation menu for your responsibility with a prompt and try launching it. If your ADF page expects parameters from the surrounding page, those might not be available, however.

    Setting up the Oracle Application Framework Rich Container

    Once you have built your Oracle ADF 11g page, you need to embed it in your Oracle Application Framework page.

    Create Rich Content Container in your OA Framework JDeveloper environment

    In the OA Extension Structure pane for your OAF page, select the region where you want to add the rich content, and add a richContainer item to the region. Set the following properties on the richContainer item:

    - id
    - Content Type=Others (for Release 12.1.3; this property value may change in a future release)
    - Destination Function=[function code]
    - Width (in pixels or percent, such as 100%)
    - Height (in pixels)
    - Parameters=[any parameters your Oracle ADF page is expecting to receive from the Oracle Application Framework page]

    Parameters

    In the Parameters property, specify parameters that will be passed to the embedded content as a list of comma-separated, name-value pairs. Dynamic parameters may be specified as paramName={@viewAttr}.

    Dynamic Rich Content Container Properties

    If you want your rich content container to display a different Oracle ADF page depending on other information, you would set up a different function for each different Oracle ADF page. You would then set the Destination Function and Parameters properties programmatically, instead of setting them in the Property Inspector.
    In the processRequest() method of your Oracle Application Framework page controller, where OAFRichContentPage is the ID of your richContainer item and the parameters are whatever parameters your ADF page expects, your code might look similar to this code fragment:

        OARichContainerBean richBean = (OARichContainerBean)
            webBean.findChildRecursive("OAFRichContentPage");
        if (richBean != null) {
            if (isFirstCondition) {
                richBean.setFunctionName("ADF_EXAMPLE_EMBEDDED");
                richBean.setParameters("ParamLoginPersonId=" + loginPersonId
                    + "&ParamPersonId=" + personId + "&ParamUserId=" + userId
                    + "&ParamRespId=" + respId + "&ParamRespApplId=" + respApplId
                    + "&ParamFromOA=Y" + "&ParamSecurityGroupId=" + securityGroupId);
            } else if (isSecondCondition) {
                richBean.setFunctionName("ADF_EXAMPLE_OTHER_FUNCTION");
                richBean.setParameters("ParamLoginPersonId=" + loginPersonId
                    + "&ParamPersonId=" + personId + "&ParamUserId=" + userId
                    + "&ParamRespId=" + respId + "&ParamRespApplId=" + respApplId
                    + "&ParamFromOA=Y" + "&ParamSecurityGroupId=" + securityGroupId);
            }
        }

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
    I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store!  Now, some people get all excited about the IDE and its new features or about changes to WPF and Silverlight, and yes, those are all very fine and grand.  But me, I get all excited about things that tend to affect my life on the backside of development.  That’s why when I heard there were going to be concurrent container implementations in the latest version of .NET I was salivating like Pavlov’s dog at the dinner bell.

    They seem so simple, really, that one could easily overlook them.  Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have either been optimized with very efficient, limited, or no locking but are still completely thread safe -- and I just had to see what kind of an improvement that would translate into.

    Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing.  Of course, they also rolled out a whole parallel development framework which I won’t get into in this post but will cover bits and pieces of as time goes by.

    This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism.  So I set about to run a processing test with a series of producers and consumers that would be either processing a traditional System.Collections.Generic.Queue or a System.Collections.Concurrent.ConcurrentQueue.

    Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer.  The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we’re testing in a thread-safe manner:

        internal class Producer
        {
            public int Iterations { get; set; }
            public Action<string> ProduceDelegate { get; set; }

            public void Produce()
            {
                for (int i = 0; i < Iterations; i++)
                {
                    ProduceDelegate("Hello");
                }
            }
        }

    Then likewise, I created a consumer that took a Func<string> that would read from whichever queue we’re testing and return either the string if data exists or null if not.  Then, if the item doesn’t exist, it will do a 10 ms wait before testing again.  Once all the producers are done and join the main thread, a flag will be set in each of the consumers to tell them that once the queue is empty they can shut down, since no other data is coming:

        internal class Consumer
        {
            public Func<string> ConsumeDelegate { get; set; }
            public bool HaltWhenEmpty { get; set; }

            public void Consume()
            {
                bool processing = true;

                while (processing)
                {
                    string result = ConsumeDelegate();

                    if (result == null)
                    {
                        if (HaltWhenEmpty)
                        {
                            processing = false;
                        }
                        else
                        {
                            Thread.Sleep(TimeSpan.FromMilliseconds(10));
                        }
                    }
                    else
                    {
                        DoWork(); // do something non-trivial so consumers lag behind a bit
                    }
                }
            }
        }

    Okay, now that we’ve done that, we can launch threads of varying numbers using lambdas for each different method of production/consumption.
    First let's look at the lambdas for a typical System.Collections.Generic.Queue with locking:

        // lambda for putting to typical Queue with locking...
        Action<string> productionDelegate = s =>
        {
            lock (_mutex)
            {
                _mutexQueue.Enqueue(s);
            }
        };

        // and lambda for typical getting from Queue with locking...
        Func<string> consumptionDelegate = () =>
        {
            lock (_mutex)
            {
                if (_mutexQueue.Count > 0)
                {
                    return _mutexQueue.Dequeue();
                }
            }
            return null;
        };

    Nothing new or interesting here.  Just typical locks on an internal object instance.  Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library:

        // lambda for putting to a ConcurrentQueue, notice it needs no locking!
        Action<string> productionDelegate = s =>
        {
            _concurrentQueue.Enqueue(s);
        };

        // lambda for getting from a ConcurrentQueue, once again, no locking required.
        Func<string> consumptionDelegate = () =>
        {
            string s;
            return _concurrentQueue.TryDequeue(out s) ? s : null;
        };

    So I pass each of these lambdas and the number of producer and consumer threads to launch, and take a look at the timing results.  Basically I’m timing from the time all threads start and begin producing/consuming to the time that all threads rejoin.  I won't bore you with the test code; basically it just launches code that creates the producers and consumers, launches them in their own threads, and then waits for them all to rejoin.  The following are the timings from the start of all threads to the Join() on all threads completing.  The producers create 10,000,000 items evenly between themselves, and when all producers are done they trigger the consumers to stop once the queue is empty.

    These are the results in milliseconds from the ordinary Queue with locking:

        Consumers  Producers       1       2       3   Time (ms)
        ---------  ---------  ------  ------  ------   ---------
                1          1    4284    5153    4226     4554.33
               10         10    4044    3831    5010     4295.00
              100        100    5497    5378    5612     5495.67
             1000       1000   24234   25409   27160    25601.00

    And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary:

        Consumers  Producers       1       2       3   Time (ms)
        ---------  ---------  ------  ------  ------   ---------
                1          1    3647    3643    3718     3669.33
               10         10    2311    2136    2142     2196.33
              100        100    2480    2416    2190     2362.00
             1000       1000    7289    6897    7061     7082.33

    Note that even though obviously 2000 threads is quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly.

    I love the new concurrent collections; they look so much simpler without littering your code with the locking logic, and they perform much better.  All in all, a great new toy to add to your arsenal of multi-threaded processing!
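
    A hedged aside on the Consumer's 10 ms sleep-poll: .NET 4.0 also shipped BlockingCollection<T>, which wraps a ConcurrentQueue<T> by default and blocks consumers until data arrives. A minimal sketch (note the shutdown signaling changes, since CompleteAdding() replaces the HaltWhenEmpty flag):

        // Sketch: producer/consumer without polling, using BlockingCollection<T>.
        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class BlockingSketch
        {
            static void Main()
            {
                var queue = new BlockingCollection<string>(); // ConcurrentQueue under the hood

                var producer = Task.Factory.StartNew(() =>
                {
                    for (int i = 0; i < 1000; i++) queue.Add("Hello");
                    queue.CompleteAdding(); // consumers drain the queue and then stop
                });

                var consumer = Task.Factory.StartNew(() =>
                {
                    // blocks until an item is available; ends when adding is complete
                    foreach (var item in queue.GetConsumingEnumerable())
                    {
                        /* DoWork(item) */
                    }
                });

                Task.WaitAll(producer, consumer);
            }
        }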

    Read the article

  • SharePoint - Summing Calculated Columns By Groups (DVWP)

    - by Mark Rackley
    I had a problem… okay.. okay.. so I have many problems… but let’s focus on one in particular or this blog post would never end… okay? Thank you…. So, I had an electronic timesheet where users entered hours for each day of the week. It also had a “Week Total” column, which was a calculated column of the sum. The calculated column looked like this: Pretty easy.. nothing spectacular. So, what’s the problem? WELL……………….. There is a row in the timesheet for each task a person worked on in a given week. So, if you worked on 4 tasks, you would have 4 rows of data, and 4 week totals for that week. This is all fine and dandy, but I want to know what the total was for the entire week. Yes.. I realize the answer is 24 from my example… I mean, I know how to add! I just want SharePoint to display it for me for the executives (we all know, they have math problems).  You may be thinking, hey genius (in a sarcastic tone of course), why don’t you just go to the view and total on the “Week Total” field. What a brilliant idea! Why didn’t I think of that… let’s go to the view and do just that…. Ohhhhhh… you can’t total on a Calculated Column.. it’s not even an option…  Yeah… I had the same moment. So, what do you do? Well… what do you think I did? 1) Googled “SharePoint total calculated column” 2) Said it couldn’t be done 3) Took a nap 4) Asked the question on Twitter? The correct answer of course is number 4… followed by number 3… although I may have told my boss number 2 so that I look more brilliant than I am? It’s safe to say I did NOT try to find the solution on my own doing step 1… that would be just WAY too easy… So, anyway, I posted the question on Twitter, and it turns out several people had suggestions, from using jQuery to using DVWPs. I tend to be a big fan of the DVWP except for the disgusting process of deploying them to another farm.. ugh… just shoot me…. so, that is the solution I went with. Laura Rogers (@WonderLaura) has a super duper easy to follow video on the subject over at EndUserSharePoint.com: SharePoint: Displaying Calculated Column SUMS in a View (Screencast). Laura’s video was very easy to follow and was ALMOST exactly what I needed. She does a great job walking you through every step of summing up a calculated field, which was PART of my problem. The other part was that my list is grouped by date! So, I wanted to see, for a given week, the summed “Week Total” of hours. Laura got me on the right track with her video, and I dug a little deeper into the DVWP to accomplish my task. So, here are the steps you follow:

    1. Click on the “chevron” (I didn’t know it was actually called that until I heard Laura say it).. I always call it the “little-button-in-the-top-right-corner-with-the-greater-than-sign”.. but “chevron” is much shorter. So, click on the chevron, click on “Sort and Group”, then add the field you want to group by; in my example it is the “Monday Date” of the timesheet entry. Make sure to check the check boxes for “Show Group Header” AND “Show Group Footer”. Click “OK”. The view now shows the count of each grouped set of data.

    Interesting, this looks very similar to Laura’s video… right? So, let’s take a look at the code for the Count:

        Count : <xsl:value-of select="count($nodeset)" />

    Wow, also very similar… except in Laura’s video it looks like:

        Count : <xsl:value-of select="count($Rows)" />

    So.. the only difference is that instead of $Rows we have $nodeset. It turns out the $nodeset will go through each row in the group just like $Rows goes through each row in the entire view.
    So, using the exact same logic as in Laura’s blog except replacing $Rows with $nodeset, we get the ability to sum up the values for a group. I want to replace “Count: #” with the total hours, which is done with the following change to the above code:

        Week Total : <xsl:value-of select="sum($nodeset/@Monday)+sum($nodeset/@Tuesday)
            +sum($nodeset/@Wednesday)+sum($nodeset/@Thursday)+sum($nodeset/@Friday)
            +sum($nodeset/@Saturday)+sum($nodeset/@Sunday)" />

    Our final output has the summed hours for each group! So… long story short… follow Laura’s blog, then group your list, then replace “$Rows” with “$nodeset”. One caveat: this will not work if you group by a person field. For some reason the person field does not go through each row in the group. I haven’t dug into this much yet. Maybe if I find some time… whatever that is… Anyway, Laura did all the work, I just took it one small step forward… as always, feel free to leave any additional insights you may have. We’re all learning here!

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking.  The results were exactly what I would have hoped: the ConcurrentQueue was faster with multi-threading for most all situations.  In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified.

    So I set out to see what the improvements would be for the ConcurrentDictionary; would it have the same performance benefits as the ConcurrentQueue did?  Well, after running some tests and multiple tweaks and tunes, I have good and bad news.

    But first, let's look at the tests.  Obviously there's many things we can do with a dictionary.  One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache.  So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor.  This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit").  However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss").  So here's the Accessor that will run the tests:

        internal class Accessor
        {
            public int Hits { get; set; }
            public int Misses { get; set; }
            public Func<int, string> GetDelegate { get; set; }
            public Action<int, string> AddDelegate { get; set; }
            public int Iterations { get; set; }
            public int MaxRange { get; set; }
            public int Seed { get; set; }

            public void Access()
            {
                var randomGenerator = new Random(Seed);

                for (int i = 0; i < Iterations; i++)
                {
                    // give a wide spread so will have some duplicates and some unique
                    var target = randomGenerator.Next(1, MaxRange);

                    // attempt to grab the item from the cache
                    var result = GetDelegate(target);

                    // if the item doesn't exist, add it
                    if (result == null)
                    {
                        AddDelegate(target, target.ToString());
                        Misses++;
                    }
                    else
                    {
                        Hits++;
                    }
                }
            }
        }

    Note that so I could test different implementations, I defined a GetDelegate and AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test:

    - Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object.
    - Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access.
    - ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely.

    So the approach to each of these is also fairly straight-forward.  Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock:

        Action<int, string> addDelegate = (key, val) =>
        {
            lock (_mutex)
            {
                _dictionary[key] = val;
            }
        };
        Func<int, string> getDelegate = (key) =>
        {
            lock (_mutex)
            {
                string val;
                return _dictionary.TryGetValue(key, out val) ? val : null;
            }
        };

    Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary.
    Now, for the Dictionary with ReaderWriterLockSlim it's a little more complex:

        Action<int, string> addDelegate = (key, val) =>
        {
            _readerWriterLock.EnterWriteLock();
            _dictionary[key] = val;
            _readerWriterLock.ExitWriteLock();
        };
        Func<int, string> getDelegate = (key) =>
        {
            string val;
            _readerWriterLock.EnterReadLock();
            if (!_dictionary.TryGetValue(key, out val))
            {
                val = null;
            }
            _readerWriterLock.ExitReadLock();
            return val;
        };

    And finally, the ConcurrentDictionary, which since it does all its own concurrency control, is remarkably elegant and simple:

        Action<int, string> addDelegate = (key, val) =>
        {
            _concurrentDictionary[key] = val;
        };
        Func<int, string> getDelegate = (key) =>
        {
            string s;
            return _concurrentDictionary.TryGetValue(key, out s) ? s : null;
        };

    Then I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete.  Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances.  All times are in milliseconds.

        Dictionary with Mutex Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
                1             7916           3285
               10             8293           3481
              100             8799           3532
             1000             8815           3584

        Dictionary with ReaderWriterLockSlim Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
                1             8445           3624
               10            11002           4119
              100            11076           3992
             1000            14794           4861

        Concurrent Dictionary
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
                1            17443           3726
               10            14181           1897
              100            15141           1994
             1000            17209           2128

    The first test I did across the board is the Mostly Misses category.  The mostly misses case (more adds, because the data requested was not in the dictionary) shows an interesting trend: in both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution.  But this got me thinking, and a little research seemed to confirm it: maybe the ConcurrentDictionary is more optimized for concurrent "gets" than "adds".  So since the ratio of misses to hits was 2 to 1, I decided to reverse that and see the results.

    So I tweaked the data so that the number of keys was much smaller than the number of iterations, to give me about a 2 to 1 ratio of hits to misses (twice as likely to already find the item in the cache than to need to add it).  And yes, indeed here we see that the ConcurrentDictionary is faster than the standard Dictionary.  I have a strong feeling that as the ratio of hits to misses gets higher and higher, these numbers get even better as well.  This makes sense, since the ConcurrentDictionary is read-optimized.

    Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement; I think this is largely because on the 10,000,000-access test it quickly ramped up to the correct capacity and concurrency, and thus the impact was limited to the first few milliseconds of the run.

    So what does this tell us?  Well, as in all things, ConcurrentDictionary is not a panacea.  It won't solve all your woes and it shouldn't be the only Dictionary you ever use.  So when should we use each?

    Use System.Collections.Generic.Dictionary when:

    - You need a single-threaded Dictionary (no locking needed).
    - You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed).
    - You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed).

    And use System.Collections.Concurrent.ConcurrentDictionary when:

    - You need a multi-threaded Dictionary where reads are far more prevalent than writes.
    - You need to be able to iterate over the collection without locking it, even if it's being modified.

    Both Dictionaries have their strong suits. I have a feeling this is just one where you need to know from your design what you hope to use it for, and make your decision based on that criteria.
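
    One related API worth knowing (a hedged aside; the tests above use TryGetValue plus the indexer): ConcurrentDictionary also exposes GetOrAdd, which collapses the check-then-insert cache idiom into a single call and removes the race where two threads both miss and both add. A minimal sketch:

        // Sketch: the Accessor's miss/hit logic expressed with GetOrAdd.
        using System;
        using System.Collections.Concurrent;

        class GetOrAddSketch
        {
            private static readonly ConcurrentDictionary<int, string> cache =
                new ConcurrentDictionary<int, string>();

            static string Lookup(int key)
            {
                // the value factory runs only on a miss; note it may run more than
                // once under contention, but only one result is ever stored
                return cache.GetOrAdd(key, k => k.ToString());
            }
        }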

    Read the article

  • Tips on installing Visual Studio 2010 SP1

    - by Jon Galloway
    Visual Studio SP1 went up on MSDN downloads (here) on March 8, and will be released publicly on March 10 here. Release announcements: Soma: Visual Studio 2010 enhancements; Jason Zander: Announcing Visual Studio 2010 Service Pack 1.

    I started on this post with tips on installing VS2010 SP1 when I realized I’ve been writing these up for Visual Studio and .NET Framework SP releases for a while (e.g. VS2008 / .NET 3.5 SP1 post, VS2005 SP1 post). Looking back over the years of Visual Studio SP installs (and remembering when we’d get up to SP6 for a Visual Studio release), I’m happy to see that it just keeps getting easier. Service Packs are a lot less finicky about requiring beta software to be uninstalled, install more quickly, and are just generally a lot less scary. If I can’t have a jetpack, at least my future provided me faster, easier service packs.

    Disclaimer: These tips are just general things I've picked up over the years. I don't have any inside knowledge here. If you see anything wrong, be sure to let me know in the comments. You may want to check the readme file before installing - it's short, and it's in that new-fangled HTML format. On with the tips!

    Before starting, uninstall Visual Studio features you don't use
    Visual Studio service packs (and other Microsoft service packs as well) install patches for the specific features you’ve got installed. This is a big reason to always do a custom install when you first install Visual Studio, but it’s not difficult to update your existing installation. Here’s the quick way to do that: Tap the Windows key and type “add or remove programs” and press Enter (or click on the “Add or remove programs” link if you must). Type “Visual Studio 2010” in the search box in the upper right corner, click on the Visual Studio program (the one with the VS infinity-looking logo) and click on Uninstall/Change. Click on Add or Remove Features. The next part’s up to you – what features do you actually use? I’ve been doing primarily ASP.NET MVC development in C# lately, so I selected Visual C# and Visual Web Developer. Remember that you can install features later if needed, and can also install the express versions if you want. Selecting everything just because it’s there - or you paid for it – means that you install updates for everything, every time. When you’ve made your changes, click on the Update button to uninstall unused features.

    Shut down all instances of Visual Studio
    It probably goes without saying that you should close a program down before installing it, partly to avoid the file-in-use-reboot-after-install horror. Additional "hunch / works on my machine" quality tip: On one computer I saw a note in the setup log about a prompt for user input to close Visual Studio, although I never saw the prompt. Just to be sure, I'd personally open up Task Manager and kill any devenv.exe processes I saw running, as it couldn't hurt.

    Use the web installer
    I use the web installers whenever possible. There’s no point in downloading the DVD unless you’re doing multiple installs or won’t have internet access. The DVD ISO is 1.5GB, since it needs to be able to service every possible supported installation option on both x86 and x64. The web installer is 776 KB (smaller than calc.exe), so you can start the installation right away. Like other web installers, the real benefit is that it only installs the updates you need (hence the reason for step 1 – uninstalling unused components). Instead of 1.5GB, my download was roughly 530MB.
    If you’re installing from MSDN (this link takes you right to the Visual Studio installs), select the first one on the list. The first step in the installation process is to analyze the machine configuration and tell you what needs to be installed. Since I've trimmed down my features, that's a pretty short list. The time's not far off when I may not install SQL Server on my dev machines at all, just using SQL Server Compact - that would shorten the list further. When I hit Next, you can see that the download size has shrunk considerably. When I start the install, note that the installation begins while other components are downloading - another benefit of the web install. On my mid-range desktop machine, the install took 25 minutes.

    What if it takes longer? According to Heath Stewart (Visual Studio installer guru), average SP1 installs take roughly 45 minutes. An installation which takes hours to complete may be a sign of a problem: see his post Visual Studio 2010 Service Pack 1 installing for over 2 hours could be a sign of a problem.

    Why so long? Yes, even 25 minutes is a while. Heath's got another blog post explaining why the update can take longer than the initial install (see: A patch may take as long or longer to install than the target product), which explains all the additional steps and complexities a patch needs to deal with, as well as some steps that deployment authors can take to mitigate the impact.

    Other things to know about Visual Studio 2010 SP1

    Installs over Visual Studio 2010 SP1 Beta
    That's nice. Previous Visual Studio versions did a number of annoying things when you installed SPs over betas - fail with weird errors, get part way through and tell you you needed to cancel and uninstall first, etc. I've installed this on two machines that had random beta stuff installed, without tears.

    That Readme file you didn't read
    I mentioned the readme file earlier (http://go.microsoft.com/fwlink/?LinkId=210711). Some interesting things I picked up in there:
    2.1.3. Visual Studio 2010 Service Pack 1 installation may fail when a USB drive or other removable drive is connected
    2.1.4. Visual Studio must be restarted after Visual Studio 2010 SP1 tooling for SQL Server Compact (Compact) 4.0 is installed
    2.2.1. If Visual Studio 2010 Service Pack 1 is uninstalled, Visual Studio 2010 must be reinstalled to restore certain components
    2.2.2. If Visual Studio 2010 Service Pack 1 is uninstalled, Visual Studio 2010 must be reinstalled before SP1 can be installed again
    2.4.3.1. Async CTP: If you installed the pre-SP1 version of the Async CTP but did not uninstall it before you installed Visual Studio 2010 SP1, then your computer will be in a state in which the version of the C# compiler in the .NET Framework does not match the C# compiler in Visual Studio. To resolve this issue: after you install Visual Studio 2010 SP1, reinstall the SP1 version of the Async CTP from here.

    Hardware acceleration for Visual Studio is disabled on Windows XP
    Visual Studio 2010 SP1 disables hardware acceleration when running on Windows XP (only on XP). You can turn it back on in the Visual Studio options, under Environment / General, as shown below. See Jason Zander's post titled Performance Troubleshooting Article and VS2010 SP1 Change.

    Read the article

  • Conversation as User Assistance

    - by ultan o'broin
    Applications User Experience members (Erika Web, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Anne Gentle (left) with Applications User Experience Senior Director Laurie Pattison.

    In Anne's book, Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members!

    The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions."

    Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include:
    - Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products, and Adobe's community help
    - The Microsoft Development Network Community Center
    - The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby, and other community approaches to engage diverse audiences using screencasts, wikis, and blogs
    - Cisco's customer support wiki and EMC's community, as well as Symantec's and Intuit's approaches
    - The efforts of Ubuntu, Mozilla, and the FLOSS community generally

    Adobe Writer Bob Bringhurst's Blog

    Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes reach out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too.

    But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. It is largely a case now of "inventing the job while you're doing it, instead of being hired for it," Anne said.
    There is less emphasis on formal titles. Anne mentions that her own title is "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, and rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators based on the amount of community content delivered by people outside the organization!

    To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support.

    Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model.

    The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford to ignore the model's possibilities. The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced.
It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th), Layouts with Razor (Oct 22nd), Server-Side Comments with Razor (Nov 12th), Razor’s @: and <text> syntax (Dec 15th), and Implicit and Explicit code nuggets with Razor (today). In today’s post I’m going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walk through some code examples of each of them.

    Fluid Coding with Razor
    ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post. Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, a Razor snippet can be used to iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages (the original post shows the snippet and its rendered output as screenshots). Notice how we were able to embed two code nuggets within the content of the foreach loop.  One of them outputs the name of the Product, and the other embeds the ProductID within a hyperlink.  Notice that we didn’t have to explicitly wrap these code nuggets - Razor was instead smart enough to implicitly identify where the code began and ended in both of these situations.

    How Razor Enables Implicit Code Nuggets
    Razor does not define its own language.  Instead, the code you write within Razor code nuggets is standard C# or VB.  This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar. The Razor parser has smarts built into it so that whenever possible you do not need to explicitly mark the end of the C#/VB code nuggets you write.  This makes coding more fluid and productive, and enables a nice, clean, concise template syntax.  Below are a few scenarios that Razor supports where you can avoid having to explicitly mark the beginning/end of a code nugget, and instead have Razor implicitly identify the code nugget scope for you:

    Property Access: Razor allows you to output a variable value, or a sub-property on a variable that is referenced via “dot” notation.  You can also use “dot” notation to access sub-properties multiple levels deep.
    Array/Collection Indexing: Razor allows you to index into collections or arrays.
    Calling Methods: Razor also allows you to invoke methods.

    Notice how, for all of the scenarios above, we did not have to explicitly end the code nugget.  Razor was able to implicitly identify the end of the code block for us.
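    Since the original post showed its snippets as screenshots, here is a representative sketch of the implicit scenarios above. It is illustrative only; the model and its Name, ProductID, and similar members are hypothetical stand-ins, not the exact code from the post:

    <ul>
        @foreach (var product in Model.Products) {
            @* Two implicit code nuggets: one in the href, one as the link text. *@
            <li><a href="/Products/@product.ProductID">@product.Name</a></li>
        }
    </ul>

    <p>Company: @Model.CompanyName</p>              @* property access *@
    <p>City: @Model.Address.City</p>                @* "dot" notation, multiple levels deep *@
    <p>First product: @Model.Products[0].Name</p>   @* collection indexing *@
    <p>Product count: @Model.Products.Count()</p>   @* method call *@

    In each case the Razor parser stops the code nugget at the first character that cannot be part of the expression (a closing tag, whitespace, and so on), so no closing delimiter is needed.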
    Razor’s Parsing Algorithm for Code Nuggets
    The below algorithm captures the core parsing logic we use to support “@” expressions within Razor, and to enable the implicit code nugget scenarios above:
    1. Parse an identifier - as soon as we see a character that isn't valid in a C# or VB identifier, we stop and move to step 2.
    2. Check for brackets - if we see "(" or "[", go to step 2.1; otherwise, go to step 3.
       2.1. Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" we see in strings or comments), then go back to step 2.
    3. Check for a "." - if we see one, go to step 3.1; otherwise, DO NOT ACCEPT THE "." as code, and go to step 4.
       3.1. If the character AFTER the "." is a valid identifier, accept the "." and go back to step 1; otherwise, go to step 4.
    4. Done!

    Differentiating between code and content
    Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement, and when it should instead be treated as static content. In the snippet shown in the original post we have ? and ! characters at the end of our code nuggets.  These are both legal C# identifiers – but Razor is able to implicitly identify that they should be treated as static string content as opposed to being part of the code expression, because there is whitespace after them.  This is pretty cool and saves us keystrokes.

    Explicit Code Nuggets in Razor
    Razor is smart enough to implicitly identify a lot of code nugget scenarios.  But there are still times when you want/need to be more explicit in how you scope the code nugget expression.  The @(expression) syntax allows you to do this. You can write any C#/VB code statement you want within the @() syntax.  Razor will treat the wrapping () characters as the explicit scope of the code nugget statement.  Below are a few scenarios where we could use the explicit code nugget feature:

    Perform Arithmetic Calculation/Modification: You can perform arithmetic calculations within an explicit code nugget.
    Appending Text to a Code Expression Result: You can use the explicit expression syntax to append static text at the end of a code nugget without having to worry about it being incorrectly parsed as code. For example, we can embed a code nugget within an <img> element’s src attribute, which allows us to link to images with URLs like “/Images/Beverages.jpg”.  Without the explicit parentheses, Razor would have looked for a “.jpg” property on the CategoryName (and raised an error).  By being explicit we can clearly denote where the code ends and the text begins.
    Using Generics and Lambdas: Explicit expressions also allow us to use generic types and generic methods within code expressions – and keep the <> characters in generics from being ambiguous with tag elements.
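    Again, the original screenshots are not reproduced here, so the following is a hedged sketch of the explicit @() scenarios above; the item variable and its Price and CategoryName properties are hypothetical:

    @* Arithmetic: without the parentheses, " * 1.08" would be treated as static text. *@
    <span>Price with tax: @(item.Price * 1.08)</span>

    @* Appending text: the parentheses stop Razor from looking for a ".jpg" property. *@
    <img src="/Images/@(item.CategoryName).jpg" alt="@item.CategoryName" />

    @* Generics: the parentheses keep the <> from being read as a tag element. *@
    <p>@(new List<int> { 1, 2, 3 }.Count)</p>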
    One More Thing… Intellisense within Attributes
    We have used code nuggets within HTML attributes in several of the examples above.  One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this: the original post shows C# code intellisense for an implicit code nugget within an <a> href=”” attribute, and for an explicit code nugget embedded in the middle of an <img> src=”” attribute.  Notice that we get full code intellisense for both scenarios – despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn’t support). This makes writing code even easier, and ensures that you can take advantage of intellisense everywhere.

    Summary
    Razor enables a clean and concise templating syntax that supports a very fluid coding workflow.  Razor’s ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using the @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup.

    Hope this helps,
    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • BNF – how to read syntax?

    - by Piotr Rodak
    A few days ago I read a post by Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it’s not exciting or sexy. I noticed that many of the questions that appear on forums and other media arise from the pure fact that the people asking them didn’t bother to read and understand the manual – Books Online. Jen came up with a brilliant, concise acronym that describes very well the category of posts about Books Online – RTFM365. I take the liberty of tagging this post with the same acronym.

    I often come across questions of the type – ‘Hey, I am trying to create a table, but I am getting an error’. The error often says that the syntax is invalid.

    1 CREATE TABLE dbo.Employees
    2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL,
    3 Employee_Name varchar(60)
    4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    The answer is usually(1), ‘Ok, let me check it out... Ah yes – you have to put the name of the DEFAULT constraint before the constraint type:

    1 CREATE TABLE dbo.Employees
    2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
    3 Employee_Name varchar(60)
    4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    Why do many people stumble on syntax errors? Is the syntax poorly documented? No. The issue is that the correct syntax of the CREATE TABLE statement is documented very well in Books Online and is... intimidating. Many people can be taken aback by the rather complex block of code that describes all the intricacies of the statement. However, I don’t know a better way of defining the syntax of a statement or command.

    The notation used to describe syntax in Books Online is a form of Backus-Naur notation, sometimes called BNF for short. This is a notation that was invented around 50 years ago, and some say even earlier, around 400 BC – would you believe? Originally it was used to define the syntax of the, rather ancient now, ALGOL programming language (in the 1950’s, not in ancient India). If you look closer at the definition of BNF, it turns out that the principles of this syntax are pretty simple. Here are a few bullet points:

    - italic_text is a placeholder for your identifier
    - <italic_text_in_angle_brackets> is a definition which is described further
    - [everything in square brackets] is optional
    - {everything in curly brackets} is obligatory
    - everything | separated | by | the pipe operator is an alternative
    - ::= “assigns” a definition to an identifier

    Yes, it looks like these six simple points give you the key to understanding even the most complicated syntax definitions in Books Online. Books Online contains an article about syntax conventions – have you ever read it? Let’s have a look at a fragment of the CREATE TABLE statement:

    1 CREATE TABLE
    2 [ database_name . [ schema_name ] . | schema_name . ] table_name
    3 ( { <column_definition> | <computed_column_definition>
    4 | <column_set_definition> }
    5 [ <table_constraint> ] [ ,...n ] )
    6 [ ON { partition_scheme_name ( partition_column_name ) | filegroup
    7 | "default" } ]
    8 [ { TEXTIMAGE_ON { filegroup | "default" } ]
    9 [ FILESTREAM_ON { partition_scheme_name | filegroup
    10 | "default" } ]
    11 [ WITH ( <table_option> [ ,...n ] ) ]
    12 [ ; ]

    Let’s look at line 2 of the above snippet. This line uses rules 3 and 5 from the list, so you know that you can create a table whose name is specified in one of the following ways:
    - just the table name – the table will be created in the default user schema
    - schema name and table name – the table will be created in the specified schema
    - database name, schema name and table name – the table will be created in the specified database, in the specified schema
    - database name, .., table name – the table will be created in the specified database, in the default schema of the user

    Note that this single line of the notation describes each of the naming schemes in a deterministic way. The ‘optionality’ of the schema_name element is nested within the database_name.. section. You can use either database_name with an optional schema name, or just the schema name – this is specified by the pipe character ‘|’.

    The error that the user gets when executing the first script fragment in this post is as follows:

    Msg 156, Level 15, State 1, Line 2
    Incorrect syntax near the keyword 'DEFAULT'.

    Ok, let’s have a look at how to find the correct syntax. Line number 3 of the BNF fragment above contains a reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to a notion described further in the code. And indeed, the very next fragment of BNF contains the syntax of the column definition.

    1 <column_definition> ::=
    2 column_name <data_type>
    3 [ FILESTREAM ]
    4 [ COLLATE collation_name ]
    5 [ NULL | NOT NULL ]
    6 [
    7 [ CONSTRAINT constraint_name ] DEFAULT constant_expression ]
    8 | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ]
    9 ]
    10 [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ]
    11 [ SPARSE ]

    Look at line 7 in the above fragment. It says that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with the [CONSTRAINT constraint_name] sequence. The name of the constraint is optional, but I strongly recommend that you make the effort of coming up with a meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is like this:

    1 CREATE TABLE dbo.Employees
    2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
    3 Employee_Name varchar(60)
    4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

    That is practically everything you should know about BNF. I encourage you to study the syntax definitions for various statements and commands in Books Online; you can find really interesting things hidden there.

    Technorati Tags: SQL Server, t-sql, BNF, syntax

    (1) No, my answer usually is a question – ‘What error message? What does it say?’. You’d be surprised to know how many people think I can go through time and space and look at their screen at the moment they received the error.

    Read the article

  • SQLAuthority News – Pluralsight Course Review – Practices for Software Startups – Part 1 of 2

    - by pinaldave
    This is the first part of a two-part series on the Practices for Software Startups Pluralsight course. The course is written by Stephen Forte (Blog | Twitter). Stephen Forte is the Chief Strategy Officer of the venture-backed company Telerik, a leading vendor of developer and team productivity tools. Stephen is also a Certified Scrum Master, Certified Scrum Professional, PMP, and speaks regularly at industry conferences around the world. He has written several books on application and database development.  Stephen is also a board member of the Scrum Alliance.

    Startups – Everybody's Dream
    Start-up companies are an important topic right now – everyone wants to start their own business.  It is also important to remember that all companies were start-ups at one point – from your corner store to giants like Microsoft and Apple.  Research shows that not every start-up succeeds; in fact, most will fail before their first year.  There are many reasons for this, partly because there are many stages to a start-up company, and stumbling at any of these stages can lead to failure.  It is important to understand what allows a start-up company to clear all these hurdles and become successful.  It is even important to define success.  For most start-ups this would mean becoming their own independently functioning company or being bought out for a hefty profit by a larger company.  The idea of making a hefty profit by living your dream is extremely important, and you can even think of start-ups as the new craze.  That’s why studying them is so important – they are very popular, but things have changed a lot since their inception.

    Starting the Startups
    Beginning a start-up company used to be difficult, but now facilities and information are widely available, and it is much easier.  But that means it is much easier to fail, also.  Previously, to start your own company, everything was planned and organized, and resources were ensured and backed up before beginning; even the idea of starting your own business was a big thing.  Now anybody can do it, and the steps are simple and outlined everywhere – you can get online software and easily outsource, cloud-source, or crowdsource a lot of your material.  But without the type of planning previously required, things can often go badly.

    New Products – New Ideas – New World
    There are so many fantastic new products, but they don’t reach success all the time.  I find start-up companies very interesting, and whenever I meet someone who is interested in the subject or already starting their own company, I always ask what they are doing, their plans, goals, market, etc.  I am sorry to say that in most cases, they cannot answer my questions.  It is true that many fantastic ideas fail because of bad decisions.  These bad decisions were not made intentionally; people were simply unaware of what they should be doing.  This will always lead to failure.  But I am happy to say that all these issues can be gone, because Pluralsight is now offering a course all about start-ups by Stephen Forte.  Stephen is a start-up leader.  He has successfully started many companies and most are still going strong, or have gone on to even bigger and better things.

    Beginning the Course on Startups
    I have always thought start-ups are a fascinating subject, and decided to take his course, but it is three hours long.
    This would be hard to fit into my busy work day all at once, so I decided to do half of his course before my daughter wakes up, and the other half after she goes to sleep.  The course is divided into six modules, so this would be easy to do.  I began the first chapter early in the morning, at 5 am.  Stephen jumped right into the middle of the subject in the very first module – designing your business plan.

    The first question you will have to answer to yourself, to others, and to investors is: what is your product and when will we be able to see it?  So a very important concept is a “minimum viable product.”  This means setting goals for yourself and your product.  We all have large dreams, but your minimum viable product doesn’t have to be your final vision at the very first.  For example: Apple is a giant company, but it is still evolving.  Steve Jobs didn’t envision the iPhone 6 at the very beginning.  He had to start at the first iPhone and do his market research, and the idea evolved into the technology you see now.  So for yourself, you should decide a beginning and a stop point.  Do your market research.  Determine who you want to reach, what audience you want for your product.  You can have a great idea that simply will not work in the market, due to lack of need, bottlenecks, lack of resources, or competition.  There is a lot of research that needs to be done before you even write a business plan, and Stephen covers it in the very first chapter.

    The Team – Unique Key to Success
    After jumping right into the subject in the very first module, I wondered what Stephen could have in store for me for the rest of the course.  Chapter number two is about building a team.  Having a team is important regardless of what your startup is.  You can be a true visionary with endless ideas and energy, but one person still cannot do everything.  It is important to decide from the very beginning if you will have cofounders and team leaders, and how many employees you’ll need.  Even more important, you’ll need to decide what kind of team you want – what personalities, skills, and type of energy you want each of your employees to bring.  Do you want to have an A+ team with a B- idea, or do you have a B- idea that needs an A+ team to sell it?  Stephen asks all the hard questions!  I was especially impressed by his insight on developers: you have to decide if you need developers, how many, and what their skills should be. I found this insight extremely useful for everyday usage, not just for start-up companies.  I would apply this kind of information to management in any position.  An amazing team will build an amazing product – and that holds whether you’re a start-up company or a small team working for a much larger business.

    Customer Development – The Ultimate Objective
    Chapter three was about customer development. According to Stephen, there are four different steps to developing a customer base.  The first question to ask yourself is whether you are envisioning a large customer base buying a few products each, or a small, dedicated base that buys a lot of your product – quantity vs. quality.  He also discusses how to earn, retain, and get more customers.  He also says that each customer should be placed in a different role – some will be like investors, who regularly spend with you and invest their money in your business.  It is then your job to take that investment and turn it into a better product in the future.  You need to deal with their money properly – think of it as theirs as investors, not yours as profit.
    At the end of this module I felt that only Stephen could provide this kind of insight, and then he listed all the resources he took his information from.  I have never seen a group of people so passionate about their customers. It was indeed a long day for me. In tomorrow’s part 2 we will discuss the remaining three modules and also see a quick video of the Practices for Software Startups Pluralsight course. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Make your CHM Help Files show HTML5 and CSS3 content

    - by Rick Strahl
    The HTML Help 1.0 specification, aka CHM files, is pretty old. In fact, it's practically ancient: it arrived in 1997, alongside Internet Explorer 4. HTML Help 1.0 is basically a completely HTML-based help system that uses a Help Viewer which internally uses Internet Explorer to render the HTML Help content. Because of its use of the Internet Explorer shell for rendering, there were many security issues in the past, which resulted in a locking down of the Web Browser control in Windows and also of the Help engine, and that caused some unfortunate side effects. Even so, CHM continues to be a popular help format because it is very easy to produce content for it, using plain HTML, and because it works with many Windows application platforms out of the box. While there have been various attempts to replace CHM help files, CHM still seems to be a popular choice for many applications to display their help systems. The biggest alternative these days is no system-based help at all, but links to online documentation. For Windows apps, though, it's still very common to see CHM help files, and there are still a ton of CHM help files out there and lots of tools (including our own West Wind Html Help Builder) that produce output for CHM files as well as Web output.

    Image is Everything and you ain't got it!
    One problem with the CHM engine is that it's stuck with an ancient Internet Explorer version for rendering. For example, if you have help content that uses HTML5 or CSS3 content, you might have an HTML Help topic like the one shown in the original post in a full Web Browser instance of Internet Explorer. The page clearly uses some CSS3 features like rounded corners and box shadows that are rendered using plain CSS3. Note that I used Internet Explorer on purpose here to demonstrate that IE9 on Windows 7 can properly render this content using some of the new features of CSS, but the same is true for all other recent versions of the major browsers (FireFox 3.1+, Safari 4.5+, WebKit 9+, etc.). Unfortunately, if you take this nice and simple CSS3 content and run it through the HTML Help compiler to produce a CHM file, the resulting output on the same machine looks a bit less flashy: all the CSS3 styling is gone. The page still displays and functions, but the extra styling features are lost - even though I am running this on a Windows 7 machine that has IE9, which should be able to render these CSS features. Bummer.

    Web Browser Control - perpetually stuck in IE 7 Mode
    The problem is the Web Browser/Shell Components in Windows. This component is, and has been, part of Windows for as long as Internet Explorer has been around, but the Web Browser control hasn't kept up with the latest versions of IE. In a nutshell, the control is stuck in IE7 rendering mode by default, for engine compatibility reasons. However, there is at least one way to fix this explicitly, using Registry keys on a per-application basis. The key point from that blog article is that you can override the IE rendering engine for a particular executable by setting one (or more) registry flags that tell the Windows Shell which version of the Internet Explorer rendering engine to load. An application that wishes to use a more recent version of Internet Explorer can then register itself during installation for the specific IE version desired, and from then on the application will use that version of the Web Browser component.
    If the application is older than the specified version, it falls back to the default version (IE 7 rendering).

    Forcing CHM files to display with IE9 (or later) Rendering
    Knowing that we can force the IE usage for a given process, it's also possible to affect the CHM rendering by setting the same keys on the executable that's hosting the CHM file. What that executable file is depends on the type of application, as there are a number of ways to launch the help engine:

    hh.exe - The standalone Windows CHM Help Viewer that launches when you open a CHM from Windows Explorer. You can manually add hh.exe to the registry keys.
    YourApplication.exe - If you're using .NET or any tool that internally uses the hhControl ActiveX control to launch help content, your application is your host. You should add your application's exe to the registry during application startup.
    foxhhelp9.exe - If you're building a FoxPro application that uses the built-in help features, foxhhelp9.exe is used to actually host the help controls. Make sure to add this executable to the registry.

    What to set
    You can configure the Internet Explorer version used for an application in the registry by specifying the executable file name and a value that specifies the IE version desired. There are two different sets of keys for 32 bit and 64 bit applications.

    32 bit only or 64 bit:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
    Value Key: hh.exe

    32 bit on 64 bit machine:
    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
    Value Key: hh.exe

    Note that it's best to always set both values, ideally when you install your application, so it works regardless of which platform you run on. The value specified is a DWORD value, and the interesting values are decimal 9000 for IE9 rendering mode depending on !DOCTYPE settings, or 9999 for IE9 standards mode always. You can use the same logic with 8000 and 8888 for IE8, and the final value of 7000 for IE7 (one has to wonder what they're going to do for version 10 to perpetuate that pattern). I think 9000 is the value you'd most likely want to use: 9000 means that IE9 will be used for rendering, but unless the right doctypes are used (XHTML and HTML5 specifically), IE will still fall back into quirks mode as needed. This should allow existing pages to continue to use the fallback engine, while new pages that have the proper HTML doctype set can take advantage of the newest features. Here's an example of how I set the registry keys in my Tarma Installmate registry configuration: note that I set all three values both under the Software and Wow6432Node keys so that this works regardless of where these EXEs are launched from. Even though all the apps are 32 bit apps, the 64 bit key (the default one shown selected) is often used. So, once I've set the registry key for hh.exe, I can launch my CHM help file from Explorer and see the CSS3, IE9-rendered display.
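    As a companion to the hh.exe value just described, here is a minimal C# sketch of writing it from code. It is illustrative only: it sets the decimal 9000 value from the article and assumes the process has rights to write to HKEY_LOCAL_MACHINE (an installer would normally do this instead):

    using Microsoft.Win32;

    static class BrowserEmulation
    {
        public static void EnableIe9Rendering()
        {
            const string keyPath =
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";

            // 9000 = IE9 rendering, honoring the page's !DOCTYPE (see above).
            Registry.SetValue(keyPath, "hh.exe", 9000, RegistryValueKind.DWord);
        }
    }

    On a 64 bit machine you would repeat the call with the Wow6432Node path from the article so that both views of the registry agree.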
    Summary
    It sucks that we have to go through all these hoops to get what should be natural behavior: an application supporting the latest features available on a system. But it shouldn't be a surprise - the Windows Help team (if there even is such a thing) has not been known for forward-looking technologies. It's a pretty big hassle that we have to resort to setting registry keys in order to get the Web Browser control and the internal CHM engine to render properly, but at least it's possible to make it work after all. Using this technique, it's possible to ship an application with a help file and allow your CHM help to display with richer CSS markup and correct rendering, using the stricter and more consistent XHTML or HTML5 doctypes. If you provide both Web help and in-application help (and why not, if you're building from a single source), you can now side-step the issue of your customers asking: why does my help file look so much shittier than the online help… No more!

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, Help, Html Help Builder, Internet Explorer, Windows

    Read the article

  • Understanding 400 Bad Request Exception

    - by imran_ku07
    Introduction:
    Why am I getting this exception? What is the cause of this error? Developers are always curious to know the root cause of an exception, even after they have found a solution elsewhere. So what is the reason for this exception (400 Bad Request)? The answer is security. Security is an important feature for any application, and ASP.NET tries its best to give you as secure an application environment as possible. One important security feature is related to URLs, because there are various ways a hacker can try to access server resources; therefore it is important to make your application as secure as possible. Fortunately, ASP.NET provides this security by throwing a Bad Request exception whenever it deems a request suspicious. In this article I will try to present when ASP.NET throws this exception. You will also see some new ASP.NET 4 features which give developers some control over this situation.

    Description:

    http.sys Restrictions:
    It is interesting to note that after deploying your application on a Windows server that runs IIS 6 or higher, the first receptionist of an HTTP request is the kernel-mode HTTP driver: http.sys. Therefore, to complete your request successfully, you must first pass the restrictions enforced by http.sys.

    Every HTTP request URL must not contain any character from the ASCII range 0x00 to 0x1F, because they are not printable. These are invalid URL characters as defined in RFC 2396 of the IETF. But a question may arise: how is it possible to send an unprintable character at all? The answer: by sending the request from an application in binary format.

    Another restriction is on the size of the request. The combined protocol, server name, query string information and headers sent along with the request must not exceed 16KB, and no individual header may exceed 16KB.

    Any individual path segment (the portion of the URL that does not include the protocol, server name, and query string; for example, in http://a/b/c?d=e, b and c are individual path segments) must not contain more than 260 characters. Also, http.sys disallows URLs that have more than 255 path segments.

    If any of the above rules is not followed, you will get a 400 Bad Request exception. The reason for these restrictions is that hack attacks against web servers often involve encoding the URL with different character representations.

    You can change the default behavior enforced by http.sys using some registry switches present at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters.

    ASP.NET Restrictions:
    After passing the restrictions enforced by the kernel-mode http.sys, the request is handed off to IIS and then to the ASP.NET engine, where it has to pass some further restrictions from ASP.NET in order to complete successfully.

    ASP.NET only allows URL path lengths of up to 260 characters (the path only; for example, in http://a/b/c/d, the path is from a to d). This means that if you have a long path containing 261 characters, you will get the Bad Request exception. This is due to the NTFS file-path limit.

    Another restriction concerns which characters can be used in the URL path portion. You can use any characters except a set of invalid path characters, some of which are <, >, *, %, &, :, \ and ?.
    To confirm this, right-click in Solution Explorer, add a new folder, and try to name it with any of the above characters; you will get the message "Files or folders cannot be empty strings nor they contain only '.' or have any of the following characters....."

    To check the above situation, I created a Web Application and put Default.aspx inside an A%A folder (created from Windows Explorer), then navigated to http://localhost:1234/A%25A/Default.aspx. The response I got from the server was the Bad Request exception. The reason is that %25 is the % character, which is an invalid URL path character in ASP.NET. However, you can use these characters in the query string. The reason for these restrictions is, again, security: for example, with the help of % you can double-encode the URL path portion, and : is used to get some specific resource from the server.

    New ASP.NET 4 Features:
    It is worth discussing the new ASP.NET 4 features that put some control in the hands of the developer. Previously you were restricted to a 260-character path length and restricted from using certain characters, meaning those characters could not become part of a URL path segment. Now you can configure maxRequestPathLength and maxQueryStringLength to allow longer or shorter paths and query strings. You can also customize the set of invalid characters using requestPathInvalidChars, under the httpRuntime element (a configuration sketch appears at the end of this article). This may be good news for anyone who needs to use, in their application, one of the characters that was invalid in previous versions. You can find further detail about the new ASP.NET URL features here.

    Note that the above new ASP.NET settings will not affect http.sys. This means that you must pass the restrictions of http.sys before ASP.NET ever comes into action. Note also that the http.sys restriction applies to individual path segments, while maxRequestPathLength applies to the complete path (the portion of the URL that does not include the protocol, server name, and query string). For example, if the URL is http://a/b/c/d?e=f, then maxRequestPathLength takes a/b/c/d into account, while http.sys considers a, b, and c individually.

    Summary:
    Hopefully this helps you understand how some of these initial security features come into play, but I also recommend that you read (at least the first chapter, called Initial Phases of a Web Request, of) Professional ASP.NET 2.0 Security, Membership, and Role Management by Stefan Schackow. This is really a nice book.
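    The configuration sketch promised above shows the three ASP.NET 4 httpRuntime attributes discussed. The values are illustrative only: here % is omitted from the invalid-character list so that a path like the /A%25A/Default.aspx example would be accepted by ASP.NET (http.sys permitting). Loosening these checks has security implications, so treat this as a sketch rather than a recommendation:

    <configuration>
      <system.web>
        <!-- Longer limits than the 260-character default, and a custom
             invalid-character list with % removed (illustrative values). -->
        <httpRuntime maxRequestPathLength="512"
                     maxQueryStringLength="4096"
                     requestPathInvalidChars="&lt;,&gt;,*,&amp;,:,\,?" />
      </system.web>
    </configuration>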

    Read the article

  • Customize your icons in Windows 7 and Vista

    - by Matthew Guay
    Want to change out the icons on your desktop and more?  Personalizing your icons is a great way to make your PC uniquely yours, and today we show you how to grab unique icons and change the default Windows icons to be your own.

    Change the icon for Computer, Recycle Bin, Network, and your User folder
    Right-click on the desktop, and select Personalize. Now, click the “Change desktop icons” link on the left sidebar in the Personalization window. The window looks slightly different in Windows Vista, but the link is the same. Select the icon you wish to change, and click the Change Icon button.  In Windows 7, you will also notice a box to choose whether or not to allow themes to change icons, and you can uncheck it if you don’t want themes to change your icon settings. You can select one of the other included icons, or click Browse to find the icon you want.  Click Ok when you are finished.

    Change Folder icons
    You can easily change the icon on most folders in Windows Vista and 7.  Simply right-click on the folder and select Properties. Click the Customize tab, and then click the Change Icon button.  This will open the standard dialog to change your icon, so proceed as normal. This basically just creates a hidden desktop.ini file in the folder containing the following or similar data:

    [.ShellClassInfo]
    IconFile=%SystemRoot%\system32\SHELL32.dll
    IconIndex=20

    You could manually create or edit the file if you choose, instead of using the dialogs. Simply create a new text file named desktop.ini with this same information, or edit the existing one.  Change the IconFile line to the location of your icon. If you are pointing to a .ico file, you should change the IconIndex line to 0 instead. Note that this isn’t available for all folders; for instance, you can’t use this to change the icon for the Windows folder.

    In Windows 7, please note that you cannot change the icon of a folder inside a library.  So if you are browsing your Documents library and would like to change an icon in that folder, right-click on it and select Open folder location.  Now you can change the icon as above. And if you would like to change a Library’s icon itself, then check out this tutorial: Change Your Windows 7 Library Icons the Easy Way

    Change the icon of any file type
    Want to make your files easier to tell apart?  Check out our tutorial on how to simply do this: Change a File Type’s Icon in Windows 7

    Change the icon of any Application Shortcut
    To change the icon of a shortcut on your desktop, start menu, or in Explorer, simply right-click on the icon and select Properties. In the Shortcut tab, click the Change Icon button.  Now choose one of the other available icons or click Browse to find the icon you want.

    Change Icons of Running Programs in the Windows 7 taskbar
    If your computer is running Windows 7, you can customize the icon of any program running in the taskbar!  This only works on applications that are running but not pinned to the taskbar, so if you want to customize a pinned icon you may want to unpin it before customizing it.  But the interesting thing about this trick is that it can customize any icon running in the taskbar, including things like Control Panel! Right-click, or click and push up, to open the jumplist on the icon, and then right-click on the program’s name and select Properties.  Here we are customizing Control Panel, but you can do this on any application icon. Now, click Change Icon as usual.
    Select an icon you want (we switched the Control Panel icon to the Security Shield), or click Browse to find another icon.  Click Ok when finished, and then close the application window. The next time you open the program (or Control Panel in our example), you will notice your new icon on its taskbar button. Please note that this only works on applications that are currently running and are not pinned to the taskbar.  Strangely, if the application is pinned to the taskbar, you can still click Properties and change the icon, but the change will not show up.

    Change the icon on any Drive on your Computer
    You can easily change the icon on your internal hard drives and portable drives with the free Drive Icon Changer application.  Simply download and unzip the file (link below), and then run the application as administrator by right-clicking on the icon and selecting “Run as administrator”. Now, select the drive that you want to change the icon of, and select your desired icon file. Click Save, and Drive Icon Changer will let you know that the icon has been changed successfully. You will then need to reboot your computer to complete the changes.  Simply click Yes to reboot. Now our drive icon is changed from the default image to a laptop icon we chose! You can do this to any drive in your computer, or to removable drives such as USB flash drives.  When you change these drives' icons, the new icon will appear on any computer you insert the drive into.  Also, if you wish to remove the icon change, simply run the Drive Icon Changer again and remove the icon path. Download Drive Icon Changer

    This application actually simply creates or edits a hidden Autorun.inf file at the top of your drive.  You can edit or create the file yourself by hand if you’d like; simply include the following information in the file, and save it in the top directory of your drive:

    [autorun]
    ICON=[path of your icon]

    Remove Arrow from shortcut icons
    Many people don’t like the arrow on the shortcut icon, and there are two easy ways to remove it. If you’re running the 32 bit version of Windows Vista or 7, simply use the Vista Shortcut Overlay Remover. If your computer is running the 64 bit version of Windows Vista or 7, use the Ultimate Windows Tweaker instead.  Simply select the Additional Tweaks section, and check “Remove arrows from Shortcut Icons.” For more info and download links, check out this article: Disable Shortcut Icon Arrow Overlay in Windows 7 or Vista

    Closing
    This gives you a lot of ways to customize almost any icon on your computer, so you can make it look just like you want it to.  Stay tuned for more great desktop customization articles from How-to Geek!

    Read the article

  • SQLAuthority News – Memories at Anniversary of SQL Wait Stats Book

    - by pinaldave
SQL Wait Stats

About a year ago, I had a very proud moment: I published my second book, SQL Server Wait Stats, with me as the primary author. It has been a long journey since then. The book got a great response and was widely accepted in the community. It was the first book of its kind written specifically on wait stats and performance. The book was based on my earlier month-long series written on the same subject, SQL Server Wait Stats. Today, on the anniversary of the book, lots of things come to mind; let me share a few here.

Idea behind the Blog Series

A very common question I receive is why I wrote a 30-day series on wait stats. There were two reasons for it. 1) I have been working with SQL Server for a long time and have troubleshot hundreds of SQL Servers for performance issues. It was a great experience, it taught me many new things, and I always documented what I learned. After a while I found that I could rely completely on my own notes when troubleshooting any server. Right then I decided to document my experience for the community. 2) While working with wait stats, there were a few things I thought I knew well because they seemed to work. However, there was always a fear in the back of my mind: what if what I believed was incorrect and I had been on the wrong path all along? There was only one way to get it validated: put my understanding in front of the community and ask for help in improving it. It worked, and it worked beautifully. I received plenty of conversations, emails, and comments, and I refined my content based on them to make it more relevant and accurate. I guess these are the two major reasons I began my journey of writing the Wait Stats blog series.

Idea behind the Book

After writing the blog series, I kept receiving requests to convert it into an eBook or a proper book, since reading blog posts is great but does not give a comprehensive understanding of the subject. The most common feedback from readers new to the subject was that they would prefer to read it in a structured format. After hearing this feedback for more than 4 months, I decided to write a book based on the blog posts. When I envisioned the book, I wanted to make sure it addressed wait stats concepts from the fundamentals and filled the gaps in the blog posts I wrote earlier.

Rick Morelan and the Joes 2 Pros Team

I must acknowledge my co-author Rick Morelan for his unconditional support in writing this book. I had already authored one book before this one, and the experience of writing a book is out of this world. Writing blog posts is much, much easier than writing books; the effort it takes to write a book is 100 times greater, even when the content is ready. I could not have done it by myself without the tremendous support of my co-author and the editorial team. We spent days and days researching and discussing the various concepts covered in the book. When we were in doubt, we reached out to experts and also reproduced the scenarios in practice to validate the concepts and claims. After 3 months of continuous hard work, we were able to get this book out to the community.

September 1st – the Lucky Day

Well, we had to pick a day to publish the book. When the book was completed in the last week of August, we felt very glad. We had all worked hard, and holding a sample draft of the book felt like holding a newborn baby.
Every time one of my books is published, I feel the same joy I felt when my daughter was born. The feeling of holding a new book in hand is (almost) the same as holding a newborn baby. I am excited. For me, September 1st has been the luckiest day of my life: my daughter Shaivi was born on September 1st. Since then, every September 1st has been an excellent day and has taken me to the next step in life. I believe anything and everything I do on September 1st turns out to be successful and blessed. Rick and I finished the book in the last week of August. We sent it to the publisher (printer) and asked him to take the book live as soon as possible; we did not set any date, as we wanted the book out as fast as it could be. Interestingly enough, the publisher selected September 1st for publishing the book. He published it on September 1st, and I knew right then that this book would go to the next level.

Book Model – The Most Beautiful Girl

We were done with the book, but we had no budget left for marketing. Rick and I had a long conversation about how to spread the word so the book could reach many people. While we were talking about marketing, Rick came up with the idea that we should find the most beautiful girl around who would endorse our book and genuinely care about it. Rick gave me the task of finding her, and it was not a difficult task for me: I am a father, and the most beautiful girl for me is my daughter. I just could not think of anyone else. I still do not know what Rick thought about this idea, but I had already made up my mind. You can see the detailed blog post here.

The Fun Experiments and Book Signing Events

We had lots of fun moments along the way with this book. We have actually given away more books for free than we have sold. We did book signing events and contests, and simply gave books away when we found people who could benefit from them. There was never an intention to make money and get rich; we just wanted more and more people to know about this new concept and learn from it. Today, when I look back at the earnings, we have not earned much if you talk about dollars. However, the best reward we have received is the satisfaction and love of the community. The emails and conversations we have received about this book so far number in the thousands. We had fun writing this book; it was indeed a very satisfying journey, and I have earned lots of friends while learning and exploring.

Availability

The book is one year old, but still very relevant when it comes to performance tuning. It is available at various online book stores. If you have read the book, do let me know what you think of it. Amazon | Kindle | Flipkart | Indiaplaza

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: About Me, Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority, SQLAuthority Book Review, T SQL, Technology
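(A footnote for readers who have never looked at wait stats: the raw data the book and blog series revolve around lives in SQL Server’s sys.dm_os_wait_stats DMV. The following is my own illustrative sketch, not an excerpt from the book, showing where a server spends most of its time waiting:

-- Top waits accumulated since the last SQL Server restart or stats clear.
-- The excluded wait types below are just a small sample of benign idle
-- waits; real-world filter lists are considerably longer.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms,
    wait_time_ms - signal_wait_time_ms AS resource_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;

The wait types with the highest accumulated wait_time_ms are usually the first place to look when tuning performance.)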

    Read the article

  • Xobni Free Powers Up Outlook’s Search and Contacts

    - by Matthew Guay
Want to find out more about your contacts, discover email trends, and even sync Yahoo! email accounts in Outlook?  Here’s how you can do all this and more with Xobni Free.

Email is one of the most important communications mediums today, but even with all of the advances in Outlook over the years, it can still be difficult to keep track of conversations, files, and contacts.  Xobni makes it easy by indexing your emails and organizing them by sender.  You can use its powerful search to quickly find any email, find related messages, and then view more information about that contact with information from social networks.  And, to top it off, it even lets you view your Yahoo! emails directly in Outlook without upgrading to a Yahoo! Plus account.  Xobni runs in Outlook 2003, 2007, and 2010, including the 64 bit version of Outlook 2010, and users of older versions will especially enjoy the new features Xobni brings for free.

Getting Started

Download the Xobni Free installer (link below), and run it to start the installation.  Make sure to exit Outlook before installing.  Xobni may need to download additional files, which may take a few moments. When the download is finished, proceed with the install as normal.  You can opt out of the Product Improvement Program at the end of the installation by unchecking the box.  Additionally, you are asked to share Xobni with your friends on social networks, but this is not required.

The next time you open Outlook, you’ll notice the new Xobni sidebar.  You can choose to watch an introduction video that will help you quickly get up to speed on how Xobni works. While this is playing, Xobni works on indexing your email in the background.  Once the first indexing is finished, click Let’s Go! to start using Xobni. Here’s how Xobni looks in Outlook 2010:

Advanced Email Information

Select an email, and you can now see lots of info about it in your new Xobni sidebar. On the top of the sidebar, select the graph icon to see when and how often you email with a contact.  Each contact is given an Xobni rank, so you can quickly see who you email the most. You can see all related emails sorted into conversations, and also all attachments in the conversation, not just those in this email. Xobni can also show you all scheduled appointments and links exchanged with a contact, but this is only available in the Plus version.  If you’d rather not see the tab for a feature you can’t use, click Don’t show this tab to banish it from Xobni for good. Searching emails from the Xobni toolbar is very fast, and you can preview a message by simply hovering over it in the search pane.

Get More Information About Your Contacts

Xobni’s coolest feature is its social integration.  Whenever you select an email, you may see a brief bio, picture, and more, all pulled from social networks. Select one of the tabs to find more information.  You may need to log in to view information on your contacts from certain networks. The Twitter tab lets you see recent tweets.  Xobni will search for related Twitter accounts and ask you to confirm whether its choice is correct. Now you can see this contact’s recent tweets directly from Outlook. The Hoovers tab can give you interesting information about the businesses you’re in contact with. If the information isn’t correct, you can edit it and add your own information.  Click the Edit button, and then add any information you want. You can also remove a network you don’t wish to see.
Right-click on the network tabs, select Manage Extensions, and uncheck any you don’t want to see. But sometimes online contact just doesn’t cut it.  For those times, click the orange folder button to request a contact’s phone number or schedule a time with them. This will open a new email message, ready to send, requesting the information you want.  Edit it as you please, and send.

Add Yahoo! Email to Outlook for Free

One of Xobni’s neatest features is that it lets you add your Yahoo! email account to Outlook for free.  Click the gear icon at the bottom of the Xobni sidebar and select Options to set it up. Select the Integration tab, and click Enable to add Yahoo! mail to Xobni. Sign in with your Yahoo! account, and make sure to check the Keep me signed in box. Note that you may have to sign in again every two weeks to keep your Yahoo! account connected.  Select I agree to finish setting it up. Xobni will now download and index your recent Yahoo! mail. Your Yahoo! messages will only show up in the Xobni sidebar.  Whenever you select a contact, you will see related messages from your Yahoo! account as well.  Or, you can search from the sidebar to find individual messages from your Yahoo! account.  Note the Y! logo beside Yahoo! messages. Select a message to read it in the sidebar.  You can open the email in Yahoo! in your browser, or reply to it using your default Outlook email account. If you have many older messages in your Yahoo! account, make sure to go back to the Integration tab and select Index Yahoo! Mail to index all of your emails.

Conclusion

Xobni is a great tool to help you get more out of your daily Outlook experience.  Whether you struggle to find attachments a coworker sent you or want to access Yahoo! email from Outlook, Xobni might be the perfect tool for you.  And with the extra things you learn about your contacts through the social network integration, you might boost your own PR skills without even trying!

Link

Download Xobni

    Read the article

  • SQLAuthority News – Pluralsight Course Review – Practices for Software Startups – Part 2 of 2

    - by pinaldave
This is the second part of a two-part series on the Practices for Software Startups Pluralsight course. Please read the first part of this series over here. The course is written by Stephen Forte (Blog | Twitter). Stephen Forte is the Chief Strategy Officer of the venture-backed company Telerik.

Personal Learning Schedule

After these three sessions it was 6:30 am and time to write my own blog.  But for the rest of the day, I kept thinking about the course and wanted to go back and finish it.  I was wishing I had woken up at 3 am so I could have finished it all in one go.  All day long I kept digesting what I had learned.  At 10 pm, after my daughter had gone to bed, I signed on again.  I was not disappointed by the long wait.  As I mentioned before, Stephen has started four to six companies, and all of them are very successful today. Here is the video I promised yesterday – it discusses the importance of Right Sizing Your Startup.

The Heartbeat of a Startup – Technology

Stephen has condensed all the technology material into one 30-minute session.  He discussed how to start your project, how to deal with opinions, and how to deal with multiple ideas – every startup has multiple directions it can go. He spent a lot of time emphasizing how to decide which direction to take and which will be the best for you.  He called it a continuous development cycle. One of the biggest hazards for a startup company is one person deciding the direction the company will go, until down the road another team member announces that there is a glitch in their part of the work and that everyone will have to start over.  Even though a team of two or five people can move quickly, often the decision has gone on too long and cannot be easily undone.  Stephen used an example from his own life: he was biased toward one type of technology, and his teammate toward another.  They opted for his teammate’s choice, and in the end it was a good decision, even though Stephen was unfamiliar with that particular technology.  He argues that technology should not be a barrier to progress, and that you cannot rely on your experience alone.  This really spoke to me, because I am a big fan of SQL, but I know there is more out there, and I should be more open to it.  My thanks to Stephen – I learned something in this module besides startups.

Money, Success and Epic Win!

The longest, but most interesting, module was on funding your startup.  You need to fund the startup right at the very beginning; if it is not done right, you will run into trouble.  The good news is that a few years ago startups required a lot more money – think millions of dollars – but now startups can get off the ground for thousands.  Stephen used an example of a company that years ago would have needed a million dollars, but today could be started for $600.  It is true that things have changed, but you still need money.  For $600 you can start small and add dynamically, as needed.  But the truth is that whether you have $600, $6,000, or $6 million, it will be spent.  Don’t think of it as trying to save money; think of it as investing in your future.  You will need money, and you will need to (quickly) decide what you do with it: shares, stakeholders, investing in a team, hiring a CEO.  This is so important, because once you have money and start the company, the company IS your money.  It is your biggest currency – having a percentage of ownership in the company.
Investors will want percentages as repayment for their investment, and they will want a say in the business as well.  You will have to decide how far you will dilute your shares and how the company will be divided, if at all.  If you don’t plan in advance, you may find that after taking on three or four investors, you are suddenly the minority owner of your own dream.  You need to understand funding carefully.  This single module is worth all the money you would have spent on the whole course.  I encourage everyone to listen to this module even if they don’t watch any of the others.

Press End to Start the Game – Exits!

The final module is on exit strategies.  You did all this work and dealt with all the political and legal issues – what are you going to get out of it? The answer is simple: money.  Maybe you want your company to be bought out, or for your talent to bring you a profit.  Many options are available: you can sell the company to someone and still head it, or you could sell and keep working as an employee while no longer owning the company.  There are many exit strategies, and this is where all your hard work comes into play.  It is important not to feel fooled at any step.  So many good ideas end up in the garbage because of poor planning; if you find yourself successful, you don’t want to blow it at this step!  The exit is important.  I thought this aspect of the course was completely unique, and I loved Stephen’s point of view.  I was lost deep in thought after this module ended.  I actually took two hours’ worth of notes on this section alone – and it was only a three-hour course.  I am planning to attend this course one more time next week, just to catch all the small bits of wisdom I’m sure I missed.

Thank you, Stephen, for sharing your real-world experience with us!  I recommend that everyone attend this course, even if they don’t want to begin their own startup company. It was indeed a long day for me. Do not forget to read part 1 of this story and attend the Practices for Software Startups Pluralsight course.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

< Previous Page | 165 166 167 168 169 170 171 172 173 174 175 176  | Next Page >